Instruction: Does the aberrant expression of CD2 and CD25 by skin mast cells truly correlate with systemic involvement in patients presenting with mastocytosis in the skin?
Abstracts:
abstract_id: PUBMED:25402852
Does the aberrant expression of CD2 and CD25 by skin mast cells truly correlate with systemic involvement in patients presenting with mastocytosis in the skin? Background: Neoplastic mast cells involving the bone marrow (BMMCs) of patients with mastocytosis display an aberrant expression of CD25 and/or CD2 antigens. The aim of this study was to determine the frequency of CD2 and CD25 expression on skin mast cells (sMCs) of patients with mastocytosis in the skin at the early stage of the disease. Furthermore, the usefulness of the phenotypic profile of sMCs for the diagnosis of systemic mastocytosis (SM) was evaluated.
Methods: The 52 adults included in the study were diagnosed with mastocytosis strictly according to the criteria of the World Health Organization. CD117, CD2 and CD25 antigen expression on sMCs was detected by immunohistochemistry. The presence of the KIT D816V mutation in the BM was analyzed using allele-specific PCR.
Results: The presence of CD2- or CD25-positive sMCs was detected in 57.1% of cutaneous mastocytosis (CM) and 90.3% of SM cases (p = 0.008). In all mastocytosis patients, CD2 expression on sMCs was more frequent than CD25 expression (67.3 and 38.5%, respectively). Moreover, CD2 expression on sMCs was more frequent in SM than in CM cases (p = 0.02). The presence of one of the aberrant sMC antigens was detected in 84.2% of patients with the KIT D816V mutation in the BM. A positive correlation between densities of CD25- and CD117-positive sMCs was found in SM patients (r = 0.46, p = 0.009).
Conclusions: Although sMCs displayed immunoreactivity for one of the neoplastic antigens in the majority of SM patients, the aberrant CD2 and/or CD25 expression on sMCs is not as indicative of SM as the BMMC immunophenotype.
abstract_id: PUBMED:33327223
A case report on the concurrent occurrence of systemic mastocytosis and myeloid sarcoma presenting with extensive skin involvement, and the results of a genetic study. Introduction: Systemic mastocytosis is a rare disease due to mast cell accumulation in various extracutaneous sites. Systemic mastocytosis with an associated clonal hematologic non-MC lineage disease is the second most common subtype of systemic mastocytosis. The most common mutation associated with both systemic mastocytosis and myeloid sarcoma is a mutation in KIT. Here, we identified novel co-occurring KIT D816V and ARID1A G1254S mutations in systemic mastocytosis with myeloid sarcoma.
Patient Concerns: A 33-year-old male patient presented with multiple skin lesions of 10 years' duration. Symptoms accelerated in 2017, with decreased body weight. Physical examination revealed enlarged lymph nodes in the neck, axillae and inguinal regions; conjunctival hemorrhage; and gingival hyperplasia. Skin biopsy showed mast cell infiltration. Flow cytometry detected CD2-, CD25- and CD117-positive cells in lymph nodes. The codon 816 KIT mutation D816V and the codon 1254 ARID1A mutation G1254S were found in peripheral blood. MPO-, CD117- and CD68-positive cells in lymph nodes indicated co-existing myeloid sarcoma.
Diagnosis: Systemic mastocytosis with an associated clonal hematologic non-MC lineage disease (myeloid sarcoma).
Interventions: Cytarabine and daunorubicin for myeloid sarcoma and dasatinib for systemic mastocytosis were initiated. Antihistamine and anti-leukotriene therapy was used to prevent NSAID-induced shock. Platelets were transfused to treat bone marrow suppression.
Outcomes: The patient was discharged after recovering from bone marrow suppression. Dasatinib was continued on an outpatient basis.
Conclusion: This is the first reported case of a patient with systemic mastocytosis and myeloid sarcoma presenting simultaneously with extensive skin involvement. The co-occurring KIT and ARID1A mutations emphasize the importance of considering the possibility of multiple tumors in patients with multiple mutations. In addition, cysteinyl leukotriene receptor antagonists should always be used to prevent anaphylactic shock due to mast cell activation.
abstract_id: PUBMED:26100086
Clinical, immunophenotypic, and molecular characteristics of well-differentiated systemic mastocytosis. Background: Well-differentiated systemic mastocytosis (WDSM) is a rare variant of systemic mastocytosis (SM) characterized by bone marrow (BM) infiltration by mature-appearing mast cells (MCs) often lacking exon 17 KIT mutations. Because of its rarity, the clinical and biological features of WDSM remain poorly defined.
Objective: We sought to determine the clinical, biological, and molecular features of a cohort of 33 patients with mastocytosis in the skin in association with BM infiltration by well-differentiated MCs and to establish potential diagnostic criteria for WDSM.
Methods: Thirty-three patients with mastocytosis in the skin plus BM aggregates of round, fully granulated MCs lacking strong CD25 and CD2 expression in association with clonal MC features were studied.
Results: Our cohort of patients showed female predominance (female/male ratio, 4:1) and childhood onset of the disease (91%) with frequent familial aggregation (39%). Skin involvement was heterogeneous, including maculopapular (82%), nodular (6%), and diffuse cutaneous (12%) mastocytosis. KIT mutations were detected in only 10 (30%) of 33 patients, including the KIT D816V (n = 5), K509I (n = 3), N819Y (n = 1), and I817V (n = 1) mutations. BM MCs displayed a unique immunophenotypic pattern consisting of increased light scatter features, overexpression of cytoplasmic carboxypeptidase, and aberrant expression of CD30, together with absent (79%) or low (21%) positivity for CD25, CD2, or both. Despite only 9 (27%) of 33 patients fulfilling the World Health Organization criteria for SM, our findings allowed us to establish the systemic nature of the disease, which fit with the definition of WDSM.
Conclusions: WDSM represents a rare clinically and molecularly heterogeneous variant of SM that requires unique diagnostic criteria to avoid a misdiagnosis of cutaneous mastocytosis per current World Health Organization criteria.
abstract_id: PUBMED:25952500
Transglutaminase 2 expressed in mast cells recruited into skin or bone marrow induces the development of pediatric mastocytosis. Background: Mastocytosis is characterized by a pathological increase in mast cells in organs such as skin and bone marrow. Transglutaminase 2 (TG2) expressed in mast cells contributes to allergic diseases, but its role in mastocytosis has not been investigated. This study aimed to investigate whether TG2 contributes to pediatric mastocytosis.
Methods: Serum, various skin tissues, or bone marrow (BM) biopsies and aspirates were obtained from normal pediatric controls or patients with indolent systemic mastocytosis (SM), mastocytoma, and urticaria pigmentosa (UP). Tryptase, individual cytokines, leukotriene C4 (LTC4), and TG2 activity in the serum were determined by enzyme-linked immunosorbent assay, the mast cell population by May-Grünwald-Giemsa staining, CD117 by immunofluorescence, cell surface molecules by Western blot, and colocalization of c-kit and TG2 or IL-10-expressing cells, CD25, and FOXP3 by immunohistochemistry.
Results: Infiltration of CD25(+) CD117(+) CD2(-) mast cells into the BM and scalp/trunk/ear dermis; expression of FcεRI, tryptase, c-kit, FOXP3, CCL2/CCR2, and vascular cell adhesion molecule-1; and colocalization of c-kit and TG2 were enhanced in patients' skin tissues or BM, particularly in SM, whereas colocalization of c-kit and IL-10-expressing cells was decreased vs. normal tissues. Amounts of LTC4 and inflammatory cytokines, expression of tryptase, and TG2 activity were increased in patients' serum, BM aspirates, or ear/scalp skin tissues, respectively, vs. normal persons, but the IL-10 level was decreased.
Conclusion: The data suggest that mast cells, recruited into the skin and BM by CCL2/CCR2, may induce the development of pediatric mastocytosis by reducing IL-10 through upregulation of TG2 activity via the transcription factor nuclear factor-κB. Thus, TG2 may be useful in the diagnosis of pediatric mastocytosis, particularly SM.
abstract_id: PUBMED:38112220
No indication of aberrant neutrophil extracellular trap release in indolent or advanced systemic mastocytosis. In disease states with chronic inflammation, there is a crosstalk between mast cells and neutrophil granulocytes in the inflamed microenvironment, which may be potentiated by tryptase. In systemic mastocytosis (SM), mast cells are constitutively active and tryptase is elevated in blood. Mast cell activation in SM leads to symptoms from various organs depending on where the active mast cells reside, for example, palpitations, flushing, allergic symptoms including anaphylactic reactions, and osteoporosis. Whether neutrophil function is altered in SM is not well understood. In the current study, we assessed nucleosomal citrullinated histone H3 (H3Cit-DNA) as a proxy for neutrophil extracellular trap release in plasma from 55 patients with indolent and advanced SM. We observed a strong trend towards a correlation between leukocyte count, eosinophil count and neutrophil count and H3Cit-DNA levels in patients with advanced SM but not in indolent SM; however, there were no differences in H3Cit-DNA levels between SM patients and healthy controls. H3Cit-DNA levels did not correlate with SM disease burden, tryptase levels, history of anaphylaxis or presence of cutaneous mastocytosis; thus, there is no evidence of a general neutrophil extracellular trap release in SM. Interestingly, H3Cit-DNA levels and leukocyte counts were elevated in a subgroup of SM patients with aberrant mast cell CD2 expression, which warrants further investigation. In conclusion, we found no evidence of a global increase in neutrophil extracellular trap release in SM.
abstract_id: PUBMED:28439288
Systemic mastocytosis with KIT V560G mutation presenting as recurrent episodes of vascular collapse: response to disodium cromoglycate and disease outcome. Background: Mastocytoses are rare diseases characterized by an accumulation of clonal mast cells (MCs) in one or multiple organs or tissues. Patients with systemic mastocytosis (SM), whose MCs frequently harbor the activating D816V KIT mutation, may have indolent to aggressive disease, and they may experience MC mediator-related symptoms. Indolent SM with recurrent anaphylaxis or vascular collapse in the absence of skin lesions, ISMs(-), is a specific subtype of indolent SM (ISM), and this clonal MC activation disorder represents a significant fraction of all MC activation syndromes. The V560G KIT mutation is extremely rare in patients with SM and its biological and prognostic impact remains unknown.
Case Presentation: A 15-year-old boy was referred to our hospital because of repeated episodes of flushing, hypotension and syncope since the age of 3 years, preceded by skin lesions compatible with mastocytosis on histopathology that had disappeared in early childhood. The diagnosis of ISM, more precisely the ISMs(-) variant, was confirmed based on the clinical manifestations together with increased baseline serum tryptase levels and the presence of morphologically atypical, mature-appearing (CD117+high, FcεRI+), phenotypically aberrant (CD2+, CD25+) MCs expressing activation-associated markers (CD63, CD69) in the bone marrow. Molecular genetic studies revealed the presence of the KIT V560G mutation in bone marrow MCs, but not in other bone marrow cells, whereas screening for mutations in codon 816 of KIT was negative. The patient was treated with oral disodium cromoglycate and the disease had a favorable outcome over an eleven-year follow-up period, during which progressively lower serum tryptase levels together with the full disappearance of all clinical manifestations were observed.
Conclusions: To the best of our knowledge, this is the first report of a patient with ISM whose bone marrow MCs carry the activating KIT V560G mutation, manifesting as recurrent spontaneous episodes of flushing and vascular collapse in the absence of skin lesions at the time of diagnosis, and in whom disodium cromoglycate led to long-term clinical remission.
abstract_id: PUBMED:12588358
Differential expression of CD2 on neoplastic mast cells in patients with systemic mast cell disease with and without an associated clonal haematological disorder. Recently, aberrant coexpression of CD2 and CD25 has been reported to reliably distinguish neoplastic mast cells from normal or so-called reactive mast cells. Such expression is included in the consensus diagnostic criteria for systemic mast cell disease (SMCD). In our study of patients with SMCD, we found CD2 expression to be more prevalent on mast cells from patients without an associated haematological disorder (P = 0.04). Furthermore, no correlation was found between mast cell CD2 expression and other clinicopathological features in these patients.
abstract_id: PUBMED:22372203
Mastocytosis and related disorders. Mastocytosis represents a heterogeneous group of disorders characterized by an abnormal accumulation of mast cells in one or more organ systems. Mastocytosis is further divided into different subtypes according to the sites of involvement, laboratory findings, and degree of organ impairment. Cutaneous mastocytosis is diagnosed in the presence of skin involvement and absence of extracutaneous disease, and is most commonly seen in the pediatric population. Systemic mastocytosis, the disease form most commonly seen in adults, is characterized by the presence of multifocal, compact (dense) mast cell aggregates in the bone marrow or other extracutaneous organs. The mast cells may display atypical, often spindle-shape morphology and/or aberrant CD2 and/or CD25 expression. Elevation of serum tryptase and/or presence of KIT D816V mutation are other common findings. Systemic mastocytosis is further divided into different subtypes based on a combination of clinical features and laboratory findings. Recent studies have indicated that CD30 is frequently expressed in aggressive systemic mastocytosis and mast cell leukemia but infrequently in indolent systemic mastocytosis, and may be a useful marker for distinguishing these subtypes of systemic mastocytosis from one another. A group of related myeloid disorders, collectively termed myelomastocytic overlap syndromes, may pose diagnostic difficulty because of their significant clinical and pathologic overlap with systemic mastocytosis, and these will also be discussed in this review.
abstract_id: PUBMED:35596520
Histopathological characteristics are instrumental to distinguish monomorphic from polymorphic maculopapular cutaneous mastocytosis in children. Background: Mastocytosis is characterized by the accumulation of mast cells (MCs) in the skin or other organs, and can manifest at any age. A significant number of paediatric mastocytosis cases persist after puberty. In particular, monomorphic maculopapular cutaneous mastocytosis (mMPCM) is often persistent and associated with systemic mastocytosis. However, clinical differentiation of mMPCM from polymorphic (p)MPCM can be difficult.
Aim: To identify histopathological features that can help to distinguish mMPCM from other subtypes of paediatric mastocytosis.
Methods: This was a retrospective study using skin biopsies from patients with any subtype of mastocytosis. The localization and density of the MC infiltrate, MC morphology and expression of aberrant markers were evaluated and correlated with clinical characteristics.
Results: In total, 33 biopsies were available for evaluation from 26 children (10 with mMPCM, 5 with mastocytoma, 3 with diffuse cutaneous mastocytosis (DCM), and 8 with pMPCM) and 7 adults with MPCM. The MC number was increased in all patients, but was higher in children than adults (P < 0.01). The presence of mMPCM was associated with sparing of the papillary dermis from MC infiltration, whereas MC density in the papillary dermis was highest in pMPCM and DCM (P < 0.01). The positive predictive value of the presence of a reticular MC infiltrate for mMPCM was 72.7% (95% CI 51.4-87.0), and the negative predictive value was 83.3% (95% CI 42.2-97.2). There were no relevant differences in the expression of CD2, CD25 or CD30 between the different subtypes.
Conclusion: Skin histopathology might enhance the phenotypical differentiation of mMPCM from other subtypes in children, thereby increasing the accuracy of prognosis.
abstract_id: PUBMED:26525770
Expression of CD2 and CD25 on mast cell populations can be seen outside the setting of systemic mastocytosis. Background: Systemic mastocytosis (SM) is a diagnosis made using clinical, laboratory, and histologic parameters. Aberrant CD2 and/or CD25 expression on mast cells provides one minor criterion for a diagnosis of SM. To validate a tube (CD45/CD117/CD2/CD25) for mast cell evaluation, flow cytometry (FC) was performed on residual material from marrow aspirate samples submitted to the hematopathology laboratory.
Methods: Samples evaluated (n = 98) had no clinical or morphologic suspicion for SM. Samples were excluded if there was history of a myeloid stem cell neoplasm. Ten documented cases of SM were evaluated for comparison.
Results: Among cases without a history of SM, 17.3% (n = 17) showed expression of CD2 and/or CD25 on ≥10% of the mast cell population (CD25 alone in 14 cases, CD2 alone in 2 cases, both in one case), while 82.6% (n = 81) showed no expression of these antigens. The percentages of mast cells showing aberrant CD2 and CD25 expression ranged from 12.1% to 98.8% and from 22.2% to 95.7%, respectively. Interestingly, all of the cases with evidence of aberrant antigen expression on mast cells were collected post-therapy, while 22.1% of the negative samples were collected pre-therapy. A cut-off of 60% CD25 expression on mast cells identified all cases of SM while minimizing false positives.
Conclusions: These findings demonstrate that aberrant expression of CD2 and/or CD25 may be seen on mast cells outside the setting of SM. The data suggest that this phenomenon may be seen more commonly following chemotherapy and that FC of mast cells should be interpreted with caution in the post-chemotherapy setting.
Answer: The aberrant expression of CD2 and CD25 by skin mast cells (sMCs) is not as indicative of systemic mastocytosis (SM) as the bone marrow mast cell (BMMC) immunophenotype. While sMCs displayed immunoreactivity for one of the neoplastic antigens in the majority of SM patients, the presence of CD2- or CD25-positive sMCs was detected in 57.1% of cutaneous mastocytosis (CM) and 90.3% of SM cases, indicating a higher frequency in SM. However, CD2 expression on sMCs was more frequent than CD25 expression (67.3% and 38.5%, respectively), and more frequent in SM than in CM cases. The presence of one of the aberrant sMC antigens was detected in 84.2% of patients with the KIT D816V mutation in the BM. A positive correlation between densities of CD25- and CD117-positive sMCs was found in SM patients. Despite these findings, the study concludes that the aberrant expression of CD2 and/or CD25 on sMCs is not as indicative of SM as the BMMC immunophenotype (PUBMED:25402852).
In another study, a patient with systemic mastocytosis and myeloid sarcoma simultaneously presenting extensive skin involvements was found to have mutations of Kit and Arid1a. This case highlights the importance of considering the possibility of various tumors occurring in patients with multiple mutations (PUBMED:33327223).
Furthermore, well-differentiated systemic mastocytosis (WDSM), a rare variant of SM, is characterized by bone marrow infiltration by mature-appearing mast cells often lacking exon 17 KIT mutations. In this cohort, skin involvement was heterogeneous and KIT mutations were detected in only 30% of patients. BM MCs displayed a unique immunophenotypic pattern with absent (79%) or low (21%) positivity for CD25, CD2, or both, despite only 27% of patients fulfilling the World Health Organization criteria for SM (PUBMED:26100086).
In summary, while there is a correlation between the aberrant expression of CD2 and CD25 on sMCs and systemic involvement in mastocytosis, it is not definitive, and other factors such as BMMC immunophenotype and the presence of KIT mutations are also important in diagnosing SM.
Instruction: Is daily chest X-ray necessary after lung resection?
Abstracts:
abstract_id: PUBMED:33517491
Chest X-ray in suspected lung cancer is harmful. Objectives: The aim of this study was to analyse the use of the chest radiograph (CXR) as the first-line investigation in primary care patients with suspected lung cancer.
Methods: Of 16,945 primary care referral CXRs (June 2018 to May 2019), 1,488 were referred for suspected lung cancer. CXRs were coded as follows: CX1, normal but a CT scan is recommended to exclude malignancy; CX2, alternative diagnosis; or CX3, suspicious for cancer. Kaplan-Meier survival analysis was undertaken by stratifying patients according to their CX code.
Results: In the study period, there were 101 lung cancer diagnoses via a primary care CXR pathway. Only 10% of patients with a normal CXR (CX1) underwent subsequent CT and there was a significant delay in lung cancer diagnosis in these patients (p < 0.001). Lung cancer was diagnosed at an advanced stage in 50% of CX1 patients, 38% of CX2 patients and 57% of CX3 patients (p = 0.26). There was no survival difference between CX codes (p = 0.42).
Conclusion: Chest radiography in the investigation of patients with suspected lung cancer may be harmful. This strategy may falsely reassure in the case of a normal CXR and prioritises resources to advanced disease.
Key Points: • Half of all lung cancer diagnoses in a 1-year period are first investigated with a chest X-ray. • A normal chest X-ray report leads to a significant delay in the diagnosis of lung cancer. • The majority of patients with a normal or abnormal chest X-ray have advanced disease at diagnosis and there is no difference in survival outcomes based on the chest X-ray findings.
abstract_id: PUBMED:31666847
Effectiveness of Lung Ultrasound in Comparison with Chest X-Ray in Diagnosis of Lung Consolidation. Background: Lung ultrasound (US) is an available and inexpensive tool for the diagnosis of community-acquired pneumonia (CAP); it has no radiation hazards and can be used easily.
Aim: To evaluate the efficacy of lung ultrasound in the diagnosis and follow-up of CAP.
Patients And Methods: 100 patients aged 40 to 63 years (mean age 52.3 ± 10 years) admitted to the Critical Care Department, Cairo University with a clinical picture of CAP. Lung US was performed for all patients initially, and then a plain chest X-ray (CXR) was performed. Another lung ultrasound was performed on the 10th day after admission.
Results: The initial chest X-ray correlated with the initial chest ultrasound examination in CAP diagnosis (R-value = 0.629, P < 0.001). Cohen's κ was run to determine whether there was agreement between the findings of the initial chest X-ray and those of the initial chest ultrasound in CAP diagnosis. A moderate agreement was found, with κ = 0.567 (95% CI, 0.422 to 0.712) and P < 0.001. Upon initial examination, the CXR diagnosed CAP in 48.0% of patients, while lung US diagnosed the disease in 70% of patients. Moreover, lung US was more sensitive than CXR (P-value < 0.001). Compared to the accuracy of computed tomography (CT) of the chest (100%), which is the gold standard for CAP diagnosis, the accuracy of lung US was 95.0%, while the accuracy of CXR was 81.0%.
Conclusion: This study proved the effectiveness of lung ultrasound in CAP diagnosis.
abstract_id: PUBMED:35572863
Uncertainty analysis of chest X-ray lung height measurements and size matching for lung transplantation. Background: Errors in measuring chest X-ray (CXR) lung heights could contribute to the occurrence of size-mismatched lung transplant procedures.
Methods: We first used Bland-Altman analysis for repeated measures to evaluate contributors to measurement error of chest X-ray lung height. We then applied error propagation theory to assess the impact of measurement error on size matching for lung transplantation.
Results: A total of 387 chest X-rays from twenty-five donors and twenty-five recipients were measured by two raters. Individual standard deviations for lung height differences were independent of age, sex, donor vs. recipient status, diagnostic group and race/ethnicity, and all were pooled for analysis. Bias between raters was 0.27 cm (±0.03) and 0.22 cm (±0.06) for the right and left lung, respectively. Within-subject variability was the biggest contributor to measurement error, 2.76 cm (±0.06) and 2.78 cm (±0.2) for the right and left lung height, respectively. A height difference of 4.4 cm or more (95% CI: ±4.2, ±4.6 cm) between the donor and the recipient right lung height has to be accepted to ensure matching for at least 95% of patients with the same true lung height. This difference decreases to ±1.1 cm (95% CI: ±0.9, ±1.3 cm) when the average of all available chest X-rays is used. The probability of matching a donor and a recipient decreases with increasing true lung height difference.
Conclusions: Individual chest X-ray lung heights are imprecise for the purpose of size matching in lung transplantation. Averaging chest X-ray lung heights reduced uncertainty.
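The scaling behind this conclusion can be illustrated with a back-of-the-envelope calculation: if each radiographic lung-height measurement carries an independent random error, the uncertainty of a donor-recipient height difference shrinks roughly as 1/sqrt(n) when each subject's height is the mean of n films. The sketch below is purely illustrative; the within-subject standard deviation used is a hypothetical value, not a figure reported in the abstract.

import math

sigma_w = 1.6  # hypothetical within-subject SD of a single lung-height measurement, in cm

def matching_margin(n_donor, n_recipient, sigma=sigma_w):
    # 95% margin for the donor-recipient lung-height difference when each subject's
    # height is the mean of n independent radiograph measurements
    sd_diff = math.sqrt(sigma**2 / n_donor + sigma**2 / n_recipient)
    return 1.96 * sd_diff

print(round(matching_margin(1, 1), 1))  # one film per subject: widest acceptance margin
print(round(matching_margin(4, 4), 1))  # averaging four films per subject roughly halves it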
abstract_id: PUBMED:36321523
Early-stage lung cancer associated with higher frequency of chest x-ray up to three years prior to diagnosis. Objectives: Symptom awareness campaigns have contributed to improved early detection of lung cancer. Previous research suggests that this may have been achieved partly by diagnosing lung cancer in those who were not experiencing symptoms of their cancer. This study aimed to explore the relationship between frequency of chest x-ray in the three years prior to diagnosis and stage at diagnosis.
Settings: Lung cancer service in a UK teaching hospital.
Participants: Patients diagnosed with lung cancer between 2010 and 2013 were identified. The number of chest x-rays for each patient in the three years prior to diagnosis was recorded. Statistical analysis of chest x-ray frequency comparing patients with early- and late-stage disease was performed.
Results: One-thousand seven-hundred fifty patients were included - 589 (33.7%) with stage I/II and 1,161 (66.3%) with stage III/IV disease. All patients had at least one chest x-ray in the six months prior to diagnosis. Those with early-stage disease had more chest x-rays in this period (1.32 vs 1.15 radiographs per patient, P = 0.009). In the period 36 months to six months prior to lung cancer diagnosis, this disparity was even greater (1.70 vs 0.92, radiographs per patient, P < 0.001).
Conclusions: Increased rates of chest x-ray are likely to contribute to earlier detection. Given the known symptom lead time many patients diagnosed through chest x-ray may not have been experiencing symptoms caused by their cancer. The number of chest x-rays performed could reflect patient and/or clinician behaviours in response to symptoms.
abstract_id: PUBMED:30427130
Comparison of lung ultrasound and chest X-ray findings in children with bronchiolitis. Respiratory syncytial virus is the main pathogen responsible for bronchiolitis. Usually, there is no indication to perform diagnostic imaging or run laboratory tests in patients with bronchiolitis since the diagnosis is based on the clinical presentation. Chest radiogram can be useful in severe cases. So far, lung ultrasound has not been considered as an alternative in guidelines for imaging diagnosis of bronchiolitis. The aim of the study was to compare lung ultrasound and chest X-ray findings in children with bronchiolitis. In our study we retrospectively compared diagnostic imaging findings in children with confirmed respiratory syncytial virus infection. The study included 23 children aged 2 weeks to 24 months and 3 children older than 24 months. Chest X-ray showed lesions in only 4 cases, whereas ultrasound abnormalities were found in 21 patients. Pathologies revealed by chest X-ray were the same for all 4 cases and consisted of an enlarged hilus and peribronchial cuffing. Sonographic lesions included inflammatory consolidations larger than 10 mm in 11 patients, small consolidations (<10 mm diameter) in 8 patients, interstitial syndromes in 6 patients, and alveolar-interstitial syndromes in 11 patients. A small amount of pleural effusion was detected in 3 patients. Considering safety, short time of examination, high sensitivity in finding pleural effusion, small consolidations and signs of interstitial infiltrations, transthoracic lung ultrasound may be useful in the diagnosis of bronchiolitis.
abstract_id: PUBMED:11147625
Sensitivity and specificity of chest X-ray screening for lung cancer: review article. Background: The incidence and mortality rates of lung carcinoma have been increasing in recent years. Despite this, medical public policy holds that chest X-ray screening is ineffective in the early detection of lung carcinoma.
Methods: The authors reviewed the most important studies published in the literature regarding the role of chest X-ray screening in the early diagnosis of lung carcinoma in a high risk population. None of the four randomized, controlled trials on lung carcinoma screening conducted in male cigarette smokers demonstrated a reduction in the mortality rate. Accordingly, no organization that formulates screening policy advocates any specific early detection strategies for lung carcinoma.
Results: A careful analysis of randomized, controlled trials showed that there was no improvement in the mortality rate in the screened populations, but there is considerable evidence that chest X-ray screening is associated with earlier detection and improved survival.
Conclusions: In the authors' opinion, the considerable improvements in distribution by disease stage, tumor resectability, and patient survival in the screened groups demonstrate the effectiveness of chest X-ray screening in the early detection of lung carcinoma. The authors conclude that radiographic screening is the only valid method of secondary prevention in cigarette smokers.
abstract_id: PUBMED:31220971
Neonatal lung diseases: lung ultrasound or chest x-ray. Chest X-ray (CXR) examination is a well-recognized imaging modality in the diagnosis of neonatal lung diseases. On the other hand, lung ultrasound (LUS) has been an emerging and increasingly studied modality. However, the role of LUS as well as its potential to replace CXRs in the detection of neonatal lung diseases has been debated. We combine the present research progress and our own clinical experience to elaborate on various aspects of the potential routine use of lung ultrasound in neonatal intensive care units. We conclude that both LUS and CXR have a number of advantages and disadvantages. They should serve as complementary diagnostic methods in providing accurate, timely, and reliable information.
abstract_id: PUBMED:2399745
Mass chest x-ray of at-risk probands. In the GDR, annually repeated indiscriminate mass miniature radiographies (MMR) of all persons aged 16 years and over were performed for three decades. The aim was, besides finding cases of pulmonary tuberculosis, the early detection of bronchogenic carcinoma. The decrease in tuberculosis incidence and the shift of nearly all new cases to higher age groups led to a change of this policy. The new regulation, in force since 1986, provides mass X-ray examinations of all persons aged 40 years and over at 2-year intervals, and X-ray examinations of the lung of persons in several risk groups at shorter, mainly annual, intervals. The latter became a task of the chest clinics. The experience of 2 chest clinics with the organization of these examinations and the yield of new cases of pulmonary tuberculosis and bronchogenic carcinoma are reported. It is necessary to improve the organization and to reduce the number of risk groups to those with an acceptable balance of input and result. Otherwise, the staff of the chest clinics is prevented from fulfilling more rewarding tasks in the care of patients. The results of the attempts at early detection of bronchogenic carcinoma and improvement of life expectancy by X-ray examination of risk groups and early resection were disappointing. Thus, other ways must be found. A revision of the regulation of 1986 concerning risk groups is proposed, with the aim of rationalizing their surveillance.
abstract_id: PUBMED:35416574
Prognostic significance of peripheral consolidations at chest x-ray in severe COVID-19 pneumonia. To evaluate the possible prognostic significance of the development of peripheral consolidations at chest x-ray in COVID-19 pneumonia, we retrospectively studied 92 patients with severe respiratory failure (PaO2/FiO2 ratio < 200 mmHg) who underwent at least two chest x-ray examinations (baseline and within 10 days of admission). Patients were divided into two groups based on the evolution of the chest x-ray: toward the appearance of peripheral consolidations, or toward a greater extension of the lung abnormalities but without peripheral consolidations. At follow-up, patients in both groups showed a significant worsening of the PaO2/FiO2 ratio, but significantly lower mortality and intubation rates were observed in patients with peripheral consolidations at chest x-ray. The progression of chest x-ray toward peripheral consolidations is an independent prognostic factor associated with lower intubation rate and mortality.
abstract_id: PUBMED:33457446
COVID-19 detection and heatmap generation in chest x-ray images. Purpose: The outbreak of COVID-19, or coronavirus, was first reported in 2019. It has spread widely and rapidly around the world. The detection of COVID-19 cases is one of the important factors in stopping the epidemic, because infected individuals must be quarantined. One reliable way to detect COVID-19 cases is using chest x-ray images, where signals of the infection are located in lung areas. We propose a solution to automatically classify COVID-19 cases in chest x-ray images. Approach: The ResNet-101 architecture is adopted as the main network, with more than 44 million parameters. The whole network is trained using large 1500×1500 x-ray images. A heatmap over the region of interest of the segmented lungs is constructed to visualize and emphasize signals of COVID-19 in each input x-ray image. Lungs are segmented using a pretrained U-Net. A confidence score of being COVID-19 is also calculated for each classification result. Results: The proposed solution is evaluated on COVID-19 and normal cases. It is also tested on unseen classes to validate the generalization of the constructed model. These include other normal cases, where chest x-ray images are normal without any disease but with some minor findings, and other abnormal cases, where chest x-ray images show other diseases with findings similar to COVID-19. The proposed method achieves a sensitivity, specificity, and accuracy of 97%, 98%, and 98%, respectively. Conclusions: It can be concluded that the proposed solution can detect COVID-19 in a chest x-ray image. The heatmap and confidence score of the detection are also provided, so that users or human experts can use them for a final diagnosis in practical use.
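The kind of pipeline described in this abstract (a ResNet-101 classifier that outputs a softmax confidence score plus a heatmap restricted to the segmented lungs) can be sketched briefly in PyTorch. This is only an illustrative outline under stated assumptions: the two-class head, the lung mask (assumed to be produced by a separately trained U-Net) and the class-activation-style heatmap are stand-ins for the authors' implementation, not their actual code or weights.

import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet101()                             # backbone with roughly 44 million parameters
model.fc = torch.nn.Linear(model.fc.in_features, 2)    # two classes: normal vs. COVID-19
model.eval()

def classify_with_heatmap(image, lung_mask):
    # image: (1, 3, H, W) float tensor; lung_mask: (H, W) tensor in {0, 1} from a U-Net
    feats = {}
    handle = model.layer4.register_forward_hook(lambda m, i, o: feats.update({"map": o}))
    with torch.no_grad():
        logits = model(image)
    handle.remove()

    confidence = F.softmax(logits, dim=1)[0, 1].item()     # confidence score of being COVID-19

    # Class-activation-style heatmap: weight the last feature maps by the COVID-19 class weights
    weights = model.fc.weight[1].view(-1, 1, 1)            # (2048, 1, 1)
    cam = F.relu((weights * feats["map"][0]).sum(dim=0))   # (h, w)
    cam = F.interpolate(cam[None, None], size=image.shape[-2:], mode="bilinear")[0, 0]
    cam = cam / (cam.max() + 1e-8)
    return confidence, cam * lung_mask                     # keep the signal inside the segmented lungs only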
Answer: The provided abstracts do not directly address the question of whether daily chest X-rays are necessary after lung resection. The abstracts focus on various aspects of chest X-ray (CXR) use, including its effectiveness in diagnosing lung conditions, its role in lung cancer screening, and its comparison with other diagnostic tools such as lung ultrasound (US) and computed tomography (CT). However, none of the abstracts specifically discuss the postoperative management of patients who have undergone lung resection and the necessity of daily CXRs in that context.
To answer the question, clinical guidelines and studies specifically investigating postoperative care after lung resection would need to be consulted. These would likely consider factors such as the risk of complications, the patient's clinical status, and the potential benefits and harms of frequent imaging. In general, the decision to perform daily CXRs after lung resection would be based on individual patient needs, the presence of symptoms or complications, and the treating physician's clinical judgment.
Instruction: Is accurate preoperative assessment of pancreatic cystic lesions possible?
Abstracts:
abstract_id: PUBMED:24479516
Is accurate preoperative assessment of pancreatic cystic lesions possible? Introduction: Cystic lesions of the pancreas (CLP) differ in origin and behaviour. Mucinous lesions, which carry a risk of invasive cancer, represent an important subgroup. The key points in the differential diagnosis of CLP are to distinguish malignant from benign lesions and to establish a correct indication for surgery in order to minimize the impact of serious complications after resection. The differing and unsatisfactory predictive values of the individual examinations make proper diagnosis challenging. We focused on the overall diagnostic accuracy of preoperative imaging and analytic studies. We studied the accuracy of distinguishing non-neoplastic vs. neoplastic and benign vs. malignant lesions.
Material And Methods: We retrospectively analyzed all patients (N=72) with CLP (median age 58 years, range 22-79) recommended for surgery. CT, EUS, ERCP and MRCP findings, cytology and aspirate analysis were used to establish the preoperative diagnosis. Finally, preoperative diagnoses were compared with postoperative pathological findings to establish the overall accuracy of preoperative assessment.
Results: During 5 years, 72 patients underwent surgery for CLP. We performed 66 (92%) resections and 6 (8%) palliative procedures, with 32% morbidity and 7% in-hospital mortality. All patients were examined by CT and EUS. FNA was performed in 44 (61%) patients. Cytology was evaluable in 39 (88%) cases. ERCP was done in 40 (55%) patients. Pathology revealed non-neoplastic CLP in 25 (35%) and neoplastic lesions in 47 (65%) specimens. Mucinous lesions accounted for 25%. Malignant or potentially malignant CLP were found in 37 (51%) patients. The sensitivity, specificity and diagnostic accuracy of the preoperative diagnosis were 100%, 46% and 85% for distinguishing inflammatory from neoplastic lesions, and 61%, 61% and 44% for distinguishing benign from malignant lesions, respectively.
Conclusion: Correct and accurate preoperative assessment of CLP remains challenging. Despite the wide range of diagnostic modalities, definitive preoperative identification of malignant or high-risk CLP is inaccurate. Because of this, a significant proportion of patients undergo pancreatic resection for benign or inflammatory lesions that are not potentially life-threatening. The possibility of serious complications after pancreatic surgery is the main reason why precise selection of patients with cystic lesions recommended for surgery is required.
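For readers less familiar with these figures, the short sketch below shows how sensitivity, specificity and diagnostic accuracy follow from a 2x2 confusion matrix; the counts used are hypothetical and are not taken from this study.

def diagnostic_metrics(tp, fp, tn, fn):
    # tp / fn: malignant lesions correctly identified / missed preoperatively
    # tn / fp: benign lesions correctly identified / wrongly called malignant
    sensitivity = tp / (tp + fn)                    # true-positive rate
    specificity = tn / (tn + fp)                    # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)      # overall proportion of correct calls
    return sensitivity, specificity, accuracy

# Hypothetical counts for illustration only:
print(diagnostic_metrics(tp=28, fp=13, tn=19, fn=12))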
abstract_id: PUBMED:37245934
Pancreatic Cystic Lesions: Next Generation of Radiologic Assessment. Pancreatic cystic lesions are frequently identified on cross-sectional imaging. As many of these are presumed branch-duct intraductal papillary mucinous neoplasms, these lesions generate much anxiety for patients and clinicians, often necessitating long-term follow-up imaging and even unnecessary surgical resections. However, the overall incidence of pancreatic cancer is low for patients with incidental pancreatic cystic lesions. Radiomics and deep learning are advanced imaging-analysis tools that have attracted much attention in addressing this unmet need; however, current publications on this topic show limited success, and large-scale research is needed.
abstract_id: PUBMED:29899320
Pancreatic Cystic Lesions: Pathogenesis and Malignant Potential. Pancreatic cancer remains one of the most lethal cancers despite extensive research. Further understanding of precursor lesions may enhance the ability to treat and prevent pancreatic cancer. Pancreatic cystic lesions (PCLs) with malignant potential include: mucinous PCLs (intraductal papillary mucinous neoplasms and mucinous cystic neoplasm), solid pseudopapillary tumors and cystic neuroendocrine tumors. This review summarizes the latest literature describing what is known about the pathogenesis and malignant potential of these PCLs, including unique epidemiological, radiological, histological, genetic and molecular characteristics.
abstract_id: PUBMED:21160696
Imaging of benign and malignant cystic pancreatic lesions and a strategy for follow up. Cystic lesions in a variety of organs are being increasingly recognized as an incidental finding on cross-sectional imaging. These lesions can be benign, premalignant or malignant. When these cystic lesions are small, it can be difficult to characterize them radiologically. However, an appropriate clinical history and knowledge of the typical imaging features of cystic pancreatic lesions can enable accurate diagnosis and thus guide appropriate treatment. In this review, we provide an overview of the most common types of cystic lesions and their appearance on computed tomography, magnetic resonance imaging and ultrasound. We will also discuss follow-up and management strategies for these cystic lesions.
abstract_id: PUBMED:21731238
Pancreatic cystic lesion in an infant. Pancreatic cystic lesions are rare clinical entities. To the best of our knowledge, only 38 cases have been reported in the English literature in children under the age of 2 years. We present a 2-month-old infant with a cystic lesion in the head of pancreas. We reviewed the possible causes and present our dilemmas in the management of these patients.
abstract_id: PUBMED:27048403
Cystic lesions of the pancreas-is radical surgery really warranted? Purpose: The purpose of this study was to retrospectively evaluate diagnostic accuracy of cystic lesions of the pancreas in order to determine if less aggressive surgical treatment might be safe and therefore warranted.
Methods: A retrospective cohort study was conducted in 232 patients with either observed or resected cystic lesions of the pancreas referred for evaluation and treatment to the University Medical Center Freiburg, Germany, between 2001 and 2011.
Results: Most patients had MRI or CT for preoperative imaging (90.6%). Preoperatively, benign pseudocysts (BPC) were diagnosed in 84 (36.2%) patients and intraductal papillary mucinous neoplasm (IPMN) in 59 (25.2%) patients, whereas serous cyst adenoma, mucinous cystic neoplasm (MCN), solid pseudopapillary tumors (SPPTs), and neuroendocrine tumors (NETs) were less common. In 43% of patients, the preoperative diagnosis concurred with the postoperative diagnosis. The preoperative diagnosis was accurate in BPC, less so in IPMN, and inaccurate in MCN, NET, and SPPT. However, prediction of tumor biology was accurate; only 11% of the lesions regarded as benign turned out to be malignant after resection, and no patient without resection developed malignancy at a median follow-up of 8 months. Subsequently, 89% of diagnosed benign tumors had indeed benign pathology.
Conclusions: The prediction of biology is often correct, whereas the specific diagnosis is often wrong. A considerable number of benign lesions are treated more aggressively than warranted when malignancy is suspected prior to surgery. Parenchyma-sparing techniques might be an option, but prospective multicenter studies need to follow. Experienced pancreatic radiologists can improve the accuracy of preoperative prediction of biology.
abstract_id: PUBMED:26602569
Non-neoplastic pancreatic lesions that may mimic malignancy. The widespread use of abdominal ultrasound (US), computed tomography (CT), and magnetic resonance imaging (MRI) has resulted in increased identification of asymptomatic pancreatic lesions. Preoperative diagnosis of pancreatic lesions can be difficult. Solid and cystic lesions and anatomic variants of normal can all mimic tumor clinically and radiologically. Newer imaging modalities have increased the likelihood of an accurate diagnosis of non-neoplastic pancreatic disease; however, despite the many advances, it remains a challenge to differentiate rarer non-neoplastic entities and inflammatory masses from adenocarcinoma preoperatively. Adding to the challenge is the fact that a variety of inflammatory, solid and cystic non-neoplastic lesions have significant clinical and radiological overlap with malignancies. About 5-10% of pancreatectomies performed with a primary clinical diagnosis of pancreatic carcinoma later prove to be essentially non-neoplastic lesions. It is vital to include these non-neoplastic entities in the differential diagnosis while working up abnormal clinical and radiological pancreatic findings, because it may drastically alter therapeutic options for the patients. The significance of recognizing these lesions preoperatively is to help guide the clinical decision-making process and avoid an unnecessary pancreatectomy. Examples of such entities include chronic pancreatitis, sarcoidosis, intrapancreatic accessory spleen (IPAS), lymphoid hyperplasia, lipomatous pseudohypertrophy (LPH), lymphangioma, lymphoepithelial cyst (LEC) and endometriosis.
abstract_id: PUBMED:26288611
Management of Incidental Pancreatic Cystic Lesions. Background: Pancreatic cystic lesions (PCL) are common. They are increasingly detected as an incidental finding of transabdominal ultrasound or cross-sectional imaging. In contrast to other parenchymal organs, dysontogenetic pancreatic cysts are extremely rare. In symptomatic patients the most frequent PCL are acute and chronic pseudocysts. The majority of incidental cystic lesions, however, are neoplasias which have different risks of malignancy.
Methods: PubMed was searched for studies, reviews, meta-analyses, and guidelines using the following key words: ('pancreatic cystic lesions' OR 'cystic pancreatic lesions' OR 'intraductal papillary mucinous neoplasia' OR 'mucinous cystic neoplasia' OR 'pancreatic cyst' OR 'pancreatic pseudocyst') AND (management OR treatment OR outcome OR prognosis OR diagnosis OR imaging OR 'endoscopic ultrasound' EUS-FNA OR EUS OR 'endoscopic ultrasonography' OR CT OR MRI). Retrieved papers were reviewed with regard to the diagnostic and therapeutic management of incidental PCL.
Results: In addition to clinical criteria, transabdominal ultrasonography including contrast-enhanced ultrasonography, cross-sectional radiological imaging, and endoscopic ultrasound (EUS) are used for diagnostic characterization and risk assessment. EUS plays an outstanding role in differential diagnosis and prognostic characterization of incidental PCL. In a single examination it is possible to perform high-resolution morphological description, perfusion imaging, as well as fine-needle aspiration of cyst content, cyst wall, and solid components. An international consensus guideline has defined worrisome and high-risk criteria for the risk assessment of mucinous pancreatic cysts, which are mainly based on the results of EUS and cross-sectional imaging. Nevertheless, despite diagnostic progress and guideline recommendations, differential diagnosis and management decisions remain difficult. This review will discuss problems in and approaches to the diagnosis of incidental PCL.
Conclusion: An evidence-based algorithm for the diagnosis of incidental PCL is proposed.
abstract_id: PUBMED:25789068
Magnetic resonance cholangiopancreatography: Comparison of two- and three-dimensional sequences for the assessment of pancreatic cystic lesions. The present study aimed to compare two-dimensional (2D) and three-dimensional (3D) magnetic resonance cholangiopancreatography (MRCP) for the assessment of pancreatic cystic lesions. Between February 2009 and December 2011, 35 patients diagnosed with pancreatic cystic lesions, confirmed by surgery and pathology, underwent 2D or 3D MRCP for pre-operative evaluation. In the present study, the quality of these 2D and 3D MRCP images, the visualization of the features of the cystic lesions, the visualization of the pancreatic main duct, and the prediction of ductal communication with the cystic lesions were evaluated and compared using statistical software. The 3D MRCP images were determined to be of higher quality than the 2D MRCP images. The features of the cystic lesions were visualized better on 3D MRCP than on 2D MRCP. 3D and 2D MRCP showed the same capability for visualization of the segments of the pancreatic main duct. There was no significant difference between the areas under the receiver operating characteristic curves of 2D and 3D MRCP for the prediction of communication between the cystic lesions and the pancreatic main duct. It was concluded that, compared with 2D MRCP, 3D MRCP provides an improved assessment of pancreatic cystic lesions, but does not exhibit an improved capability for visualization of the pancreatic main duct or for prediction of communication between cystic lesions and the pancreatic main duct.
abstract_id: PUBMED:37886207
The Hong Kong consensus recommendations on the diagnosis and management of pancreatic cystic lesions. Background: The finding of pancreatic cystic lesions (PCL) on incidental imaging is becoming increasingly common. International studies report a prevalence of 2.2-44.7% depending on the population, imaging modality and indication for imaging, and the prevalence increases with age. Patients with PCL are at risk of developing pancreatic cancer, a disease with a poor prognosis. This publication summarizes recommendations for the diagnosis and management of PCL and post-operative pancreatic exocrine insufficiency (PEI) from a group of local specialists.
Methods: Clinical evidence was consolidated from narrative reviews and consensus statements formulated during two online meetings in March 2022. The expert panel included gastroenterologists, hepatobiliary surgeons, oncologists, radiologists, and endocrinologists.
Results: Patients with PCL require careful investigation and follow-up due to the risk of malignant transformation of these lesions. They should undergo clinical investigation and pancreas-specific imaging to classify lesions and understand the risk profile of the patient. Where indicated, patients should undergo pancreatectomy to excise PCL. Following pancreatectomy, patients are at risk of PEI, leading to gastrointestinal dysfunction and malnutrition. Therefore, such patients should be monitored for symptoms of PEI, and promptly treated with pancreatic enzyme replacement therapy (PERT). Patients with poor response to PERT may require increases in dose, addition of a proton pump inhibitor, and/or further investigation, including tests for pancreatic function. Patients are also at risk of new-onset diabetes mellitus after pancreatectomy; they should be screened and treated with insulin if indicated.
Conclusions: These statements are an accurate summary of our approach to the diagnosis and management of patients with PCL and will be of assistance to clinicians treating these patients in a similar clinical landscape.
Answer: Accurate preoperative assessment of pancreatic cystic lesions (PCLs) remains challenging. Despite the availability of various diagnostic modalities such as CT, EUS, ERCP, MRCP, cytology, and aspirate analysis, the definitive preoperative identification of malignant or high-risk PCLs is inaccurate (PUBMED:24479516). The sensitivity, specificity, and diagnostic accuracy for distinguishing between inflammatory and neoplastic, and benign and malignant lesions are not optimal, leading to a significant number of patients undergoing pancreatic resection for benign or inflammatory lesions that are not potentially life-threatening (PUBMED:24479516).
Advanced tools of imaging analysis like radiomics and deep learning have attracted attention in addressing this unmet need, but current publications show limited success, and large-scale research is needed (PUBMED:37245934). The prediction of tumor biology is often correct, whereas the specific diagnosis is often wrong, indicating that a considerable number of benign lesions are treated more aggressively than warranted if malignancy is suspected prior to surgery (PUBMED:27048403).
Moreover, despite advances in imaging modalities, it remains a challenge to differentiate rarer non-neoplastic entities and inflammatory masses from adenocarcinoma preoperatively (PUBMED:26602569). The widespread use of abdominal imaging has resulted in increased identification of asymptomatic pancreatic lesions, which can mimic tumor clinically and radiologically (PUBMED:26602569).
In summary, while the preoperative assessment of PCLs has improved, it is not yet completely accurate, and there is a need for further research and development of more precise diagnostic tools to improve the preoperative identification of malignant or high-risk PCLs and to avoid unnecessary surgeries for benign conditions (PUBMED:24479516; PUBMED:37245934; PUBMED:27048403; PUBMED:26602569).
Instruction: Post-imatinib surgery in advanced/metastatic GIST: is it worthwhile in all patients?
Abstracts:
abstract_id: PUBMED:27318456
Indications for surgery in advanced/metastatic GIST. Gastrointestinal stromal tumours (GISTs) are a relatively rare entity and often present as a locally advanced tumour or with metastatic disease. Complete surgical resection is the only means of cure in localised disease; however, imatinib therapy has greatly advanced the management of GIST and is established as both an adjunct to surgery in high-risk cases and as principal therapy in metastatic disease. Surgery in advanced GIST has undergone a renaissance in recent years with the potential for a combined treatment approach with either neoadjuvant imatinib in locally advanced primary disease or as an adjunct to imatinib in those with metastases or recurrent disease. Neoadjuvant imatinib can render a locally advanced primary GIST resectable, allow less invasive procedures or promote preservation of function, especially if the tumour is located in an anatomically difficult position. The role of surgery in metastatic or recurrent disease is more controversial and case selection is critical. The potential benefit is difficult to quantify, although surgery may have a limited favourable impact on progression-free survival and overall survival for those patients whose disease is responding to imatinib or those with limited focal progression. Patients with imatinib-resistant disease should not be offered surgery except as an emergency, where palliative intervention may be justified.
abstract_id: PUBMED:21324142
Neoadjuvant imatinib in patients with locally advanced non metastatic GIST in the prospective BFR14 trial. Background: The role of surgery in the management of patients with advanced gastrointestinal stromal tumors (GIST) in the era of imatinib mesylate (IM) remains debated. We analyzed the outcome of patients with non metastatic locally advanced primary GIST treated with IM within the prospective BFR14 phase III trial.
Methods: The database of the BFR14 trial was searched for patients with no metastasis at time of inclusion. Patients treated for recurrent disease were excluded. Twenty-five of 434 patients met these criteria.
Results: Fifteen of 25 patients (60%) had a partial response to IM. Nine of the 25 patients (36%) underwent surgical resection of their primary tumor after a median of 7.3 months of IM treatment (range 3.4-12.0). Per protocol, patients received continuous IM treatment in the post-resection period, in an adjuvant setting. With a median follow-up of 53.5 months, there was a significant improvement in progression-free survival (PFS) and overall survival (OS) for patients who underwent surgical resection versus those who did not (median not reached vs 23.6 months, p = 0.0318 for PFS and median not reached vs 42.2 months, p = 0.0217 for OS). In the group of patients who underwent resection followed by IM, the 3-year PFS and OS rates were 67% and 89%, respectively.
Conclusions: Following neoadjuvant IM for non-metastatic locally advanced GIST, 9 of 25 patients (36%) were selected for resection of the primary tumor. In the subgroup of operated patients, OS and PFS figures were close to those of localised intermediate- or high-risk GIST (70% at 5 years), while the outcome of the non-operated subgroup was similar to that of metastatic GIST.
abstract_id: PUBMED:29682621
Surgery for metastatic gastrointestinal stromal tumor: to whom and how to? Although imatinib is a standard treatment for metastatic or recurrent gastrointestinal stromal tumors (GISTs), acquired c-kit mutations reportedly cause secondary resistance to imatinib. Sunitinib is a tyrosine kinase inhibitor (TKI) that can be used as second-line therapy in imatinib-resistant or -intolerant GISTs. For sunitinib-resistant or -intolerant GISTs, regorafenib is a standard third-line treatment. Although TKI therapies have revolutionized the treatment of recurrent or metastatic GISTs, they cannot cure GISTs. Therefore, in the era of TKIs, the role of cytoreductive surgery for recurrent or metastatic GISTs has been discussed. Retrospective studies of treatment strategies with front-line surgery prior to imatinib have shown that initial cytoreduction confers no benefit in cases of advanced or recurrent GIST, and administering imatinib is the principal treatment. Most retrospective studies report cytoreductive surgery to be feasible in patients with metastatic GIST whose disease is stable or responsive to imatinib. Cytoreductive surgery may be indicated in limited disease progression refractory to imatinib when complete resection is possible, but case selection is critical. Cytoreductive surgery for metastatic GIST treated with sunitinib seems less feasible because of high rates of incomplete resections and complications. The role of cytoreductive surgery for metastatic GISTs would be difficult to establish in a prospective study; individualized treatments need to be carefully designed based on c-kit and platelet-derived growth factor receptor alpha (PDGFRA) mutations and other factors.
abstract_id: PUBMED:37686582
Evolution of Patterns of Care and Outcomes in the Real-Life Setting for Patients with Metastatic GIST Treated in Three French Expert Centers over Three Decades. Gastrointestinal stromal tumors (GIST) are rare mesenchymal tumors characterized by KIT or PDGFRA mutations. Over three decades, significant changes in drug discovery and loco-regional (LR) procedures have impacted treatment strategies. We assessed the evolution of treatment strategies for metastatic GIST patients treated in the three national coordinating centers of NetSarc, the French network of sarcoma referral centers endorsed by the National Institute of Cancers, from 1990 to 2018. The primary objective was to describe the clinical and biological profiles as well as the treatment modalities of patients with metastatic GIST in a real-life setting, including access to clinical trials and LR procedures in the metastatic setting. Secondary objectives were to assess (1) patients' outcome in terms of time to next treatment (TNT) for each line of systemic treatment, (2) patients' overall survival (OS), (3) evolution of patients' treatment modalities and OS according to treatment access: <2002 (pre-imatinib approval), 2002-2006 (pre-sunitinib approval), 2006-2014 (pre-regorafenib approval), post 2014, and (4) the impact of clinical trials and LR procedures on TNT and OS in the metastatic setting. 1038 patients with a diagnosis of GIST made in one of the three participating centers between 1990 and 2018 were included in the national prospective database. Among them, 492 patients presented with metastasis, either synchronous or metachronous. The median number of therapy lines in the metastatic setting was 3 (range 0-15). More than half of the patients (55%) participated in a clinical trial during the course of their metastatic disease and half (51%) underwent additional LR procedures on metastatic sites. The median OS in the metastatic setting was 83.4 months (95%CI [72.7; 97.9]). The median TNT was 26.7 months (95%CI [23.4; 32.3]) in first-line, 10.2 months (95%CI [8.6; 11.8]) in second line, 6.7 months (95%CI [5.3; 8.5]) in third line, and 5.5 months (95%CI [4.3; 6.7]) in fourth line, respectively. There was no statistical difference in OS in the metastatic setting between the four therapeutic periods (log rank, p = 0.18). In multivariate analysis, age, AFIP Miettinen classification, mutational status, surgery of the primary tumor, participation in a clinical trial in the first line and LR procedure to metastatic sites were associated with longer TNT in the first line, whereas age, mitotic index, mutational status, surgery of the primary tumor and LR procedure to metastatic sites were associated with longer OS. This real-life study advocates for early referral of metastatic GIST patients to expert centers to orchestrate the best access to future innovative clinical trials together with LR strategies and further improve GIST patients' survival.
abstract_id: PUBMED:19628568
Post-imatinib surgery in advanced/metastatic GIST: is it worthwhile in all patients? Background: Surgical indication for metastatic gastrointestinal stromal tumor (GIST) treated with imatinib is not yet established.
Materials And Methods: We analyzed 80 patients who underwent surgery for metastatic GIST after imatinib therapy from July 2002 to October 2007. Patients were divided into those with surgery at best clinical response (group A, n = 49) and those with surgery at focal progression (group B, n = 31). Primary end points were progression-free survival (PFS) and disease-specific survival (DSS).
Results: Two-year postoperative PFS was 64.4% in group A and 9.7% in group B (P < 0.01). In group A, median PFS was not reached; in group B, it was 8 months. Median DSS from the time of imatinib onset was not reached in either group. Five-year DSS was 82.9% in group A and 67.6% in group B (P < 0.01). Multivariate analysis confirmed a significantly shorter PFS and DSS in group B. Surgical morbidity occurred in 13 patients (16.3%).
Conclusions: Surgery for focal progressive lesions could be considered as part of the second-line/third-line armamentarium in selected cases. Surgery of residual disease upon best clinical response seems associated with survival benefit compared with historical controls in similar patient collectives treated with imatinib alone. However, evidence from prospective randomized trials is needed to make definite recommendations.
abstract_id: PUBMED:26820287
The Role of Surgery in Metastatic Gastrointestinal Stromal Tumors. Opinion Statement: Gastrointestinal stromal tumors (GISTs) are the most common sarcomas and mesenchymal neoplasms of the gastrointestinal tract. Macroscopically complete (R0/R1) resection is the standard treatment for localized resectable GIST with adjuvant imatinib therapy recommended for patients with intermediate or high-risk disease. In patients with advanced unresectable or metastatic GIST, imatinib has significantly improved outcomes. However, while most patients achieve partial response (PR) or stable disease (SD) on imatinib (with maximal response typically seen by 6 months on treatment), approximately half will develop secondary resistance by 2 years. Available data suggest that cytoreductive surgery may be considered in patients with metastatic GIST who respond to imatinib, particularly if a R0/R1 resection is achieved. The benefit of surgery in patients with focal tumor progression on imatinib is unclear, but may be considered. Patients with multifocal progression undergoing surgery generally have poor outcomes. Thus, surgery should be considered in patients with metastatic GIST whose disease responds to imatinib with a goal of performing R0/R1 resection. Optimal timing of surgery is unclear but should be considered between 6 months and 2 years after starting imatinib. Although surgery in patients with metastatic GIST treated with sunitinib is feasible, incomplete resections are common, complication rates are high, and survival benefit is unclear. Therefore, a careful multidisciplinary consultation is required to determine optimal treatment options on a case-by-case basis. Finally, patients with metastatic GIST should resume tyrosine kinase inhibitor treatment postoperatively.
abstract_id: PUBMED:38465042
Advanced and Metastatic Gastrointestinal Stromal Tumors Presenting With Surgical Emergencies Managed With Surgical Resection: A Case Series. Advanced and metastatic gastrointestinal stromal tumors (GISTs) presenting with surgical emergencies are rare. Neoadjuvant imatinib, the treatment of choice for non-metastatic advanced disease with a proven role in downstaging the disease, may not be feasible in patients presenting with bleeding or obstruction. We present a case series with retrospective analysis of a prospectively maintained database of patients with advanced and metastatic GISTs presenting with surgical emergencies. Clinical characteristics, imaging and endoscopic findings, surgical procedures, histological findings, and outcomes in these patients were studied. Four patients were included in this case series, with three males and one female (age range: 24-60 years). Two patients presented with melena; one with hemodynamic instability despite multiple blood transfusions underwent urgent exploratory laparotomy for bleeding gastric GIST, while the other underwent surgical exploration after careful evaluation given the recurrent, metastatic disease with a stable metabolic response on six months of imatinib. One patient with metastatic jejunal GIST who presented with an umbilical nodule and intestinal obstruction was given a trial of non-operative management for 72 hours, but due to non-resolution of obstruction, segmental jejunal en bloc resection with the dome of the urinary bladder with reconstruction and metastasectomy was needed. The patient with advanced gastric GIST who presented with gastric outlet obstruction was resuscitated; endoscopic naso-jejunal tube placement was attempted but failed, and exploration was needed. The mean length of hospital stay was 7.5 days. Histopathological examination confirmed GIST in all four patients with microscopically negative resection margins. All patients were started on imatinib, with dose escalation to 800 mg in the patient with recurrent and metastatic disease; however, the patient with bleeding gastric GIST experienced severe adverse effects of imatinib and discontinued the drug shortly thereafter. All four patients are disease-free, with follow-ups of 15 and 48 months for the patients with advanced non-metastatic disease and six and 24 months for the patients with metastatic disease. In the era of tyrosine kinase inhibitor (TKI) therapy for advanced and metastatic disease, upfront surgery is usually reserved for surgical emergencies only. Surgical resection, the cornerstone for the treatment of resectable GIST, may also be clinically relevant in metastatic settings, although it requires a careful and individualized approach.
abstract_id: PUBMED:32118738
Preoperative imatinib treatment in patients with locally advanced and metastatic/recurrent gastrointestinal stromal tumors: A single-center analysis. The advent of imatinib mesylate (IM) has dramatically revolutionized the prognosis of advanced and metastatic/recurrent gastrointestinal stromal tumors (GISTs). The objective of this retrospective study was to investigate the safety and efficacy of surgery following IM treatment in the management of advanced and metastatic/recurrent GISTs. We further explored the long-term clinical outcomes in those who underwent preoperative IM therapy. Eligible patients were those with GISTs who were evaluated before the onset of IM therapy and were periodically followed up in the outpatient clinic. Detailed clinical and pathologic characteristics were obtained from the medical records of our institution. Univariate and multivariate regression analyses were performed to evaluate potential prognostic factors. A total of 51 patients were included in the study; of these, 36 underwent surgery, and the median duration of preoperative IM was 8.2 months (range 3.5-85 months). A significant median tumor shrinkage rate of 29.27% (95% confidence interval 21.00%-34.00%) was observed in patients who responded to IM, and partial response and stable disease were achieved in 24 patients (47.06%) and 23 patients (45.10%), respectively, according to the RECIST guideline (version 1.1). After a median follow-up of 43.70 months (range 14.2-131.1 months), 1- and 3-year overall survival (OS) were estimated to be 96.1% and 94.0%, respectively, and there was a significant improvement in OS for patients who received surgical intervention versus those who did not. Our study indicates that preoperative IM therapy can shrink tumors and facilitate organ-function preservation in these patients. The long-term analysis in this study supports that surgical intervention following IM therapy improves long-term prognosis for patients with primary advanced and recurrent or metastatic GISTs.
abstract_id: PUBMED:25682222
Molecular targeted therapies in advanced or metastatic chordoma patients: facts and hypotheses. Chordomas, derived from undifferentiated notochordal remnants, represent less than 4% of bone primary tumors. Despite surgery followed by radiotherapy, local and metastatic relapses are frequent. In case of locally advanced or metastatic chordomas, medical treatment is frequently discussed. While chemotherapy is ineffective, it would appear that some molecular targeted therapies, in particular imatinib, could slow down the tumor growth in case-reports, retrospective series, and phase I or II trials. Nineteen publications, between January 1990 and September 2014, have been found describing the activity of these targeted therapies. A systematic analysis of these publications shows that the best objective response with targeted therapies was stabilization in 52 to 69% of chordomas. Given the indolent course of advanced chordoma and because of the absence of randomized trial, the level of evidence to treat chordomas with molecular therapy is low (level III), whatever the drug. Furthermore, we could not draw firm conclusion on the activity of imatinib. Other putative targets have also been described. Therefore, further clinical trials are expected, especially with these targets. Nevertheless, it seems essential, in those future studies, to consider the naturally slow course of the disease.
abstract_id: PUBMED:31245289
Apatinib Treatment in Metastatic Gastrointestinal Stromal Tumor. Background: Gastrointestinal stromal tumors (GISTs) are the most common mesenchymal tumors of the gastrointestinal tract. The clinical management of patients with metastatic GISTs is exceptionally challenging due to their poor prognosis. Apatinib is a multiple tyrosine kinase inhibitor. Here, we present a unique case of a patient with metastatic GIST who derived clinical benefit from apatinib following the failure of imatinib and sunitinib. Case presentation: A 57-year-old man was admitted to our hospital diagnosed with metastatic and recurrent GISTs following surgical resection. Fifty-four months after the first-line imatinib treatment, he developed progressive disease and was then treated with cytoreductive surgery combined with imatinib. Disease progression occurred after 7 months. He then received second-line sunitinib and achieved a progression-free survival of 11 months. Apatinib mesylate was then administered. Follow-up imaging revealed stable disease. Progression-free survival following apatinib therapy was at least 8 months. The only toxicities were hypertension and proteinuria, which were both controllable and well-tolerated. Conclusions: Treatment with apatinib provides an additional option for the treatment of patients with GISTs refractory to imatinib and sunitinib.
Answer: Post-imatinib surgery in advanced/metastatic GIST is not considered worthwhile for all patients. The decision to perform surgery in the context of metastatic GIST is complex and must be individualized based on various factors, including the patient's response to imatinib and the extent of the disease.
Surgery may be beneficial for patients whose disease is responding to imatinib or those with limited focal progression, as it may have a limited favorable impact on progression-free survival (PFS) and overall survival (OS) (PUBMED:27318456). In the BFR14 trial, patients with non-metastatic locally advanced primary GIST who underwent surgical resection after neoadjuvant imatinib treatment showed significant improvement in PFS and OS compared to those who did not have surgery (PUBMED:21324142).
However, surgery for metastatic GIST treated with sunitinib seems less feasible due to high rates of incomplete resections and complications (PUBMED:29682621). Additionally, retrospective studies suggest that surgery at the time of best clinical response to imatinib may be associated with a survival benefit, but surgery for focal progressive lesions should be considered as part of the second-line/third-line treatment in selected cases (PUBMED:19628568).
The role of surgery in patients with metastatic GIST who respond to imatinib is supported, particularly if a macroscopically complete (R0/R1) resection can be achieved. However, the benefit of surgery in patients with focal tumor progression on imatinib is unclear, and patients with multifocal progression generally have poor outcomes (PUBMED:26820287).
In cases of surgical emergencies, such as bleeding or obstruction, surgery may be the only option, even in the context of advanced or metastatic disease. Surgical resection can be clinically relevant in these settings, but it requires careful consideration (PUBMED:38465042).
Overall, while surgery can be part of the treatment strategy for advanced/metastatic GIST in certain situations, it is not universally recommended for all patients. The decision should be made on a case-by-case basis, considering the patient's response to targeted therapies, the feasibility of achieving complete resection, and the potential risks and benefits of the procedure. |
Instruction: Imaging features and metastatic pattern of non-IVC retroperitoneal leiomyosarcomas: are they different from IVC leiomyosarcomas?
Abstracts:
abstract_id: PUBMED:24834891
Imaging features and metastatic pattern of non-IVC retroperitoneal leiomyosarcomas: are they different from IVC leiomyosarcomas? Purpose: The purposes of this study were to describe the imaging features and metastatic pattern of non-inferior vena cava (IVC) retroperitoneal leiomyosarcomas (non-IVC LMS) and to compare them with those of IVC leiomyosarcomas (IVC LMS) to assess any differences between the 2 groups.
Materials And Methods: In this institutional review board-approved, Health Insurance Portability and Accountability Act-compliant retrospective study, all 56 patients with pathologically confirmed primary retroperitoneal leiomyosarcoma (34 non-IVC LMS and 22 IVC LMS) seen at our tertiary cancer center during a 10-year period were included. All available imaging of primary tumor (18 non-IVC LMS and 19 IVC LMS) and follow-up imaging studies (on all 56 patients) were reviewed in consensus by 2 fellowship-trained oncoradiologists. Imaging features and metastatic spread of non-IVC LMS were described and compared with those of IVC LMS. Continuous variables were compared using the Student t test, binary variables with the Fisher exact test, and survival using the log-rank test.
Results: Non-inferior vena cava retroperitoneal leiomyosarcomas had a mean size of 11.3 cm (range, 3.7-27 cm) and most commonly occurred in the perirenal space (16/18). Primary tumors were hyperattenuating to muscle (11/18) and showed heterogeneous enhancement (17/18). Lungs (22/34), peritoneum (18/34), and liver (18/34) were the most common metastatic sites. There was no significant difference between the imaging features and metastatic pattern of non-IVC and IVC LMS. Although non-IVC LMS presented at a more advanced stage (P < 0.002), there was a statistically non-significant trend toward better median survival of non-IVC LMS (P = 0.07).
Conclusions: Non-inferior vena cava retroperitoneal leiomyosarcomas are large heterogeneous tumors arising in the perirenal space and frequently metastasize to lungs, peritoneum, and liver. From a radiologist's perspective, non-IVC LMS behave similar to IVC-LMS.
abstract_id: PUBMED:35581461
Inferior Vena Cava (IVC) Resection Without Reconstruction for a Large IVC Leiomyosarcoma. Background: Retroperitoneal tumours arising from the inferior vena cava (IVC) are rare tumours often requiring large vessel resection for complete surgical excision. Limited exposure to such tumours often discourages surgeons from offering surgical resection to these patients, depriving them of the only potentially curative modality. We present here the surgical technique for resection of a large IVC sarcoma without IVC reconstruction.
Methods: A 53-year-old lady presented with a large retroperitoneal sarcoma encasing the infra-hepatic IVC with tumour thrombus extension into the hepatocaval confluence as well as the left renal vein. Surgical resection was planned as the disease remained stable after 2 cycles of neoadjuvant chemotherapy with adriamycin and ifosfamide.
Results: Complete surgical excision of the tumour was achieved by performing a resection of the entire length of the infra-hepatic IVC and the right kidney, without IVC reconstruction. The left renal vein was divided after careful preservation of a draining collateral. Tumour thrombus was extracted from the hepatocaval confluence, and proximal IVC stump closure was achieved with preservation of the right hepatic vein insertion. Total blood loss during the procedure was 2300 mL, and the patient recovered without compromise of renal function or development of lower limb oedema.
Conclusion: IVC resection without reconstruction can be safely performed for large retroperitoneal sarcomas involving major vascular structures. Familiarity with the retroperitoneal, retro-hepatic and supra-hepatic anatomy is paramount to achieving good surgical outcomes.
abstract_id: PUBMED:34866851
Retrohepatic Caval Leiomyosarcoma Antesitum Resection: A Case Report and a Review of Literature. Leiomyosarcoma of the inferior vena cava (IVC-LMS) is a rare mesenchymal tumor arising from the tunica media. In this report, we describe antesitum resection of a retrohepatic IVC-LMS without inferior vena cava (IVC) reconstruction. To our knowledge, this is the first report of antesitum resection for IVC-LMS from India. A 51-year-old lady presented with features of Budd-Chiari syndrome and abdominal pain. Evaluation showed the tumor arising from the IVC, extending from above the renal veins up to the subdiaphragmatic IVC. Blood investigations were normal, with a positron emission tomography scan showing no metastatic disease. Antesitum, in situ resection of the IVC-LMS was performed. The patient recovered well postoperatively without renal or hepatic issues. Radical resection of IVC-LMS offers a chance for cure even in locally advanced disease. Experience in complex liver resection and liver transplantation has made the resection of such tumors possible and safe.
abstract_id: PUBMED:34415408
Current update on IVC leiomyosarcoma. Primary leiomyosarcoma of the inferior vena cava (IVC) is a rare soft tissue sarcoma associated with poor prognosis. Patients are often asymptomatic or present with nonspecific abdominal symptoms, which delays initial diagnosis and contributes to poor oncologic outcome. Key imaging modalities include ultrasonography (US), computed tomography (CT), and magnetic resonance imaging (MRI). Characteristic imaging features include imperceptible caval lumen, dilation of the IVC, heterogeneous enhancement of the tumor, and development of extensive collateral circulation. Surgical resection is the mainstay of treatment, while chemotherapy and/or radiation may serve as therapy adjuncts. This article reviews the pathology, clinical findings, imaging features and management of IVC leiomyosarcoma.
abstract_id: PUBMED:35943295
Resection of retroperitoneal tumors with inferior vena cava involvement without caval reconstruction. Background And Objectives: Retroperitoneal tumors with involvement of the inferior vena cava (IVC) often require resection of the IVC to achieve complete tumor removal. This study evaluates the safety and efficacy of IVC ligation without caval reconstruction.
Methods: A retrospective chart review of patients who underwent IVC ligation (IVC-Ligation) and IVC resection with reconstruction (IVC-Reconstruction) at our institution between May 2004 and April 2021 was performed. Outcomes from the two surgical techniques were compared via univariate analysis using the Kruskal-Wallis test for continuous variables and Fisher's exact test for categorical variables.
Results: Forty-nine IVC-Ligation and six IVC-Reconstruction surgeries were identified. There were no differences in baseline demographics, tumor characteristics, complication rates, postoperative morbidity, or overall 5-year survival between groups. IVC-Reconstruction patients were more likely to require intensive care unit admission (83% vs. 33%; p = 0.0257) and the IVC-Ligation cohort had a tendency to present with nondebilitating postoperative lymphedema (35% vs. 0%; p = 0.1615), which resolved for most patients.
Conclusions: IVC-Ligation is a viable surgical option for select patients presenting with retroperitoneal tumors with IVC involvement and provides acceptable short- and medium-term outcomes.
abstract_id: PUBMED:36115116
Leiomyosarcoma of the inferior vena cava presenting with bilateral lower extremity edema with comorbid sarcoidosis: A case report. We present a case of a 70-year-old female with leiomyosarcoma (LMS) of the inferior vena cava (IVC). Although this is an extremely rare entity, in contradistinction, it is also the most common primary malignancy of the IVC [5]. The patient has a history of sarcoidosis, hypertension, diabetes mellitus type two, and chronic obstructive pulmonary disease (COPD). She presented with a complaint of bilateral lower extremity edema and was admitted; a computerized tomography (CT) scan of the abdomen and pelvis showed a large mass filling the IVC, a finding confirmed by magnetic resonance imaging. Radical resection of the retroperitoneal tumor was carried out, including portions of the inferior vena cava, with en bloc radical right nephrectomy and right adrenalectomy. The pathologic diagnosis of inferior vena caval leiomyosarcoma (IVC LMS) was made with positive immunostains for desmin, vimentin and smooth muscle actin. The rarity of this entity and its clinical presentation, along with concomitant sarcoidosis, make this an interesting case.
abstract_id: PUBMED:37774418
Inferior vena cava tumor thrombus: clinical outcomes at a canadian tertiary center. Objective: This study reports the surgical management and outcomes of patients with malignancies affecting the IVC.
Methods: This was a retrospective study that considered patients undergoing surgery for IVC thrombectomy in Calgary, Canada, from 1 January 2010 to 31 December 2021. Parameters of interest included primary malignancy, the extent of IVC involvement, surgical strategy, and medium-term outcomes.
Results: Six patients underwent surgical intervention for malignancies that affected the IVC. One patient had a retroperitoneal leiomyosarcoma, 1 had hepatocellular carcinoma with thrombus extending into the IVC and right atrium, 1 had adrenocortical carcinoma with IVC thrombus extending into the right atrium, and 3 had clear cell renal cell carcinoma with thrombus extending into the IVC. Surgical strategy for the IVC thrombectomy varied; 5 patients required the institution of cardiopulmonary bypass and underwent deep hypothermic circulatory arrest. No patient died perioperatively. One patient died 15 months postoperatively from aggressive malignancy.
Conclusion: Different types of malignancy can affect the IVC and surgical intervention is usually indicated for these patients. Herein, we have reported the outcomes of IVC thrombectomy at our center.
abstract_id: PUBMED:37139713
A rare case of retro-hepatic inferior vena caval leiomyosarcoma - Computed tomography, surgical and intraoperative echocardiography imaging. Leiomyosarcoma of the retro-hepatic portion of the inferior vena cava (IVC) is a rare vascular tumor with a poor prognosis if adequate surgical resection is not achieved. Surgical repair includes dissection of the tumor and reconstruction of the IVC with a tube graft. Establishing normal flow and gradient in the IVC and hepatic veins is imperative for a successful repair. We report a case of retro-hepatic IVC leiomyosarcoma in which the preoperative computed tomography described the anatomy and extension of the tumor, whereas intraoperative transesophageal echocardiography helped in assessing the adequacy of the surgical repair.
abstract_id: PUBMED:26943403
Inferior vena cava leiomyosarcoma: preoperative diagnosis and surgical management. Inferior vena cava (IVC) leiomyosarcoma is a rare malignant neoplasm more commonly seen in women in the fifth to sixth decade of life. Complete resection of the tumor with negative margins is the only therapeutic option that has demonstrated a survival benefit. This report presents a case of a 67-year-old woman affected by a lower segment IVC leiomyosarcoma incidentally detected during a chronic abdominal pain study. The patient was treated with tumorectomy, resection and ligation of the infrarenal IVC, without signs of recurrence at 12-month follow-up.
abstract_id: PUBMED:38259705
Transvenous biopsy of inferior vena cava leiomyosarcoma: two case reports. Leiomyosarcomas of the inferior vena cava (IVC) are uncommon malignancies. There is limited research detailing optimal diagnostic and clinical management. Here, we present 2 unique cases of IVC leiomyosarcoma including one in which the mass was partially ruptured through the vessel at initial presentation. We detail radiologic findings, 2 different transvenous approaches for biopsy of these masses, and subsequent oncological management.
Answer: The imaging features and metastatic pattern of non-inferior vena cava (IVC) retroperitoneal leiomyosarcomas (non-IVC LMS) do not significantly differ from those of IVC leiomyosarcomas (IVC LMS). A study that included 56 patients with pathologically confirmed primary retroperitoneal leiomyosarcoma (34 non-IVC LMS and 22 IVC LMS) found that non-IVC LMS had a mean size of 11.3 cm and most commonly occurred in the perirenal space. The primary tumors were hyperattenuating to muscle and showed heterogeneous enhancement. The most common metastatic sites for non-IVC LMS were the lungs, peritoneum, and liver. There was no significant difference between the imaging features and metastatic pattern of non-IVC and IVC LMS. Although non-IVC LMS presented at a more advanced stage, there was a statistically non-significant trend toward better median survival for non-IVC LMS (PUBMED:24834891).
Other studies have focused on the surgical management of IVC leiomyosarcomas, including resection without IVC reconstruction (PUBMED:35581461; PUBMED:34866851; PUBMED:35943295; PUBMED:36115116; PUBMED:37774418; PUBMED:37139713; PUBMED:26943403; PUBMED:38259705). These studies highlight the complexity of surgical interventions for IVC LMS and the importance of familiarity with retroperitoneal, retro-hepatic, and supra-hepatic anatomy to achieve good surgical outcomes. They also discuss the viability of IVC ligation without caval reconstruction as a surgical option for select patients with retroperitoneal tumors involving the IVC, providing acceptable short- and medium-term outcomes. However, these studies do not directly compare the imaging features and metastatic patterns of non-IVC LMS to IVC LMS. |
Instruction: Is there a link between the food-cobalamin malabsorption and the pernicious anemia?
Abstracts:
abstract_id: PUBMED:8534965
Malabsorption of food cobalamin. Food-cobalamin malabsorption is marked by the inability to release cobalamin from food, which therefore cannot be taken up by intrinsic factor for absorption. The defect is not detectable by classical clinical tests like the Schilling test which are all based on the absorption of free, crystalline cobalamin. Tests of food-cobalamin absorption have been devised, the most popular ones using cobalamin bound to eggs or to chicken serum. The disparity between the abnormal results of these tests and the normal results with the Schilling test defines the disorder of food-cobalamin malabsorption. Release of cobalamin from food requires acid and pepsin, and most food-cobalamin malabsorptive states can be traced to gastric defects. However, other mechanisms may also play a role. The malabsorption is limited to food cobalamin and any free cobalamin, presumably including recycled biliary cobalamin, will be absorbed normally, which may explain its frequently insidious nature. The effect on cobalamin status covers a broad spectrum. At one extreme, some individuals, perhaps in the earliest stages, have normal cobalamin status, while at the other extreme may be found deficiency every bit as severe as in the most florid case of pernicious anaemia. Most often, however, the deficiency is mild, frequently marked by only a low serum cobalamin level, mild evidence of metabolic insufficiency and, sometimes, minimal clinical sequelae. Moreover, in some cases the gastric defect progresses and intrinsic factor secretion is affected, thus transforming into classical pernicious anaemia; this is not inevitable, however, and probably occurs in only a minority of patients. The course of food-cobalamin malabsorption is therefore a varied one. Nevertheless, it may be the most common cause of subtle or mild cobalamin deficiency and it is also sometimes associated with severe deficiency. Its identification and treatment need to be considered more widely in the clinical setting.
abstract_id: PUBMED:18990540
Food-cobalamin syndrome Food-cobalamin malabsorption is a new, well-characterized syndrome. Together with pernicious anemia, it is the leading etiology of cobalamin deficiency in adults, especially in elderly patients. Currently, it is a diagnosis of exclusion that requires a well-codified clinical strategy. There are several causes of food-cobalamin malabsorption, mainly gastric disorders and drugs (metformin and anti-acid drugs). The current treatment modality is oral cobalamin administration at lower doses than in pernicious anemia. Studies are under way to better characterize food-cobalamin malabsorption from a clinical practice perspective and to validate the usefulness of oral cobalamin therapy.
abstract_id: PUBMED:38344487
Etiology, Clinical Manifestations, Diagnosis, and Treatment of Cobalamin (Vitamin B12) Deficiency. Cobalamin, also known as vitamin B12, is a water-soluble vitamin. Cobalamin deficiency is frequently seen in people all around the world. It can have non-specific symptoms, and in patients who are in a very critical state, it can lead to neurological or hematological abnormalities. While pernicious anemia used to be the main cause, it now accounts for a smaller number of cases, with food-bound cobalamin malabsorption being more common. Early diagnosis and appropriate management are crucial to avoid severe complications like spinal cord degeneration and pancytopenia. The primary method of treatment has been intramuscular injections of vitamin B12, but oral replacement therapy has also proved very effective. There is increasing evidence linking increased levels of vitamin B12 to hematological and hepatic disorders, particularly cancers. This review primarily highlights the metabolism, clinical manifestations, diagnosis, and treatment of cobalamin deficiency over the past decade.
abstract_id: PUBMED:24365360
Neurologic aspects of cobalamin (B12) deficiency. Optimal functioning of the central and peripheral nervous system is dependent on a constant supply of appropriate nutrients. Particularly important for optimal functioning of the nervous system is cobalamin (vitamin B12). Cobalamin deficiency is particularly common in the elderly and after gastric surgery. Many patients with clinically expressed cobalamin deficiency have intrinsic factor-related malabsorption such as that seen in pernicious anemia. The commonly recognized neurological manifestations of cobalamin deficiency include a myelopathy with or without an associated neuropathy. This review deals with neurological aspects of vitamin B12 deficiency and attempts to highlight recent developments.
abstract_id: PUBMED:12755792
Efficacy of short-term oral cobalamin therapy for the treatment of cobalamin deficiencies related to food-cobalamin malabsorption: a study of 30 patients. Background: It has been suggested that oral cobalamin (vitamin B12) therapy may be effective for treating cobalamin deficiencies related to food-cobalamin malabsorption. However, the duration of this treatment was not determined.
Patients And Method: In an open-label, nonplacebo study, we studied 30 patients with established cobalamin deficiency related to food-cobalamin malabsorption, who received between 250 and 1000 microg of oral crystalline cyanocobalamin per day for at least 1 month.
Endpoints: Blood counts, serum cobalamin and homocysteine levels were determined at baseline and during the first month of treatment.
Results: During the first month of treatment, 87% of the patients normalized their serum cobalamin levels; 100% increased their serum cobalamin levels (mean increase, +167 pg/dl; P < 0.001 compared with baseline); 100% had evidence of medullary regeneration; 100% corrected their initial macrocytosis; and 54% corrected their anemia. All patients had increased hemoglobin levels (mean increase, +0.6 g/dl) and reticulocyte counts (mean increase, +35 x 10(6)/l) and decreased erythrocyte cell volume (mean decrease, 3 fl; all P < 0.05).
Conclusion: Our findings suggest that crystalline cyanocobalamin, 250-1000 microg/day, given orally for 1 month, may be an effective treatment for cobalamin deficiencies not related to pernicious anemia.
abstract_id: PUBMED:9322548
Cobalamin, the stomach, and aging. Low cobalamin concentrations are common in the elderly. Although only a minority of such persons display clinically obvious symptoms or signs, metabolic data clearly show cellular deficiency of cobalamin in most cases. The evidence suggests that this is not a normal physiologic expression of the aging process. Rather, the elderly seem at increased risk for mild, preclinical cobalamin deficiency. Classical disorders such as pernicious anemia are the cause of this deficiency in only a small proportion of the elderly. A more frequent problem is food-cobalamin malabsorption, which usually arises from atrophic gastritis and hypochlorhydria but other mechanisms seem to be involved in some patients. The diminished absorption should not be viewed as a natural consequence of aging. The partial nature of this form of malabsorption produces a more slowly progressive depletion of cobalamin than does the more complete malabsorption engendered by disruption of intrinsic factor-mediated absorption. The slower progression of depletion probably explains why mild, preclinical deficiency is associated with food-cobalamin malabsorption more often than with pernicious anemia. Decisions about the optimal management of the very common problem of mild, preclinical cobalamin deficiency in the elderly await further clarification of the processes and the complex issues involved, including the possibility that routine nitrous oxide use during surgery, proposed dietary changes, and other practices may further stress the marginal cobalamin status of many elderly people.
abstract_id: PUBMED:20577903
Mandatory fortification of the food supply with cobalamin: an idea whose time has not yet come. The success of folic acid fortification has generated consideration of similar fortification with cobalamin for its own sake but more so to mitigate possible neurologic risks from increased folate intake by cobalamin-deficient persons. However, the folate model itself, the success of which was predicted by successful clinical trials and the known favorable facts of high folic acid bioavailability and the infrequency of folate malabsorption, may not apply to cobalamin fortification. Cobalamin bioavailability is more restricted than folic acid and is unfortunately poorest in persons deficient in cobalamin. Moreover, clinical trials to demonstrate actual health benefits of relevant oral doses have not yet been done in persons with mild subclinical deficiency, who are the only practical targets of cobalamin fortification because >94% of persons with clinically overt cobalamin deficiency have severe malabsorption and therefore cannot respond to normal fortification doses. However, it is only in the severely malabsorptive disorders, such as pernicious anemia, not subclinical deficiency, that neurologic deterioration following folic acid therapy has been described to date. It is still unknown whether mild deficiency states, which usually arise from normal absorption or only food-bound cobalamin malabsorption, have real health consequences or how often they progress to overt clinical cobalamin deficiency. Reports of cognitive or other risks in the common subclinical deficiency state, although worrisome, have been inconsistent. Moreover, their observational nature proved neither causative connections nor documented health benefits. Extensive work, especially randomized clinical trials, must be done before mandatory dietary intervention on a national scale can be justified.
abstract_id: PUBMED:20708373
Oral vitamin B12: Efficacy and safety data in 31 patients with pernicious anemia and food-cobalamin malabsorption Objective: The aim of this study is to validate the efficacy and safety of oral cobalamin therapy in the treatment of cobalamin deficiency related to various causes.
Patient And Method: This retrospective study included 31 patients with documented cobalamin deficiency related to food-cobalamin malabsorption (n=20) or pernicious anemia (n=11). These patients were treated for at least 3 months with oral cyanocobalamin at doses of 125 to 1000 microg per day. Serum cobalamin levels and hematological parameters were compared before and after the therapy and in relation with the nature of cobalamin deficiency. Safety data were also recorded.
Results: After 3 months of therapy, serum cobalamin levels had significantly increased in all patients, with a mean increase of +161.6 ± 79.3 pg/mL in the food-cobalamin malabsorption group (P < 0.00005) and +136.7 ± 67.4 pg/mL in the pernicious anemia group (P < 0.0001). Hematological parameters normalized in 90% of the patients, independently of the cause of the cobalamin deficiency. Only 1 patient presented an urticarial reaction.
Conclusion: This study confirms the efficacy and safety of oral cobalamin therapy in food-cobalamin malabsorption as well as in pernicious anemia.
abstract_id: PUBMED:18990719
An update on cobalamin deficiency in adults. Cobalamin (vitamin B12) deficiency is particularly common in the elderly (>65 years of age), but is often unrecognized because of its subtle clinical manifestations, although these can be potentially serious, particularly from a neuropsychiatric and hematological perspective. In the general population, the main causes of cobalamin deficiency are pernicious anemia and food-cobalamin malabsorption. Food-cobalamin malabsorption syndrome, which has only recently been identified, is a disorder characterized by the inability to release cobalamin from food or its binding proteins. This syndrome is usually caused by atrophic gastritis, related or unrelated to Helicobacter pylori infection, and long-term ingestion of antacids and biguanides. Besides these syndromes, mutations in genes encoding endocytic receptors involved in the ileal absorption and cellular uptake of cobalamin have been recently uncovered and explain, at least in part, the hereditary component of megaloblastic anemia. Management of cobalamin deficiency with cobalamin injections is currently well codified, but new routes of cobalamin administration (oral and nasal) are being studied, especially oral cobalamin therapy for food-cobalamin malabsorption.
abstract_id: PUBMED:15289425
Vitamin B12 (cobalamin) deficiency in elderly patients. Vitamin B12 or cobalamin deficiency occurs frequently (> 20%) among elderly people, but it is often unrecognized because the clinical manifestations are subtle; they are also potentially serious, particularly from a neuropsychiatric and hematological perspective. Causes of the deficiency include, most frequently, food-cobalamin malabsorption syndrome (> 60% of all cases), pernicious anemia (15%-20% of all cases), insufficient dietary intake and malabsorption. Food-cobalamin malabsorption, which has only recently been identified as a significant cause of cobalamin deficiency among elderly people, is characterized by the inability to release cobalamin from food or a deficiency of intestinal cobalamin transport proteins or both. We review the epidemiology and causes of cobalamin deficiency in elderly people, with an emphasis on food-cobalamin malabsorption syndrome. We also review diagnostic and management strategies for cobalamin deficiency.
Answer: Yes, there is a link between food cobalamin malabsorption and pernicious anemia. Food-cobalamin malabsorption is characterized by the inability to release cobalamin (vitamin B12) from food, which then cannot be taken up by intrinsic factor for absorption. This condition can sometimes progress to affect intrinsic factor secretion, thereby transforming into classical pernicious anemia, although this transformation is not inevitable and probably occurs in only a minority of patients (PUBMED:8534965). Pernicious anemia is a condition where there is a lack of intrinsic factor, which is necessary for the absorption of cobalamin. While pernicious anemia used to be the main cause of cobalamin deficiency, food-bound cobalamin malabsorption is now more common (PUBMED:38344487). Food-cobalamin malabsorption is often associated with gastric disorders and can be a precursor to pernicious anemia in some cases (PUBMED:18990540). However, it is important to note that not all cases of food-cobalamin malabsorption will progress to pernicious anemia, and the two conditions can also exist independently of each other. |
Instruction: Contraction coupled endothelial nitric oxide release: a new paradigm for local vascular control?
Abstracts:
abstract_id: PUBMED:11516210
Contraction coupled endothelial nitric oxide release: a new paradigm for local vascular control? Introduction: Nitric oxide (NO), a potent vasodilator, is presumed to be constitutively released in most mammalian blood vessels. In isolated rat thoracic aorta, however, hemoglobin (Hb), a nitric oxide scavenger, elicited contraction only when the vessels were precontracted with an alpha adrenergic agonist. Does vascular contraction induce endothelial NO release?
Methods: Thoracic aortic rings from male Sprague-Dawley rats were prepared with or without the endothelium. Vessel rings were contracted with several distinct types of contractile agonists and NO release was probed using a Hb contraction assay in the presence and absence of nitro-l-arginine methyl ester (NAME), a NO synthase inhibitor.
Results: In vessel rings precontracted with norepinephrine, potassium chloride, arginine vasopressin, prostaglandin F(2alpha), or serotonin, Hb elicited significant additional contractions. In contrast, Hb failed to elicit significant contractions in vessel rings without a functional endothelium or in vessels pretreated with NAME. The Hb-mediated additional contraction was not inhibited by calmidazolium, a calmodulin antagonist, or by the protein kinase inhibitors staurosporine and 2,5-dihydromethylcinnamate. The intercellular gap junction inhibitor 2,3-butanedione monoxime at a low dose (<2 mM) significantly attenuated the NE/Hb-mediated contractions but at a high dose (>15 mM) completely prevented both contractions. The contraction-coupled NO release may be mediated through a mechanism distinct from the Ca(2+)-calmodulin-dependent endothelial NOS pathway.
Conclusions: In the isolated rat thoracic aorta, endothelial NO release may be coupled to contractile stimulus. This vascular property appears to render a unique local control mechanism independent of baroreflex and other central mechanisms.
abstract_id: PUBMED:10225308
Vascular smooth muscle contraction is an independent regulator of endothelial nitric oxide production. This investigation was conducted to determine whether endothelial nitric oxide (NO) production is regulated by vascular smooth muscle contraction. Unperfused ring segments of rat aorta and mesenteric artery were studied using isometric tension recording (n = 6-8 in all experiments). Following a reference contraction to K+ 80 mM (100%), arteries were left either unstimulated or stimulated by different concentrations of K+ or prostaglandin F2alpha (PGF2alpha) to induce different levels of vascular precontraction. N(G)-nitro-L-arginine methyl ester (L-NAME 0.1-300 microM) or NS 2028 (0.03-3 microM), which is a new specific inhibitor of the NO-sensitive guanylate cyclase, was then added at increasing concentrations to evaluate endothelial NO production. L-NAME and NS 2028 produced a concentration-dependent vasoconstrictor response which was progressively enhanced with increasing levels of precontraction. For L-NAME, this amounted in aorta to (% of reference contraction): 35+/-1% and 105 +/- 4% (precontraction by K(+) 20 and 30 mM) and 22+/-1%, 89+/-1%, 138+/-1% and 146+/-2% (precontraction by PGF2alpha 0.5, 1, 2 and 3 microM). A similar coupling was found in the mesenteric artery. A precontraction as little as 2% was enough to trigger a vasoconstrictor response to L-NAME. In contrast, L-NAME and NS 2028 had no effect in non-contracted arteries, not even when passive mechanical stretch was increased by 100%. The results suggest (i) that endothelial NO formation is progressively increased with increasing vascular tone, and (ii) that vascular isometric contraction per se stimulates endothelial NO formation. It is concluded, that active vascular smooth muscle contraction is an independent regulator of endothelial NO production.
abstract_id: PUBMED:32172815
Nitric Oxide and Endothelial Dysfunction. Nitric oxide is a strong vasodilatory and anti-inflammatory signaling molecule that plays diverse roles in maintaining vascular homeostasis. Nitric oxide produced by endothelial cells is a critical regulator of this balance, such that endothelial dysfunction is defined as a reduced capacity for nitric oxide production and decreased nitric oxide sensitivity. This ultimately results in an imbalance in vascular homeostasis leading to a prothrombotic, proinflammatory, and less compliant blood vessel wall. Endothelial dysfunction is central in numerous pathophysiologic processes. This article reviews mechanisms governing nitric oxide production and downstream effects, highlighting the role of nitric oxide signaling in organ system pathologies.
abstract_id: PUBMED:26499181
Vascular nitric oxide: Beyond eNOS. As the first discovered gaseous signaling molecule, nitric oxide (NO) affects a number of cellular processes, including those involving vascular cells. This brief review summarizes the contribution of NO to the regulation of vascular tone and its sources in the blood vessel wall. NO regulates the degree of contraction of vascular smooth muscle cells mainly by stimulating soluble guanylyl cyclase (sGC) to produce cyclic guanosine monophosphate (cGMP), although cGMP-independent signaling [S-nitrosylation of target proteins, activation of sarco/endoplasmic reticulum calcium ATPase (SERCA) or production of cyclic inosine monophosphate (cIMP)] also can be involved. In the blood vessel wall, NO is produced mainly from l-arginine by the enzyme endothelial nitric oxide synthase (eNOS) but it can also be released non-enzymatically from S-nitrosothiols or from nitrate/nitrite. Dysfunction in the production and/or the bioavailability of NO characterizes endothelial dysfunction, which is associated with cardiovascular diseases such as hypertension and atherosclerosis.
abstract_id: PUBMED:7947359
Endothelial control of vascular and myocardial function in heart failure. The effect of vascular endothelium, endocardium, and coronary endothelium on vascular tone and myocardial contraction-relaxation sequence in heart failure is discussed. Vascular endothelium affects underlying vascular smooth muscle through paracrine secretion of relaxing and constricting factors. In heart failure, systemic vasoconstriction results not only from neuroendocrine activation, but also from disturbed local endothelial control of vascular tone because of impaired endothelial-dependent vasodilation and because of increased plasma concentration of endothelin. Experimental evidence obtained in isolated cardiac muscle strips established the influence of endocardial endothelium on the duration of myocardial contraction and on the onset of myocardial relaxation. By analogy to vascular endothelium, both diffusible agents that abbreviate (endothelial-derived relaxation factor-like substance) and those that prolong (endocardin) myocardial contraction have been shown to be released from the endocardium. Similar agents are released from the coronary endothelium and, because of the close proximity of capillaries and myocytes, could exert a major effect on myocardial performance. Endothelial dysfunction and concomitant lack of release of myocardial relaxant factors could explain left ventricular relaxation abnormalities observed in the cardiac allograft or in arterial hypertension. Since endothelial-derived relaxation factor or nitric oxide mediates the coronary reactive hyperemic response, a negative inotropic action of nitric oxide could contribute to left ventricular failure when left ventricular wall stress is elevated, as occurs after myocardial infarction in the noninfarcted zone and during left ventricular volume or pressure overload in the absence of adequate hypertrophy.
abstract_id: PUBMED:22702717
Mepivacaine-induced contraction is attenuated by endothelial nitric oxide release in isolated rat aorta. Mepivacaine is an aminoamide-linked local anesthetic with an intermediate duration that intrinsically produces vasoconstriction both in vivo and in vitro. The aims of this in-vitro study were to examine the direct effect of mepivacaine in isolated rat aortic rings and to determine the associated cellular mechanism with a particular focus on endothelium-derived vasodilators, which modulate vascular tone. In the aortic rings with or without endothelium, cumulative mepivacaine concentration-response curves were generated in the presence or absence of the following antagonists: N(ω)-nitro-L-arginine methyl ester [L-NAME], indomethacin, fluconazole, methylene blue, 1H-[1,2,4]oxadiazolo[4,3-a]quinoxalin-1-one [ODQ], verapamil, and calcium-free Krebs solution. Mepivacaine produced vasoconstriction at low concentrations (1 × 10(-3) and 3 × 10(-3) mol/L) followed by vasodilation at a high concentration (1 × 10(-2) mol/L). The mepivacaine-induced contraction was higher in endothelium-denuded aortae than in endothelium-intact aortae. Pretreatment with L-NAME, ODQ, and methylene blue enhanced mepivacaine-induced contraction in the endothelium-intact rings, whereas fluconazole had no effect. Indomethacin slightly attenuated mepivacaine-induced contraction, whereas verapamil and calcium-free Krebs solution more strongly attenuated this contraction. The vasoconstriction induced by mepivacaine is attenuated mainly by the endothelial nitric oxide - cyclic guanosine monophosphate pathway. In addition, mepivacaine-induced contraction involves cyclooxygenase pathway activation and extracellular calcium influx via voltage-operated calcium channels.
abstract_id: PUBMED:28942512
Altered Endothelial Nitric Oxide Signaling as a Paradigm for Maternal Vascular Maladaptation in Preeclampsia. Purpose Of Review: The goal of this review is to present the newest insights into what we view as a central failure of cardiovascular adaptation in preeclampsia (PE) by focusing on one clinically significant manifestation of maternal endothelial dysfunction: nitric oxide signaling. The etiology, symptoms, and current theories of the PE syndrome are described first, followed by a review of the available evidence, and underlying causes of reduced endothelial nitric oxide (NO) signaling in PE.
Recent Findings: PE maladaptations include, but are not limited to, altered physiological stimulatory inputs (e.g., estrogen; VEGF/PlGF; shear stress) and substrates (L-Arg; ADMA), augmented placental secretion of anti-angiogenic and inflammatory factors such as sFlt-1 and Eng, changes in eNOS (polymorphisms, expression), and reduced bioavailability of NO secondary to oxidative stress. PE is a complex obstetrical syndrome that is associated with maternal vascular dysfunction. Diminished peripheral endothelial vasodilator influence in general, and of NO signaling specifically, are key in driving disease progression and severity.
abstract_id: PUBMED:16594903
Endothelial mechanotransduction, nitric oxide and vascular inflammation. Numerous aspects of vascular homeostasis are modulated by nitric oxide and reactive oxygen species (ROS). The production of these is dramatically influenced by mechanical forces imposed on the endothelium and vascular smooth muscle. In this review, we will discuss the effects of mechanical forces on the expression of the endothelial cell nitric oxide synthase, production of ROS and modulation of endothelial cell glutathione. We will also review data that exercise training in vivo has a similar effect as laminar shear on endothelial function and discuss the clinical relevance of these basic findings.
abstract_id: PUBMED:31929735
Nitric oxide-mediated inhibition of phenylephrine-induced contraction in response to hypothermia is partially modulated by endothelial Rho-kinase. This study examined the possible upstream cellular signaling pathway associated with nitric oxide (NO)-mediated inhibition of phenylephrine-induced contraction in isolated rat aortae in response to mild hypothermia, with a particular focus on endothelial Rho-kinase. We examined the effects of mild hypothermia (33°C), wortmannin, Nω-nitro-L-arginine methyl ester (L-NAME), Y-27632, 1H-[1,2,4]oxadiazolo[4,3-a]quinoxalin-1-one (ODQ) and methylene blue, alone and combined, on phenylephrine-induced contraction in isolated rat aortae. Finally, we examined the effects of mild hypothermia, wortmannin, Y-27632 and L-NAME, alone and combined, on endothelial nitric oxide synthase (eNOS) and endothelial Rho-kinase membrane translocation induced by phenylephrine. Mild hypothermia attenuated phenylephrine-induced contraction only in endothelium-intact aortae. L-NAME, wortmannin, ODQ and methylene blue increased phenylephrine-induced contraction of endothelium-intact aortae pretreated at 33°C. Wortmannin did not significantly alter the L-NAME-induced enhancement of phenylephrine-induced maximal contraction of endothelium-intact aortae pretreated at 33°C. Wortmannin abolished the ability of Y-27632 to magnify the hypothermic inhibition of maximal phenylephrine-induced contraction. Wortmannin and L-NAME inhibited the enhancing effect of mild hypothermia on phenylephrine-induced eNOS phosphorylation. Y-27632 and L-NAME attenuated the enhancing effect of hypothermia on phenylephrine-induced endothelial Rho-kinase membrane translocation. The results suggest that hypothermia-induced, NO-dependent inhibition of phenylephrine-induced contraction is mediated by phosphoinositide 3-kinase and inhibited by endothelial Rho-kinase activation.
abstract_id: PUBMED:16458216
The effect of cardiopulmonary bypass on the expression of inducible nitric oxide synthase, endothelial nitric oxide synthase, and vascular endothelial growth factor in the internal mammary artery. Objective: Endothelial function of a vessel may be impaired by local or systemic inflammation initiated by cardiopulmonary bypass (CPB) during coronary artery bypass graft (CABG) surgery. The present study was designed to investigate the early effects of CPB on nitric oxide production and vascular endothelial growth factor (VEGF) expression in internal mammary artery (IMA).
Design: Prospective study.
Setting: University hospital.
Participants: Twenty patients who were scheduled for elective CABG with CPB.
Interventions: IMA sections were studied immunohistochemically from these patients. The samples were taken from the distal end of the IMA before the institution of CPB and just before the construction of the IMA-left anterior descending artery anastomosis.
Measurement And Main Results: After CPB, VEGF and endothelial nitric oxide synthase immunoreactivity increased significantly when compared with baseline values in the endothelium (p = 0.0156, p = 0.0313) and adventitia (p = 0.0313, p = 0.0001), respectively. No significant change was observed in inducible nitric oxide synthase immunoreactivity.
Conclusions: The increase in eNOS expression may have been induced by the inflammation caused by CPB.
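As an illustrative aside, the pre/post-CPB comparisons above are paired (samples taken from the same patients before and after bypass), and paired semiquantitative immunoreactivity scores are commonly analyzed with a nonparametric signed-rank test; the abstract does not name the authors' exact method. The sketch below assumes Python with SciPy, and the scores are invented placeholders, not the study's measurements.

```python
# Minimal sketch: paired nonparametric comparison of staining scores
# before and after CPB. Scores are illustrative placeholders (0-3 scale).
from scipy.stats import wilcoxon

pre_cpb  = [1, 1, 0, 2, 1, 1, 0, 1, 2, 1, 0, 1]
post_cpb = [2, 3, 1, 3, 2, 1, 2, 2, 3, 2, 1, 3]

stat, p_value = wilcoxon(pre_cpb, post_cpb)   # paired, two-sided by default
print(f"Wilcoxon signed-rank: statistic = {stat}, p = {p_value:.4f}")
```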
Answer: The concept of contraction-coupled endothelial nitric oxide (NO) release as a new paradigm for local vascular control is supported by several studies that have investigated the relationship between vascular smooth muscle contraction and endothelial NO production.
In isolated rat thoracic aorta, it was observed that hemoglobin, a nitric oxide scavenger, elicited contraction only when the vessels were precontracted with an alpha-adrenergic agonist, suggesting that vascular contraction may induce endothelial NO release. This was further supported by experiments showing that in vessel rings precontracted with various contractile agonists, hemoglobin elicited significant additional contractions, which were not observed in vessels without functional endothelium or in vessels pretreated with Nω-nitro-L-arginine methyl ester (L-NAME), a NO synthase inhibitor. These findings indicate that endothelial NO release may be coupled to the contractile stimulus, providing a unique local control mechanism independent of central mechanisms such as the baroreflex (PUBMED:11516210).
Another study found that endothelial NO formation is progressively increased with increasing vascular tone, and that active vascular smooth muscle contraction is an independent regulator of endothelial NO production. This was demonstrated by the observation that L-NAME and NS 2028, inhibitors of NO production and signaling, produced a concentration-dependent vasoconstrictor response that was enhanced with increasing levels of precontraction (PUBMED:10225308).
The role of NO in maintaining vascular homeostasis is well-established, with endothelial dysfunction, characterized by reduced NO production and sensitivity, leading to an imbalance in vascular homeostasis and contributing to various pathophysiologic processes (PUBMED:32172815). NO regulates vascular tone mainly by stimulating soluble guanylyl cyclase (sGC) to produce cyclic guanosine monophosphate (cGMP), and its production and bioavailability are critical for preventing endothelial dysfunction associated with cardiovascular diseases (PUBMED:26499181).
In summary, the evidence suggests that contraction-coupled endothelial NO release is indeed a paradigm for local vascular control, with vascular smooth muscle contraction independently regulating endothelial NO production, thereby influencing vascular tone and homeostasis. |
Instruction: Does the CONSORT checklist for abstracts improve the quality of reports of randomized controlled trials on clinical pathways?
Abstracts:
abstract_id: PUBMED:24916891
Does the CONSORT checklist for abstracts improve the quality of reports of randomized controlled trials on clinical pathways? Rationale Aims And Objectives: The extension of the Consolidated Standards of Reporting Trials (CONSORT) statement provides reporting guidelines to improve the reporting quality of randomized controlled trials (RCTs). The present study aimed to assess the reporting quality of abstracts of RCTs on clinical pathways.
Methods: Eight databases were searched from inception to November 2012 to identify RCTs. We extracted basic information and CONSORT items from abstracts. Each abstract was assessed independently by two reviewers. Statistical analyses were performed with SPSS 13.0. Level of significance was set at P < 0.05.
Results: 328 abstracts were included. 300 (91.5%) were published in Chinese, of which 292 appeared in high-impact-factor journals; the 28 English abstracts were all published in Science Citation Index (SCI) journals. (1) Intervention, objective and outcome were almost fully reported in all abstracts, while recruitment and funding were never reported. (2) Nine items were reported with lower quality in Chinese abstracts than in English abstracts (P < 0.05), and the total score differed significantly between Chinese and English abstracts (P < 0.00001). (3) There was no difference in any item between high- and low-impact-factor journals in China. (4) In SCI journals, there were significant changes in reporting for three items, trial design (P = 0.026), harms (P = 0.039) and trial registration (P = 0.019), between the pre- and post-CONSORT periods, whereas only the number randomized (P = 0.003) changed in Chinese abstracts.
Conclusions: The reporting quality of abstracts of RCTs on clinical pathways still needs to be improved. After publication of the CONSORT for abstracts guideline, the reporting quality of RCT abstracts improved to some extent; however, the abstracts in Chinese journals showed non-adherence to the CONSORT for abstracts guideline.
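As an illustrative aside, the per-item comparisons described above (for example, nine items reported with lower quality in Chinese than in English abstracts) amount to chi-square tests on 2x2 tables of reported versus not reported by language. The sketch below assumes Python with SciPy; the counts are invented placeholders that only mirror the group sizes (300 Chinese, 28 English), not the study's item-level data.

```python
# Minimal sketch: compare adequate reporting of one checklist item
# between Chinese and English abstracts. Counts are illustrative only.
from scipy.stats import chi2_contingency

#                reported  not reported
table = [[60, 240],   # Chinese abstracts (n = 300, placeholder split)
         [15,  13]]   # English abstracts (n = 28, placeholder split)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
```

With cell counts as small as those on the English side, Fisher's exact test (scipy.stats.fisher_exact) would be a reasonable alternative.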
abstract_id: PUBMED:31744527
Comments on "Reporting quality of randomized controlled trial abstracts in the seven highest-ranking anesthesiology journals". Randomized controlled trials are considered the gold standard in assessing treatment regimens, and since abstracts may be the only part of a paper that a physician reads, accurate reporting of data in abstracts is essential. The CONSORT checklist for abstracts was designed to standardize data reporting; however, for papers submitted to anesthesiology journals, the level of adherence to the CONSORT checklist for abstracts is unknown. Therefore, we commend Janackovic and Puljak for their efforts in determining the adherence of reports of trials in the highest-impact anesthesiology journals between 2014 and 2016. The results of their study are extremely important; however, we believe that their study had some methodological limitations, which we discuss in this manuscript.
abstract_id: PUBMED:28109324
Reporting of critical care trial abstracts: a comparison before and after the announcement of CONSORT guideline for abstracts. Background: An extension of the Consolidated Standards of Reporting Trials (CONSORT) statement provides a checklist of items to improve the reporting quality of abstracts of randomized controlled trials (RCTs). However, authors of abstracts in some fields have poorly adhered to this guideline. We did an extensive literature survey to examine the quality of reporting trial abstracts in major critical care journals before and after announcement of the CONSORT guideline for abstracts.
Methods: We reviewed abstracts of RCTs published in four major critical care journals with publication dates ranging from 2006 to 2007 (pre-CONSORT) and from 2011 to 2012 (post-CONSORT): Intensive Care Medicine (ICM), Critical Care (CC), American Journal of Respiratory and Critical Care Medicine (AJRCCM), and Critical Care Medicine (CCM). For each item in the CONSORT guideline for abstracts, we considered that an abstract was well-reported when it reported a relevant item and adhered to the guideline. Our primary outcomes were to describe the proportion of abstracts that adhered to the guideline for each item in each period and the changes between the two periods. Pearson's chi-square analysis was performed to compare adherence to the guideline between the two periods.
Results: Our inclusion criteria yielded 185 and 166 abstracts from pre- and post-CONSORT periods, respectively. Less than 50% of abstracts adequately reported trial design (16.3%), participants (44.0%), outcomes in methods (49.4%), randomization (1.8%), blinding (4.2%), numbers randomized (37.4%) and analyzed (8.4%), recruitment (4.2%), outcomes in results (16.9%), harms (27.7%), trial registration (42.2%), and funding (13.9%) in the recent period. There was significant improvement in reporting title, primary outcomes in both methods and results, interventions, harms, trial registration, and funding between the two periods (p < 0.05). Improvements were seen in reporting of participants in the Methods sections in CCM, as well as in outcomes in results and trial registration in AJRCCM and CCM, between the two periods. A significant decline was noted in reporting of interventions in Methods sections in AJRCCM and ICM, as well as the numbers randomized in Results sections in CC, over time.
Conclusions: Reporting of some items in abstracts for critical care trials improved over time, but the adherence to the CONSORT guideline for abstracts was still suboptimal.
abstract_id: PUBMED:35314346
A systematic review of the quality of abstracts reporting on randomized controlled trials presented at major international cardiothoracic conferences. Conference proceedings are widely available and may represent the only report of a given piece of research. Poor reporting of randomized controlled trials (RCTs) in conference abstracts may impede interpretability. In 2008, the Consolidated Standards of Reporting Trials group published minimum standards for RCT reporting in conference abstracts (CONSORT-A). We sought to evaluate the reporting quality of abstracts presented at major international cardiothoracic conferences. Abstracts were retrieved for the annual meetings of 5 cardiothoracic societies over 3 consecutive years (2016 to 2018). After screening, those reporting on RCTs were scored by 2 independent reviewers against the 17-item CONSORT-A checklist. The primary endpoint was the total number of checklist criteria reported in individual abstracts. Statistical analysis was performed using Stata IC v16. Of 3233 screened abstracts, 100 (3.1%) reported on RCTs. Average checklist adherence was 35% (median 6/17 items, IQR 2-15) across abstracts. Author contact (n = 0), funding disclosures (n = 3, 2.9%) and randomization methodology (n = 5, 4.8%) were the least frequently reported. There was no statistically significant difference in reporting quality between conferences (p = .07) or years (p = .06). Trial registration, word count (>300), multicentre trial design and mention of CONSORT in the abstract were associated with higher reporting quality. Reporting quality was not associated with successful full-length publication within 2 years (p = .33). The reporting quality of abstracts of RCTs presented at international cardiothoracic conferences is poor when benchmarked against the CONSORT-A standards. This highlights an area for targeted improvement.
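As an illustrative aside, the scoring step described above, in which each abstract is checked against the 17-item CONSORT-A checklist and summarized as a total score and an adherence percentage, can be expressed compactly. The sketch below assumes Python with NumPy; the item matrix is randomly generated for illustration and is not the conference dataset.

```python
# Minimal sketch: derive per-abstract scores from a boolean 17-item matrix.
import numpy as np

# rows = abstracts, columns = the 17 CONSORT-A items (1 = adequately reported)
rng = np.random.default_rng(0)
items = rng.integers(0, 2, size=(10, 17))

scores = items.sum(axis=1)                    # checklist items reported per abstract
adherence_pct = scores / items.shape[1] * 100

print("median score:", np.median(scores), "IQR:", np.percentile(scores, [25, 75]))
print("mean adherence: %.1f%%" % adherence_pct.mean())
print("per-item reporting rate:", items.mean(axis=0).round(2))
```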
abstract_id: PUBMED:29742449
Quality improvement in randomized controlled trial abstracts in prosthodontics since the publication of CONSORT guideline for abstracts: a systematic review. Objectives: This study aimed to compare the reporting quality of randomized controlled trial (RCT) abstracts in prosthodontics before and after the publication of Consolidated Standards of Reporting Trials (CONSORT) guideline for abstracts and identify the characteristics associated with better reporting quality.
Sources: PubMed was searched for RCT abstracts published from 2001 to 2007 (pre-CONSORT period) and from 2010 to 2016 (post-CONSORT period) in six leading prosthodontic journals.
Study Selection: After applying the inclusion/exclusion criteria, 131 RCT abstracts were selected. The t test was performed to compare the overall quality between the two periods. Univariable and multivariable linear regressions were used to identify any factors relating to the reporting quality. The level of significance was set at P < 0.05.
Data: The investigators extracted data and scored the abstracts independently based on CONSORT. The mean overall CONSORT score was 5.20 and 6.11 in the pre- and post-CONSORT samples, respectively. Significant changes were observed in reporting for only three items: title, conclusions, and trial registration. Most abstracts adequately reported interventions, objectives, and conclusions (>90%), but failed to report recruitment and outcome in the results section (<3%). Funding was not reported in both periods. The reporting quality was related to a higher impact factor, structured format, and published after CONSORT.
Conclusions: The quality of RCT abstracts in prosthodontics improved over time, but adherence to the CONSORT guideline for abstracts was still suboptimal.
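As an illustrative aside, the pre- versus post-CONSORT comparison of mean overall scores described above corresponds to a two-sample t test. The sketch below assumes Python with SciPy; the score lists are invented placeholders loosely centered near the reported means (5.20 and 6.11), not the study data.

```python
# Minimal sketch: compare mean overall CONSORT scores between two periods.
from scipy.stats import ttest_ind

pre_scores  = [4.5, 5.0, 5.5, 6.0, 4.0, 5.5, 5.0, 6.5, 4.5, 5.5]   # pre-CONSORT period (placeholder)
post_scores = [6.0, 6.5, 5.5, 7.0, 6.0, 5.0, 7.5, 6.0, 6.5, 5.5]   # post-CONSORT period (placeholder)

t_stat, p_value = ttest_ind(pre_scores, post_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```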
abstract_id: PUBMED:34999245
Evaluation of reporting quality of abstracts of randomized controlled trials regarding patients with COVID-19 using the CONSORT statement for abstracts. Objective: To evaluate the reporting quality of randomized controlled trial (RCT) abstracts regarding patients with coronavirus disease 2019 (COVID-19) and to analyze the factors influencing the quality.
Methods: The PubMed, Embase, Web of Science, and Cochrane Library databases were searched to collect RCTs on patients with COVID-19. The retrieval time was from inception to December 1, 2020. The CONSORT statement for abstracts was used to evaluate the reporting quality of RCT abstracts.
Results: A total of 53 RCT abstracts were included. The CONSORT statement for abstracts showed that the average reporting rate of all items was 50.2%. The items with a lower reporting quality were mainly the trial design and the details of randomization and blinding (<10%). The mean overall adherence score across all studies was 8.68 ± 2.69 (range 4-13.5). Multivariate linear regression analysis showed that the higher reporting scores were associated with higher journal impact factor (P < 0.01), international collaboration (P = 0.04), and structured abstract format (P < 0.01).
Conclusions: Although many RCTs on patients with COVID-19 have been published in different journals, the overall quality of reporting in the included RCT abstracts was suboptimal, thus diminishing their potential usefulness, and this may mislead clinical decision-making. In order to improve the reporting quality, it is necessary to promote and actively apply the CONSORT statement for abstracts.
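As an illustrative aside, the multivariate analysis described above, relating the overall adherence score to journal impact factor, international collaboration, and structured abstract format, corresponds to an ordinary least squares regression with those predictors. The sketch below assumes Python with pandas and statsmodels; the data frame is an invented placeholder, not the COVID-19 abstract dataset.

```python
# Minimal sketch: multivariable linear regression of adherence score on
# study-level predictors. All values are illustrative placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "score":         [6, 9, 12, 8, 11, 7, 13, 10, 9, 12],               # overall adherence score
    "impact_factor": [2.1, 5.4, 12.0, 3.3, 8.8, 2.9, 20.5, 6.1, 4.4, 10.2],
    "international": [0, 1, 1, 0, 1, 0, 1, 0, 0, 1],                    # 1 = international collaboration
    "structured":    [1, 1, 1, 0, 1, 0, 1, 1, 0, 1],                    # 1 = structured abstract format
})

model = smf.ols("score ~ impact_factor + international + structured", data=df).fit()
print(model.summary())
```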
abstract_id: PUBMED:31791393
Completeness of reporting in abstracts of randomized controlled trials in subscription and open access journals: cross-sectional study. Background: Open access (OA) journals are becoming a publication standard for health research, but it is not clear how they differ from traditional subscription journals in the quality of research reporting. We assessed the completeness of results reporting in abstracts of randomized controlled trials (RCTs) published in these journals.
Methods: We used the Consolidated Standards of Reporting Trials Checklist for Abstracts (CONSORT-A) to assess the completeness of reporting in abstracts of parallel-design RCTs published in subscription journals (n = 149; New England Journal of Medicine, Journal of the American Medical Association, Annals of Internal Medicine, and Lancet) and OA journals (n = 119; BioMedCentral series, PLoS journals) in 2016 and 2017.
Results: Abstracts in subscription journals completely reported 79% (95% confidence interval [CI], 77-81%) of 16 CONSORT-A items, compared with 65% (95% CI, 63-67%) of these items in abstracts from OA journals (P < 0.001, chi-square test). The median number of completely reported CONSORT-A items was 13 (95% CI, 12-13) in subscription journal articles and 11 (95% CI, 10-11) in OA journal articles. Subscription journal articles had significantly more complete reporting than OA journal articles for nine CONSORT-A items and did not differ in reporting for items trial design, outcome, randomization, blinding (masking), recruitment, and conclusions. OA journals were better than subscription journals in reporting randomized study design in the title.
Conclusion: Abstracts of randomized controlled trials published in subscription medical journals have greater completeness of reporting than abstracts published in OA journals. OA journals should take appropriate measures to ensure that published articles contain adequate detail to facilitate understanding and quality appraisal of research reports about RCTs.
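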
abstract_id: PUBMED:27365107
Reporting quality of randomized controlled trial abstracts published in leading laser medicine journals: an assessment using the CONSORT for abstracts guidelines. The objectives of this study were to assess the reporting quality of randomized controlled trial (RCT) abstracts published in leading laser medicine journals and investigate the association between potential predictors and reporting quality. The official online archives of four leading laser medicine journals were hand-searched to identify RCTs published in 2014 and 2015. A reporting quality assessment was carried out using the original 16-item CONsolidated Standards Of Reporting Trials (CONSORT) for Abstracts checklist. For each abstract, an overall CONSORT score (OCS) was calculated (score range, 0 to 16). Univariable and multivariable linear regression analyses were performed to identify significant predictors of reporting quality. Chi-square (or Fisher's exact) tests were used to analyze the adequate reporting rate of each quality item by specialty area. A total of 129 RCT abstracts were included and assessed. The mean OCS was 4.5 (standard deviation, 1.3). Only three quality items (interventions, objective, conclusions) were reported adequately in most abstracts (>80 %). No abstract adequately reported results for the primary outcome, source of funding, and status of the trial. In addition, sufficient reporting of participants, outcome in the methods section, randomization, and trial registration was rare (<5 %). According to multivariable linear regression analysis, the specialty area of RCT abstracts was significantly associated with their reporting quality (P = 0.008). The reporting quality of RCT abstracts published in leading laser medicine journals is suboptimal. Joint efforts by authors, editors, and other stakeholders in the field to improve trial abstract reporting are needed.
abstract_id: PUBMED:24861557
Improvement in the quality of abstracts in major clinical journals since CONSORT extension for abstracts: a systematic review. Background: We sought to determine if the publication of the Consolidated Standards of Reporting Trials (CONSORT)(1) extension for abstracts in 2008 had led to an improvement in reporting abstracts of randomized controlled trials (RCTs).(2) METHODS: We searched PubMed for RCTs published in 2007 and 2012 in top-tier general medicine journals. A random selection of 100 trial abstracts was obtained for each year. Data were extracted in duplicate on the adherence to the CONSORT extension for abstracts. The primary outcome was the mean number of items reported and the secondary outcome was the odds of reporting each item. We also estimated incidence rate ratios (IRRs).(3) RESULTS: Significantly more checklist items were reported in 2012 than in 2007: adjusted mean difference was 2.91 (95% confidence interval [CI](4) 2.35, 3.41; p<0.001). In 2012 there were significant improvements in reporting the study as randomized in the title, describing the trial design, the participants, and objectives and blinding. In the Results section, trial status and numbers analyzed were also reported better. The IRRs were significantly higher for 2012 (IRR 1.32; 95% CI 1.25, 1.39; p<0.001) and in multisite studies compared to single site studies (IRR 1.08; 95% CI 1.03, 1.15; p=0.006).
Conclusions: There was a significant improvement in the reporting of abstracts of RCTs in 2012 compared to 2007. However, there is still room for improvement as some items remain under-reported.
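As an illustrative aside, the incidence rate ratios (IRRs) reported above for the number of checklist items per abstract are the kind of estimate produced by Poisson regression on item counts. The sketch below assumes Python with pandas, NumPy and statsmodels; the counts are invented placeholders, and the original analysis may have adjusted for covariates (for example, multisite status) not included here.

```python
# Minimal sketch: estimate an IRR for items reported per abstract,
# comparing 2012 with 2007. Counts are illustrative placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "items_reported": [5, 6, 7, 5, 8, 6, 9, 10, 8, 11, 9, 12],  # checklist items per abstract
    "year_2012":      [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1],     # 0 = 2007, 1 = 2012
})

fit = smf.poisson("items_reported ~ year_2012", data=df).fit(disp=False)
irr = np.exp(fit.params["year_2012"])
ci_low, ci_high = np.exp(fit.conf_int().loc["year_2012"])
print(f"IRR (2012 vs 2007) = {irr:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")
```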
abstract_id: PUBMED:16464423
Randomized clinical trials (CONSORT) It has been repeatedly shown that the information supplied in publications of clinical trials is frequently insufficient or inaccurate and that some methodologic problems are associated with exaggerated estimates of the effect of healthcare interventions. To improve the quality of reports of clinical trials, a group of scientists and editors developed the CONSORT statement (Consolidated Standards of Reporting Trials), a 22-item checklist (plus a flow diagram) that can be used by authors, editors, reviewers, and readers. After publication in 1996, CONSORT was adopted by several journals and editorial groups. In 1999, a second version was drawn up, which was published in 2001. This article presents the Spanish translation of the two elements that make up the revised CONSORT, the flow diagram and the 22-item checklist, and provides a short comment on each of them. Previous publications of the CONSORT statement and other useful resources, such as examples of what are considered well-reported trials, may be obtained from the CONSORT website (http://www.consort-statement.org).
Answer: Yes, the available evidence suggests that the CONSORT checklist for abstracts improves the quality of reports of randomized controlled trials (RCTs) on clinical pathways, although the effect is partial. The study of clinical pathway RCT abstracts (PUBMED:24916891) found that after publication of the CONSORT for abstracts guideline, the reporting quality of RCT abstracts improved to some extent. However, the study also noted that abstracts in Chinese journals showed non-adherence to the CONSORT for abstracts guideline, indicating that while there was improvement, there is still considerable room for better adherence.
Other studies have also assessed the impact of the CONSORT checklist for abstracts on the reporting quality of RCTs in various fields. For instance, a commentary (PUBMED:31744527) on the study by Janackovic and Puljak, who determined the adherence of trial reports in the highest-impact anesthesiology journals between 2014 and 2016, underscores the importance of accurate reporting in abstracts. Similarly, a survey of four major critical care journals (PUBMED:28109324) found that reporting of some items in abstracts of critical care trials improved over time, but adherence to the CONSORT guideline for abstracts was still suboptimal.
Furthermore, a systematic review of abstracts presented at international cardiothoracic conferences (PUBMED:35314346) highlighted their poor reporting quality when benchmarked against the CONSORT-A standards, indicating a need for targeted improvement. Another systematic review, in prosthodontics (PUBMED:29742449), showed that the quality of RCT abstracts improved over time, but adherence to the CONSORT guideline for abstracts remained suboptimal.
Additionally, an evaluation of RCT abstracts regarding patients with COVID-19 (PUBMED:34999245) found that the overall quality of reporting was suboptimal, and a cross-sectional study (PUBMED:31791393) reported that abstracts of RCTs published in subscription medical journals had greater completeness of reporting than those published in open access (OA) journals.
In summary, while the CONSORT checklist for abstracts has been shown to improve the reporting quality of RCT abstracts to some extent, there is evidence that adherence to the guidelines is inconsistent across different fields and types of publications, and further efforts are needed to ensure better compliance and reporting quality. |
Instruction: Circulating cytokine profile in anti-neutrophilic cytoplasmatic autoantibody-associated vasculitis: prediction of outcome?
Abstracts:
abstract_id: PUBMED:15545059
Circulating cytokine profile in anti-neutrophilic cytoplasmatic autoantibody-associated vasculitis: prediction of outcome? Aims: The anti-neutrophilic cytoplasmatic autoantibody-associated vasculitides (AASV) are diseases of relapsing-remitting inflammation. Here we explore the cytokine profile in different phases of disease, looking for pathogenic clues of possible prognostic value.
Results: Interleukin (IL)-6, IL-8 and IL-10 were significantly elevated in plasma. Patients in the stable phase who subsequently developed adverse events had higher IL-8 values. Patients in the stable phase who relapsed within 3 months had lower IL-10 values and higher IL-6 levels.
Conclusions: Patients with AASV have raised circulating cytokine levels compared with healthy controls, even during remission. Raised IL-8 seems associated with poor prognosis. Lower levels of IL-10 and higher levels of IL-6 herald a greater risk of relapse. Patients with systemic vasculitis in clinical remission have persistent disease activity, kept under control by inhibitory cytokines.
abstract_id: PUBMED:29653210
Neutrophilic dermatoses: Pathogenesis, Sweet syndrome, neutrophilic eccrine hidradenitis, and Behçet disease. Neutrophilic dermatoses are a heterogeneous group of inflammatory skin disorders that present with unique clinical features but are unified by the presence of a sterile, predominantly neutrophilic infiltrate on histopathology. The morphology of cutaneous lesions associated with these disorders is heterogeneous, which renders diagnosis challenging. Moreover, a thorough evaluation is required to exclude diseases that mimic these disorders and to diagnose potential associated infectious, inflammatory, and neoplastic processes. While some neutrophilic dermatoses may resolve spontaneously, most require treatment to achieve remission. Delays in diagnosis and treatment can lead to significant patient morbidity and even mortality. Therapeutic modalities range from systemic corticosteroids to novel biologic agents, and the treatment literature is rapidly expanding. The first article in this continuing medical education series explores the pathogenesis of neutrophilic dermatoses and reviews the epidemiology, clinical and histopathologic features, diagnosis, and management of Sweet syndrome, neutrophilic eccrine hidradenitis, and Behçet disease.
abstract_id: PUBMED:24319012
A novel autoantibody against moesin in the serum of patients with MPO-ANCA-associated vasculitis. Background: Antineutrophil cytoplasmic autoantibody (ANCA) directed against myeloperoxidase (MPO), a diagnostic criterion in MPO-ANCA-associated vasculitis (MPO-AAV), does not always correlate with disease activity. Here, we detected autoantibodies against moesin, which was located on the surface of stimulated endothelial cells, in the serum of patients.
Methods: The anti-moesin autoantibody titer was evaluated by ELISA. Seventeen kinds of cytokines/chemokines were measured by a Bio-Plex system.
Results: Serum creatinine in the anti-moesin autoantibody-positive group was higher than that in the negative group. Additionally, interferon (IFN)-γ, macrophage chemotactic peptide-1 (MCP-1), interleukin (IL)-2, IL-7, IL-12p70, IL-13, granulocyte/macrophage colony-stimulating factor (GM-CSF) and granulocyte colony-stimulating factor were significantly higher in the positive group. Furthermore, IL-7 and IL-12p70 levels correlated with the anti-moesin autoantibody titer. Based on these findings and the binding of anti-moesin IgG to neutrophils and monocytes, we detected the secretion of cytokines/chemokines such as IFN-γ, MCP-1 and GM-CSF from these cells.
Conclusions: The anti-moesin autoantibody was present in the serum of patients with MPO-AAV and was associated with the production of inflammatory cytokines/chemokines targeting neutrophils with a cytoplasmic profile, suggesting that it may be a novel autoantibody that promotes the development of vasculitis via neutrophil and endothelial cell activation.
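As an illustrative aside, two of the analyses described above, the correlation of cytokine levels (IL-7, IL-12p70) with the anti-moesin autoantibody titer and the comparison of serum creatinine between antibody-positive and antibody-negative patients, map onto a rank correlation and a two-group nonparametric test. The sketch below assumes Python with SciPy; all values are invented placeholders, and the abstract does not state which correlation coefficient or group test the authors used.

```python
# Minimal sketch: correlation of a cytokine with antibody titer, and a
# two-group comparison of creatinine. All numbers are illustrative only.
from scipy.stats import spearmanr, mannwhitneyu

titer = [0.2, 0.5, 0.8, 1.1, 1.6, 2.0, 2.4, 3.0]      # anti-moesin ELISA titer (arbitrary units)
il7   = [3.1, 4.0, 4.8, 6.2, 7.5, 8.1, 9.4, 11.0]     # IL-7, pg/mL

rho, p_corr = spearmanr(titer, il7)
print(f"Spearman rho = {rho:.2f}, p = {p_corr:.4f}")

creatinine_pos = [1.8, 2.4, 3.1, 2.0, 2.7]   # mg/dL, antibody-positive patients
creatinine_neg = [0.9, 1.1, 1.4, 1.0, 1.2]   # mg/dL, antibody-negative patients
u_stat, p_group = mannwhitneyu(creatinine_pos, creatinine_neg, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat}, p = {p_group:.4f}")
```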
abstract_id: PUBMED:28668093
Serum cytokine profile in pediatric Sweet's syndrome: a case report. Background: Sweet's syndrome is characterized by fever, leukocytosis, and tender erythematous papules or nodules. It is a rare condition, particularly in the pediatric population, and has recently been proposed to be an autoinflammatory disease that occurs due to innate immune system dysfunction, involving several cytokines, which causes abnormally increased inflammation. To the best of our knowledge, no report has documented the cytokine profile in a pediatric patient with Sweet's syndrome.
Case Presentation: A previously healthy 34-month-old Japanese girl was hospitalized because of remittent fever and pain in her right lower extremity with erythematous nodules. A skin biopsy of the eruption revealed dermal perivascular neutrophilic infiltration with no evidence of vasculitis, which led to the diagnosis of Sweet's syndrome. She was prescribed with orally administered prednisolone and a prompt response was observed; then, the prednisolone dose was tapered. During treatment she developed upper and lower urinary tract infections, after which her cutaneous symptoms failed to improve despite increasing the prednisolone dosage. To avoid long-term use of systemic corticosteroids, orally administered potassium iodide was initiated, but it was unsuccessful. However, orally administered colchicine along with prednisolone effectively ameliorated her symptoms, and prednisolone dosage was reduced again. We analyzed the circulating levels of interleukin-1β, interleukin-6, interleukin-18, neopterin, and soluble tumor necrosis factor receptors I and II, in order to clarify the pathogenesis of Sweet's syndrome. Of these cytokines, only interleukin-6 levels were elevated prior to orally administered prednisolone therapy. Following therapy, the elevated interleukin-6 levels gradually diminished to almost normal levels; interleukin-1β and interleukin-18 stayed within normal ranges throughout the treatment. Neopterin became marginally elevated after the start of treatment. Both soluble tumor necrosis factor receptor I and soluble tumor necrosis factor receptor II levels increased shortly after the onset of urinary tract infections.
Conclusions: This is the first case report of pediatric Sweet's syndrome in which serum cytokine levels were investigated. Future studies should gather more evidence to elucidate the pathophysiology of Sweet's syndrome.
abstract_id: PUBMED:33868250
New Insights Into Novel Therapeutic Targets in ANCA-Associated Vasculitis. Biologics targeting inflammation-related molecules in the immune system have been developed to treat rheumatoid arthritis (RA), and these RA treatments have provided revolutionary advances. Biologics may also be an effective treatment for anti-neutrophil cytoplasmic autoantibody (ANCA)-associated vasculitis, particularly in patients with resistance to standard treatments. Despite the accumulation of clinical experience and the increasing understanding of the pathogenesis of vasculitis, it is becoming more difficult to cure vasculitis. The treatment of vasculitis with biologics has been examined in clinical trials, and this has also enhanced our understanding of the pathogenesis of vasculitis. A humanized anti-interleukin-5 monoclonal antibody known as mepolizumab was recently demonstrated to provide clinical benefit in the management of eosinophilic granulomatosis with polyangiitis in refractory and relapsing disease, and additional new drugs for vasculitis are being tested in clinical trials, while others are in abeyance. This review presents the new findings regarding biologics in addition to the conventional immunosuppressive therapy for ANCA-associated vasculitis.
abstract_id: PUBMED:9197831
Rheumatoid neutrophilic dermatitis. Background: Rheumatoid neutrophilic dermatitis (RND) is a recently recognized, rare cutaneous manifestation of rheumatoid arthritis. It occurs in patients with severe rheumatoid arthritis and is typically asymptomatic. Rheumatoid neutrophilic dermatitis was originally described by Ackerman in 1978. Since that time, 8 patients with this disease have been described in the literature.
Observations: We report 2 cases of RND. Findings of skin biopsy specimens from both patients revealed characteristic signs of dermal leukocytosis and leukocytoclasia without vasculitis. The pathogenesis of the neutrophilic infiltrate is unclear. Processes that may play a role in the pathogenesis of RND include immune complex activations, cell adhesion and migration, and cytokine release.
Conclusions: Rheumatoid neutrophilic dermatitis falls into the spectrum of neutrophilic vascular reactions described by Jorizzo and Daniels. Although early reports suggest that prominent leukocytoclasia is not a feature of RND, our findings confirm the observations of Lowe et al that leukocytoclasia can be seen in RND and may be striking. It is important for dermatologists to be aware of this rare manifestation of rheumatoid arthritis.
abstract_id: PUBMED:19436669
The effect of angiotensin receptor blockers on C-reactive protein and other circulating inflammatory indices in man. Anti-inflammatory properties may contribute to the pharmacological effects of angiotensin II receptor blockers (ARBs), a leading therapeutic class in the management of hypertension and related cardiovascular and renal diseases. That possibility, supported by consistent evidence from in-vitro and animal studies showing pro-inflammatory properties of angiotensin II, has been evaluated clinically by measuring the effect of ARBs on C-reactive protein and other circulating indices of inflammation of potential clinical relevance (E-selectin, adhesion molecules, interleukin-6, tumor necrosis factor-alpha, monocyte chemoattractant protein-1), a body of evidence that this paper aims to review.
abstract_id: PUBMED:11318943
Neutrophil priming and apoptosis in anti-neutrophil cytoplasmic autoantibody-associated vasculitis. Background: Interactions between anti-neutrophil cytoplasmic autoantibody (ANCA) and primed neutrophils (PMNs) may be central to the pathogenesis of primary small vessel vasculitis. PMNs from patients are primed, expressing proteinase 3 (PR3) on the cell surface, which permits interaction with ANCA. In vitro ANCA activates primed PMN to degranulate and generate a respiratory burst. Resultant reactive oxygen species are important in triggering apoptosis, but the fate of PMN in ANCA-associated vasculitis is unknown. Failure to remove apoptotic PMN in a nonphlogistic manner may sustain the inflammatory response.
Methods: PMNs from patients or controls were isolated, and the basal production of superoxide was measured by the superoxide dismutase-inhibitable reduction of ferricytochrome C. ANCA antigen expression on apoptotic PMN was assessed at 0, 12, and 18 hours by flow cytometry using dual staining with FITC-conjugated annexin V and PE-conjugated anti-murine IgG against monoclonal ANCA. Apoptosis was also assessed by morphology. In further studies, apoptotic PMNs were opsonized with monoclonal anti-myeloperoxidase (MPO) or anti-proteinase-3 (PR3) or irrelevant isotype-matched IgG (N IgG) and phagocytosis by macrophages was measured using interaction assays. Cytokines interleukin-8 (IL-8) and interleukin-1 were measured by enzyme-linked immunosorbent assay (ELISA).
Results: Proteinase-3 expression (active 63.04 +/- 5.6% of total number of cells, remission 51.47 +/- 7.9% of total number of cells, control 17.7 +/- 4.7% of total number of cells, P < 0.05) and basal superoxide production (active 6.9 +/- 0.8 nmol/L x 10(6) cells, remission 5.15 +/- 0.4 nmol/L/10(6) cells, control 3.63 +/- 0.3 nmol/L/10(6) cells, P < 0.001) were significantly greater with freshly isolated PMN from patients than controls. PR3 expression and superoxide generation were positively correlated. PMN from patients with active disease became apoptotic at a greater rate than those of controls (at 18 hours, patients 72.3 +/- 3.9% apoptosis, controls 53.2 +/- 2.7% apoptosis, P < 0.05). PR3 and MPO expression were significantly greater on PMN isolated from patients at 12 and 18 hours. Opsonization of apoptotic PMN with ANCA significantly enhanced recognition and phagocytosis by scavenger macrophages (anti-MPO 88.95 +/- 6.27, anti-PR3 93.98 +/- 4.90, N IgG 44.89 +/- 3.44, P < 0.01) with increased secretion of IL-1 (anti-PR3 34.73 +/- 6.8 pg/mL, anti-MPO 42.01 +/- 12.3 pg/mL, N IgG 8.04 +/- 6.3 pg/mL, P < 0.05) and IL-8 (anti-PR3 8.97 +/- 0.93 ng/mL, anti-MPO 8.45 +/- 1.46 ng/mL, N IgG 0.96 +/- 0.15 ng/mL, P < 0.01).
Conclusion: In vivo circulating PMNs are primed as assessed by PR3 expression and basal superoxide production, thereby enhancing their inflammatory potential. These PMNs undergo apoptosis more readily, at which times they express PR3 and MPO on their surface. These antigens may then provide targets for ANCA. Opsonization of apoptotic PMN will enhance clearance by macrophages but will also trigger the release of pro-inflammatory cytokines that may contribute to chronic inflammation.
abstract_id: PUBMED:22152684
Decreased CXCR1 and CXCR2 expression on neutrophils in anti-neutrophil cytoplasmic autoantibody-associated vasculitides potentially increases neutrophil adhesion and impairs migration. Introduction: In anti-neutrophil cytoplasmic autoantibody (ANCA)-associated vasculitides (AAV), persistent inflammation within the vessel wall suggests perturbed neutrophil trafficking leading to accumulation of activated neutrophils in the microvascular compartment. CXCR1 and CXCR2, being major chemokine receptors on neutrophils, are largely responsible for neutrophil recruitment. We speculate that down-regulated expression of CXCR1/2 retains neutrophils within the vessel wall and, consequently, leads to vessel damage.
Methods: Membrane expression of CXCR1/2 on neutrophils was assessed by flow cytometry. Serum levels of interleukin-8 (IL-8), tumor necrosis factor alpha (TNF-α), angiopoietin 1 and angiopoietin 2 from quiescent and active AAV patients and healthy controls (HC) were quantified by ELISA. Adhesion and transendothelial migration of isolated neutrophils were analyzed using adhesion assays and Transwell systems, respectively.
Results: Expression of CXCR1 and CXCR2 on neutrophils was significantly decreased in AAV patients compared to HC. Levels of IL-8, which, like TNF-α, dose-dependently down-regulated CXCR1 and CXCR2 expression on neutrophils in vitro, were significantly increased in the serum of patients with active AAV and correlated negatively with CXCR1/CXCR2 expression on neutrophils, even in quiescent patients. Blocking CXCR1 and CXCR2 with repertaxin increased neutrophil adhesion and inhibited migration through a glomerular endothelial cell layer.
Conclusions: Expression of CXCR1 and CXCR2 is decreased in AAV, potentially induced by circulating proinflammatory cytokines such as IL-8. Down-regulation of these chemokine receptors could increase neutrophil adhesion and impair its migration through the glomerular endothelium, contributing to neutrophil accumulation and, in concert with ANCA, persistent inflammation within the vessel wall.
abstract_id: PUBMED:34201078
Histologic Patterns and Clues to Autoinflammatory Diseases in Children: What a Cutaneous Biopsy Can Tell Us. Autoinflammation is defined by aberrant, antigen-independent activation of the innate immune signaling pathways. This leads to increased, pro-inflammatory cytokine expression and subsequent inflammation. In contrast, autoimmune and allergic diseases are antigen-directed immune responses from activation of the adaptive immune system. The innate and adaptive immune signaling pathways are closely interconnected. The group of 'complex multigenic diseases' are a result of mutual dysregulation of both the autoinflammatory and autoimmune physiologic components. In contrast, monogenic autoinflammatory syndromes (MAIS) result from single mutations and are exclusively autoinflammatory in their pathogenesis. Studying the clinical and histopathological findings for the various MAIS explains the phenotypical correlates of their specific mutations. This review aims to group the histopathologic clues for autoinflammation into three recognizable patterns. The presence of these histologic patterns in a pediatric patient with recurrent fevers and systemic inflammation should raise suspicion of an autoinflammatory component in MAIS, or, more frequently, in a complex multigenic disease. The three major histopathological patterns seen in autoinflammation are as follows: (i) the 'neutrophilic' pattern, seen in urticarial neutrophilic dermatosis, pustular psoriasis, aseptic neutrophilic folliculitis, and Sweet's syndrome; (ii) the 'vasculitic' pattern seen in small vessel-vasculitis (including hypersensitivity/leukocytoclastic vasculitis, thrombosing microangiopathy and lymphocytic vasculitis), and intermediate-sized vessel vasculitis, mimicking polyarteritis nodosa; and (iii) the 'granulomatous' pattern. Beyond these three patterns, there are additional histopathologic clues, which are detailed below. It is important for a dermatopathologist to recognize the patterns of autoinflammation, so that a diagnosis of MAIS or complex multigenic diseases may be obtained. Finally, careful histopathologic analyses could contribute to a better understanding of the various clinical manifestations of autoinflammation.
Answer: The circulating cytokine profile in anti-neutrophilic cytoplasmatic autoantibody-associated vasculitis (AASV) may have predictive value for patient outcomes. In AASV, cytokines such as interleukin (IL)-6, IL-8, and IL-10 are significantly elevated in plasma compared to healthy controls, even during remission phases of the disease. Notably, patients in a stable phase who subsequently developed adverse events had higher IL-8 values, suggesting that raised IL-8 is associated with a poor prognosis. Conversely, patients who relapsed within 3 months had lower IL-10 values and higher IL-6 levels, indicating that lower levels of IL-10 and higher levels of IL-6 may herald a greater risk of relapse. These findings suggest that patients with systemic vasculitis in clinical remission still have persistent disease activity, which is kept under control by inhibitory cytokines (PUBMED:15545059).
Additionally, the presence of a novel autoantibody against moesin in the serum of patients with MPO-ANCA-associated vasculitis (MPO-AAV) has been associated with the production of inflammatory cytokines/chemokines targeting neutrophils, suggesting that this autoantibody could be a novel biomarker for developing vasculitis via neutrophil and endothelial cell activation (PUBMED:24319012).
In the context of ANCA-associated vasculitis (AAV), decreased expression of CXCR1 and CXCR2 on neutrophils has been observed, potentially induced by circulating proinflammatory cytokines such as IL-8. This down-regulation could increase neutrophil adhesion and impair migration through the glomerular endothelium, contributing to neutrophil accumulation and persistent inflammation within the vessel wall (PUBMED:22152684).
Overall, these studies suggest that the circulating cytokine profile, along with the presence of specific autoantibodies and chemokine receptor expression on neutrophils, could provide valuable insights into the prognosis and underlying pathogenic mechanisms of AASV, potentially guiding more targeted therapeutic approaches. |
Instruction: 'Eruptive syringoma': a misnomer for a reactive eccrine gland ductal proliferation?
Abstracts:
abstract_id: PUBMED:12641781
'Eruptive syringoma': a misnomer for a reactive eccrine gland ductal proliferation? Background: Syringomas have traditionally been categorized as benign neoplasms of the eccrine gland ductal epithelium. However, the variety of clinical presentations reported in the literature and some cases recently observed by the authors cast doubt upon the neoplastic nature of eruptive syringomas. Our goal is to challenge the traditional notion that eruptive syringomas are neoplastic lesions.
Results: We observed two patients who presented with an eczematous process, which resolved leaving residual lesions. Biopsies of the late lesions showed features of eccrine syringoma. Yet a biopsy obtained from an incipient lesion in one of the cases showed a lymphocytic inflammatory reaction of the superficial portion of the eccrine duct resulting in tortuous hyperplastic changes.
Conclusion: Based on our observations, some of the so-called 'eruptive syringoma' may represent a hyperplastic response of the eccrine duct to an inflammatory reaction rather than a true adnexal neoplasm. We proposed the term 'syringomatous dermatitis' for such cases.
abstract_id: PUBMED:19522848
Autoimmune acrosyringitis with ductal cysts: reclassification of a case of eruptive syringoma. Syringomas are architecturally complex tumors composed of small, cystically dilated segments of dermal eccrine duct. Syringomas typically form isolated flesh-colored periorbital papules; however, in a peculiar condition termed eruptive syringoma, scores develop simultaneously in near confluence over a large surface area. While traditionally regarded as a neoplasm, more recent observations indicate eruptive syringoma is a reactive proliferation secondary to autoimmune disruption of the acrosyringium. We present the case of a 44-year-old woman with eruptive syringoma of the labia majora and prominent lymphocytic inflammation in the acrosyringium. Immunohistochemical stains confirm that the infiltrate is composed of CD4+ and CD8+ T cells, without significant CD20+ B cells or CD138+ plasma cells. Sequential sections of the syringoma reveal a complex 3-dimensional architecture with functionally isolated cysts, not connected to adjacent cysts or ducts by a discernible epithelium. These findings support the conclusion that eruptive syringoma is a tortuous proliferation of dermal eccrine ducts and fibrous stroma secondary to autoimmune destruction of the acrosyringium. Conceptually, the disorganized expansion of an eccrine duct syringoma may be analogous to a peripheral nerve traumatic neuroma.
abstract_id: PUBMED:3351061
Proliferation of eccrine sweat ducts associated with alopecia areata. Proliferation of sweat ducts has been described as a reactive process in a variety of benign and malignant neoplasms and inflammatory conditions in the skin, including scarring alopecia. However, to our knowledge this phenomenon has not been observed in non-scarring alopecia. The following case documents such a proliferation arising in an alopecia consistent with alopecia areata. An 83-year-old female developed progressive, fairly well circumscribed patches of alopecia over a 2-3 year period. Unequivocal scarring was not present. Histopathological examination revealed non-scarring alopecia with miniaturized and telogen follicles and a proliferation of eccrine ductal structures in the reticular dermis. These ductal structures varied in size and degree of cystic dilatation and resembled a primary eccrine neoplasm, such as syringoma. Only minimal focal fibrosis was observed in association with the eccrine proliferation. In summary, this case indicates that eccrine sweat duct proliferation may occur in non-scarring alopecia and must be differentiated from a primary eccrine neoplasm.
abstract_id: PUBMED:20049275
Generalized eruptive syringomas. Generalized eruptive syringoma is a rare clinical presentation of a benign adnexal tumor that derives from the intraepidermal portion of the eccrine sweat ducts. It presents as successive crops of small flesh-colored papules on the anterior body surfaces. It generally occurs in the peripubertal period. Treatment of this benign condition is cosmetic only. A case of a 28-year-old female with an eight-year history of eruptive syringoma is presented.
abstract_id: PUBMED:30693166
Disseminated Syringomas of the Upper Extremities in a Young Woman. Syringomas are benign, eccrine sweat gland tumors frequently found on the eyelids and neck in post-pubescent women and may present in healthy individuals or be associated with various medical comorbidities. We present a case of an otherwise healthy 19-year-old female with an abrupt onset of disseminated syringomas on the bilateral forearms and dorsal hands. Eruptive acral syringomas have not been previously reported in adolescents, and this diagnosis should be considered in patients presenting with a papular eruption on the hands and forearms.
abstract_id: PUBMED:18647305
New concepts on the histogenesis of eccrine neoplasia from keratin expression in the normal eccrine gland, syringoma and poroma. Background: Peripheral and luminal layers of eccrine sweat gland ducts are self-renewing structures. Proliferation is restricted to the lowermost luminal layer, but randomly scattered in the peripheral layer. Each layer exhibits differential expression of keratins K5/K14 and K6/K16. Keratin K1 occurs only in peripheral cells and the novel keratin K77 is specific for luminal cells.
Objectives: To investigate the expression of luminal (K77), peripheral (K1) and further discriminatory keratins in two eccrine sweat gland tumours: syringoma, thought to show differentiation towards luminal cells of intraepidermal sweat ducts, and eccrine poroma, considered to arise from poroid cells, i.e. peripheral duct cells and keratinocytes of the lower acrosyringium/sweat duct ridge differentiating towards cells of intradermal/intraepidermal duct segments.
Methods: Paraffin-embedded sections were examined by immunohistochemistry using several keratin, smooth muscle actin and Ki-67 antibodies.
Results: We confirmed the ductal nature of syringomas. Despite drastic morphological alterations in both layers, their keratin patterns remained almost undisturbed compared with normal ducts. In eccrine poroma epidermal keratins K5/K14 were ubiquitously expressed in all poroid cells. Cell islands deviating morphologically from poroid cells contained epidermal keratins K1/K10. K77 expression was limited to luminal cells of intact duct structures within the tumours.
Conclusions: Syringomas are benign tumours of luminal cells of the lowermost intraglandular sweat duct. Poroid precursor cells of poromas do not comprise peripheral duct cells nor do poromas differentiate towards peripheral or luminal duct cells. Instead, poroid cells consist only of keratinocytes of the lowermost acrosyringium and the sweat duct ridge and poromas tend to differentiate towards the cells of the upper acrosyringium.
abstract_id: PUBMED:2547344
Eruptive syringoma. Eruptive syringoma is a rare variant of syringoma that appears on anterior surfaces of the body, including the neck, chest, and arms. Textbooks state that this eccrine-derived tumor arises at puberty. We describe four cases of eruptive syringoma that began in childhood. We review the literature on this entity and suggest that the disorder typically presents between the ages of 4 and 10 years. Eruptive syringoma is a benign tumor that should be considered among the papular dermatoses of childhood.
abstract_id: PUBMED:9162160
Eccrine syringofibroadenoma. Report of a case. Eccrine syringofibroadenoma is an uncommon benign adnexal tumor (about thirty reported cases). Its clinical presentation is variable and nonspecific, and the diagnosis is never evident before histological examination. Histologically, it is an epithelial proliferation organized in thin cords appended to the epidermis, with cuticular differentiation. Eccrine poroma and the fibroepithelial tumor of Pinkus are the main histological differential diagnoses. The authors report two new cases, in women aged 35 and 69 years, whose lesions had been clinically considered to be histiocytofibroma and molluscum pendulum.
abstract_id: PUBMED:36781459
Syringomatous dermatitis: a myth or an existing entity? Syringoma is a rare disease that, in its classical variant, occurs mainly on the lower eyelids. In previously published research there is increasing evidence that eruptive syringomas must be discussed as an inflammatory dermal reaction pattern, and a change of name from eruptive syringoma to reactive syringomatous proliferation has been proposed for appropriate cases. We conducted a retrospective study of archived histopathological material to examine the findings in cases of eruptive syringoma and to correlate them with the hypothesis that eruptive syringomas are not true adnexal neoplasms arising "de novo" but a hyperplastic response of the acrosyringium to an inflammatory process. According to the obtained data and their correlation with the literature, we conclude that there is apparent diversity among eruptive syringomas: some cases can be classified as neoplastic lesions arising "de novo," and others as reactive syringomatous proliferations due to a preceding cutaneous inflammatory process. Thus, the term "eruptive syringoma" may be changed to "reactive syringomatous proliferation" in appropriate cases. Clinical variants of eruptive syringoma with evidence of an underlying inflammatory process may respond better to the types of treatment used for inflammatory disorders. Therefore, larger clinicopathological correlative studies should be conducted, and the classification and terminology should be revised accordingly.
abstract_id: PUBMED:3890761
Familial syringoma. Case history and application of monoclonal anti-eccrine gland antibodies. We studied a family with dominantly inherited eruptive syringoma. The father and the older daughter had chest and neck as well as eyelid lesions. The chest lesions seemed to develop continuously by new formation of eccrine germlike budding from the epidermis. Monoclonal antikeratin antibody EKH4, which predominantly labels the basal layers of the epidermis, stained positively in the cordlike epithelial structure and peripheral cells of the walls of cystic lesions. Staining with EKH6, which recognizes normal eccrine secretory and ductal structures--particularly luminal borders--was positive along the luminal borders of the cystic lesions; staining with EKH5, which labels eccrine secretory portion, was entirely negative in the lesions. These findings further supported the theory that syringoma of the eyelids and eruptive syringoma, which mainly involves the anterior neck and upper chest, are essentially the same tumor and that these appendage tumors are of eccrine ductal differentiation.
Answer: The term 'eruptive syringoma' has traditionally been used to describe a benign neoplasm of the eccrine gland ductal epithelium, characterized by the appearance of multiple small, flesh-colored papules on the body, often in the peripubertal period (PUBMED:20049275). However, recent observations and studies have challenged this notion, suggesting that eruptive syringomas may not be true neoplasms but rather a reactive hyperplastic response of the eccrine duct to an inflammatory process (PUBMED:12641781; PUBMED:36781459).
For instance, one study observed patients presenting with an eczematous process that resolved, leaving behind residual lesions that, upon biopsy, showed features of eccrine syringoma. However, a biopsy from an incipient lesion in one case revealed a lymphocytic inflammatory reaction of the superficial portion of the eccrine duct, leading to tortuous hyperplastic changes (PUBMED:12641781). This suggests that some cases of 'eruptive syringoma' may be more accurately described as 'syringomatous dermatitis,' a term proposed for cases representing a hyperplastic response rather than a true adnexal neoplasm (PUBMED:12641781).
Another study presented a case of eruptive syringoma with prominent lymphocytic inflammation in the acrosyringium, supporting the idea that eruptive syringoma is a reactive proliferation secondary to autoimmune disruption of the acrosyringium (PUBMED:19522848). The study also compared the disorganized expansion of an eccrine duct in syringoma to a peripheral nerve traumatic neuroma, further emphasizing its reactive nature (PUBMED:19522848).
Additionally, a case of non-scarring alopecia areata was documented where a proliferation of eccrine ductal structures occurred, resembling a primary eccrine neoplasm such as syringoma, indicating that eccrine sweat duct proliferation may also occur as a reactive process in non-scarring alopecia (PUBMED:3351061).
In summary, the term 'eruptive syringoma' may indeed be a misnomer for some cases, as emerging evidence points towards a reactive eccrine gland ductal proliferation rather than a true neoplastic process. This has led to the suggestion that the term 'eruptive syringoma' be reconsidered or modified in appropriate cases to better reflect the reactive nature of the condition. The term "reactive syringomatous proliferation" has been proposed to describe cases where the condition arises as a response to an inflammatory process (PUBMED:36781459). This reclassification is supported by the diversity observed in eruptive syringomas, where some cases may be neoplastic lesions arising "de novo," while others are reactive proliferations due to a preceding cutaneous inflammatory process (PUBMED:36781459).
Furthermore, the application of monoclonal anti-eccrine gland antibodies in a study of a family with dominantly inherited eruptive syringoma supported the eccrine ductal differentiation of these tumors (PUBMED:3890761). The findings from this study align with the concept that syringoma of the eyelids and eruptive syringoma, which mainly involves the anterior neck and upper chest, are essentially the same tumor with eccrine ductal differentiation (PUBMED:3890761).
In light of these findings, it is evident that the traditional view of eruptive syringoma as a benign neoplasm may not fully capture the complex pathogenesis of the condition. The term 'eruptive syringoma' may indeed be a misnomer for a subset of cases that represent a reactive eccrine gland ductal proliferation, and a reevaluation of the terminology and classification may be warranted to better guide clinical diagnosis and treatment strategies. |
Instruction: Does Barrett's esophagus respond to chemoradiation therapy for adenocarcinoma of the esophagus?
Abstracts:
abstract_id: PUBMED:20003971
Does Barrett's esophagus respond to chemoradiation therapy for adenocarcinoma of the esophagus? Background: Adenocarcinoma of the esophagus is frequently associated with Barrett's esophagus (BE). The response of esophageal adenocarcinoma to chemoradiation therapy is well described; however, the effect of chemoradiation on tumor-associated BE has not been specifically reported.
Objective: To determine the response of tumor-associated BE to chemoradiation therapy.
Design: Retrospective cohort study.
Setting: A single National Cancer Institute Comprehensive Cancer Care Center experience.
Patients: The study cohort consisted of 43 patients with stage I to IVA esophageal adenocarcinoma associated with BE who received either neoadjuvant or definitive chemoradiation therapy and underwent either esophagectomy or surveillance at our institution.
Main Outcome Measurement: The presence and extent of BE after chemoradiation therapy of esophageal adenocarcinoma associated with endoscopically documented pretreatment BE.
Results: BE persisted after chemoradiation therapy in 93% (40/43) of cases (95% CI, 83%-99%). Twenty-seven patients received neoadjuvant chemoradiation therapy before esophagectomy. Persistent BE was detected in all 27 surgical specimens (100%). In 59% (16/27) of the cases, there was complete pathologic tumor response. Sixteen patients received definitive chemoradiation therapy. Persistent pretreatment BE was identified in 88% (14/16) by surveillance endoscopy (95% CI, 60%-98%). The mean length of BE before and after chemoradiation was 6.6 cm and 5.8 cm, respectively (P = .38).
Limitations: Retrospective design, small sample size, and single-site data collection.
Conclusions: Chemoradiation therapy of esophageal adenocarcinoma does not eliminate tumor-associated BE, nor does it affect the length of the BE segment.
abstract_id: PUBMED:21549371
Cryoablation of persistent Barrett's epithelium after definitive chemoradiation therapy for esophageal adenocarcinoma. Background: Dysplastic Barrett's epithelium (BE) persists after chemoradiation therapy for esophageal adenocarcinoma (EAC) arising in Barrett's esophagus. This phenomenon may present a significant risk for development of metachronous adenocarcinoma.
Objective: To analyze the safety and efficacy of endoscopic cryoablation therapy for persistent dysplastic BE in patients with complete clinical response after definitive chemoradiation therapy for EAC.
Design: Retrospective cohort study.
Setting: Single National Cancer Institute Comprehensive Cancer Center experience.
Patients: Radiation and endoscopic oncology treatment records were reviewed between January 2004 and September 2009. Fourteen patients with EAC who had been treated with definitive chemoradiation therapy followed by cryoablation were identified.
Intervention: Cryoablation therapy.
Main Outcome Measurements: Reduction in Prague Classification and dysplasia status following cryoablation therapy. Complications reported at 24 hour after the procedure telephone survey and at subsequent endoscopy.
Results: After complete clinical response of EAC to chemoradiation therapy, the median length of persistent BE was Prague classification C1M4 (C = circumferential extent, M = maximal extent). Cryoablation reduced the median length of persistent BE to Prague classification C0M1 (P = .009 with respect to circumferential extent and P = .004 with respect to maximal extent of BE). All 14 patients had dysplastic BE. Cryoablation resulted in histological downgrading in all 14 patients. Among patients with high-grade dysplasia, 20% (2/10) were reduced to low-grade dysplasia, 60% (6/10) to BE with no dysplasia, and 20% (2/10) to no BE. Among patients with low-grade dysplasia, 75% (3/4) were reduced to BE with no dysplasia, and 25% (1/4) to no BE. The median number of cryoablation treatments administered to the 14 patients evaluated was 1 (mean 1.5, range 1-5). Eighty-six percent (12/14) of patients reported no complaints during the 24 hours after cryoablation. No occurrences of perforation and no esophageal strictures were reported at surveillance endoscopy.
Limitations: Single-center, retrospective design involving a small number of patients.
Conclusion: Our observations suggest that cryoablation therapy is safe and effective for the treatment of persistent BE after definitive chemoradiation.
abstract_id: PUBMED:12934159
Current status of neoadjuvant therapy for adenocarcinoma of the distal esophagus. Prospective studies dealing with preoperative therapy in adenocarcinoma of the esophagus alone are rare. The interpretation of the predominantly phase II trials and the few phase III trials is complicated, as most studies include adenocarcinoma of the esophagus (i.e., Barrett's carcinoma), adenocarcinoma of the esophagogastric junction (including cardia carcinoma and subcardia carcinoma), or squamous cell carcinoma. Preoperative chemotherapy, generally well tolerated, cannot decrease the incidence of local failure beyond the level achieved with surgery alone, but it might delay systemic relapse. Preoperative radiotherapy can enhance local control, but it fails to improve overall survival. Neoadjuvant chemoradiation was demonstrated in only one randomized trial to have a survival benefit, but survival in the surgery-alone group was unusually low. Generally, survival was improved in patients responding to neoadjuvant treatment. However, preoperative chemoradiation was often accompanied by a remarkable increase in postoperative morbidity and mortality. Nonresponding patients have, in this respect, a worse prognosis after resection than responders. Predicting which patients will respond to neoadjuvant therapy, as well as identifying non-responders early, is of utmost clinical importance. Today, there is no absolute evidence that neoadjuvant treatment for patients with potentially resectable Barrett's cancer prolongs survival. In patients with locally advanced, presumably not completely resectable adenocarcinoma of the esophagus, preoperative treatment appears to increase the chance of a curative resection and to enhance survival in responding patients. As a consequence, neoadjuvant treatment of adenocarcinoma of the esophagus is currently not the standard treatment and should be performed only within controlled clinical trials.
abstract_id: PUBMED:28540761
Current and future treatment options for esophageal cancer in the elderly. Introduction: Esophageal cancer is the eighth most common cancer globally and has the sixth worst prognosis because of its aggressiveness and poor survival. Data regarding cancer treatment in older patients is limited because the elderly have been under-represented in clinical trials. Therefore, we reviewed the existing literature regarding treatment results for elderly patients (70+ years). Areas covered: We used pubmed to analyze the actual literature according to elderly esophageal cancer patients with subheading of incidence, esophagectomy, chemoradiation or chemotherapy. The main points of interest were treatment options for patients with Barrett's esophagus or early carcinoma, advanced tumor stages, and inoperable cancer. Expert opinion: The incidence of esophageal cancer has been increasing over the past thirty years, with a rapid increase of esophageal adenocarcinoma in Western industrialized nations. Patients aged over 60 years have been particularly affected. In this review, we have shown that elderly patients with esophageal cancer have various alternatives for adequate treatment. Clinical evaluation of comorbidity is necessary to make treatment decisions. Therapeutic options for early carcinomas are endoscopic or surgical resection. For elderly patients with advanced carcinomas, preoperative chemoradiation or chemotherapy should be discussed.
abstract_id: PUBMED:19021684
Adenocarcinoma of the lower esophagus with Barrett's esophagus or without Barrett's esophagus: differences in patients' survival after preoperative chemoradiation. It remains unclear whether overall survival (OS) after preoperative chemoradiation differs between patients with localized esophageal adenocarcinoma (LEA) with Barrett's esophagus (BE) (Barrett's-positive) and those with LEA without BE (Barrett's-negative). Based on the published differences in the molecular biology of the two entities, we hypothesized that the two groups would have a different clinical biology (and OS). In this retrospective analysis, all patients with LEA had surgery following preoperative chemoradiation. Apart from age, gender, baseline clinical stage, location, class of cytotoxics, post-therapy stage, and OS, LEAs were divided into Barrett's-positive and Barrett's-negative groups based on histologic documentation of BE. The Kaplan-Meier and Cox regression analytic methods were used. We analyzed 362 patients with LEA (137 Barrett's-positive and 225 Barrett's-negative). A higher proportion of Barrett's-positive patients had (EUS)T2 cancers (27%) than those with Barrett's-negative cancer (17%). More Barrett's-negative LEAs involved the gastroesophageal junction than Barrett's-positive ones (P = 0.001). OS was significantly shorter for Barrett's-positive patients than for Barrett's-negative patients (32 months vs. 51 months; P = 0.04). In a multivariate analysis for OS, Barrett's-positive LEA (P = 0.006), older age (P = 0.016), baseline positive nodes (P = 0.005), more than 2 positive (yp)N (P = 0.0001), higher (yp)T (P = 0.003), and the use of a taxane (P = 0.04) were the independent prognosticators. Our data demonstrate that the clinical biology (reflected in OS) is less favorable for patients with Barrett's-positive LEA than for patients with Barrett's-negative LEA. These intriguing findings need confirmation, followed by in-depth molecular study to explain the differences.
abstract_id: PUBMED:15854962
Differential response to preoperative chemoradiation and surgery in esophageal adenocarcinomas based on presence of Barrett's esophagus and symptomatic gastroesophageal reflux. Background: Barrett's esophagus and gastroesophageal reflux disease (GERD) are recognized to predispose to esophageal adenocarcinoma. Abdel-Latif and colleagues recently suggested that esophageal adenocarcinoma patients with GERD might be resistant to multimodality treatment. In this study, we investigated potential differences in clinical outcomes in esophageal adenocarcinoma patients based on the presence of identifiable Barrett's mucosa and/or history of symptomatic GERD.
Methods: Eighty-four patients with resectable esophageal adenocarcinoma, who completed the planned preoperative chemoradiation and underwent a potentially curative esophageal resection were retrospectively evaluated. Postoperative survival was compared between patients with or without underlying Barrett's esophagus and history of symptomatic GERD. Patients with pathologic complete response (path CR) and those with partial or no response (path PR) were compared to determine if presence of Barrett's esophagus and history of symptomatic GERD influence the path CR rates.
Results: We found significantly lower postoperative survival in patients with Barrett's associated adenocarcinoma (vs adenocarcinoma arising de novo, p = 0.031) and patients with symptomatic GERD (vs patients without symptomatic GERD, p = 0.019). Furthermore, the subset of patients with path PR (vs path CR) after chemoradiation have a significantly higher proportion of patients with Barrett's esophagus (HR = 4.38, confidence interval [CI] = 1.39 to 13.83, p = 0.012) and patients with GERD (HR = 2.71, CI = 1.13 to 6.50, p = 0.026).
Conclusions: Patients with esophageal adenocarcinoma may have differences in response to preoperative chemoradiation based on the presence of Barrett's esophagus and history of symptomatic GERD.
abstract_id: PUBMED:11824282
Multimodality therapy concepts in esophageal carcinoma. The role of preoperative chemotherapy for esophageal cancer remains controversial. Only one of the recently published randomized controlled trials in potentially resectable esophageal cancer has shown an improvement in survival with preoperative chemotherapy compared to surgery alone. Nevertheless, there has been a consistent observation that survival is significantly prolonged in patients who respond to preoperative therapy. Therefore, a diagnostic test that allows prediction of response is considered crucial for the future use of preoperative chemotherapy in patients with esophageal cancer. Molecular markers for response prediction and reliable non-invasive techniques such as FDG-PET are not yet established. At the moment, therefore, responders should undergo esophagectomy for definitive curative treatment, whereas non-responders may undergo individualized salvage therapy.
abstract_id: PUBMED:11422302
Localization of small esophageal cancers for radiation planning using endoscopic contrast injection: report on a series of eight cases. Recently, Barrett's esophagus and early adenocarcinomas have been detected increasingly frequently in routine follow-up of patients with gastroesophageal reflux. Although surgery is the treatment of choice, some patients are medically unfit for esophagectomy and, in this case, the only alternative curative therapy is radical chemoradiation therapy. In addition, some patients who present with symptoms have small tumors that cannot be localized accurately using routine imaging techniques. This report describes a series of eight patients with small esophageal cancers in whom the tumors were successfully localized following endoscopic injection of contrast, and treated with chemoradiation therapy. The treatment was successful in seven patients. This method of tumor localization demonstrated that conventional techniques are mostly unreliable when applied to very early cancers.
abstract_id: PUBMED:19902287
Neoadjuvant therapy in the upper gastro-intestinal tract. Modern strategies for Barrett's cancer. While primary surgical resection with systematic lymphadenectomy remains the treatment of choice for locoregional Barrett's cancer, neoadjuvant chemotherapy is an increasingly accepted treatment modality for patients with locally advanced tumors and patients with extensive lymphatic spread. In contrast to neoadjuvant radiochemotherapy, preoperative chemotherapy alone does not seem to increase peri-operative complications and mortality. Responders to pre-operative treatment clearly have a survival advantage compared with those who do not respond. The use of positron emission tomography to measure changes in the glucose metabolism of the primary tumor can predict response early after initiation of neoadjuvant chemotherapy and thus help to select patients who will or will not benefit from this approach. The best treatment strategy for non-responders to neoadjuvant therapy remains to be defined.
abstract_id: PUBMED:18477853
Epidermal growth factor receptor-directed therapy in esophageal cancer. Esophageal adenocarcinoma (EAC) is one of the fastest growing malignancies in the US. The long-term survival of patients with this cancer remains poor; only 25% of patients undergoing surgical excision are alive after 5 years. Multimodal programs that incorporate radiotherapy, chemotherapy and surgery for localized tumors may result in a modest survival advantage. However, significant strides in this disease can result from the inclusion of targeted therapies. The epidermal growth factor receptor (EGFR) family represents one such target and is receiving increasing attention due to the advent of specific inhibitors. Studies conducted by us and others have shown that the overexpression of EGFR family signaling intermediates is common in Barrett's esophagus and EAC. In the latter case, EGFR expression may have prognostic significance. EGFR inhibitors, including oral tyrosine kinase inhibitors and monoclonal antibodies, result in a synergistic antitumor effect with chemotherapeutic agents or with radiotherapy. Therefore, several ongoing studies include EGFR-directed therapy either alone or in combination with chemoradiotherapy for this disease. Our study of gefitinib, oxaliplatin and radiotherapy suggested that gefitinib can be safely incorporated into an oxaliplatin-based chemoradiation program for esophageal cancer, although the clinical activity of this combination is modest. Herein, we review the current literature on this subject.
Answer: Barrett's esophagus (BE) does not appear to respond effectively to chemoradiation therapy for esophageal adenocarcinoma. A retrospective cohort study found that BE persisted after chemoradiation therapy in 93% (40/43) of cases; among the 27 patients who underwent esophagectomy after neoadjuvant chemoradiation, persistent BE was detected in all 27 surgical specimens, even though 59% (16/27) had a complete pathologic tumor response. This indicates that chemoradiation therapy of esophageal adenocarcinoma does not eliminate tumor-associated BE, nor does it affect the length of the BE segment (PUBMED:20003971).
However, cryoablation therapy has been suggested as a safe and effective treatment for persistent BE after definitive chemoradiation. In a study involving 14 patients with EAC who had been treated with definitive chemoradiation therapy followed by cryoablation, the median length of persistent BE was significantly reduced, and histological downgrading was observed in all patients. This suggests that while chemoradiation may not be effective against BE, cryoablation could be a viable post-chemoradiation therapy for persistent dysplastic BE (PUBMED:21549371). |
Instruction: Does academic assessment system type affect levels of academic stress in medical students?
Abstracts:
abstract_id: PUBMED:26112353
Does academic assessment system type affect levels of academic stress in medical students? A cross-sectional study from Pakistan. Introduction: Stress induced by academic pressures is on the rise among medical students in Pakistan and other parts of the world. Our study examined the relationship between two different systems employed to assess academic performance and the levels of stress among students at two different medical schools in Karachi, Pakistan.
Methods: A sample consisting of 387 medical students enrolled in pre-clinical years was taken from two universities, one employing the semester examination system with grade point average (GPA) scores (a tiered system) and the other employing an annual examination system with only pass/fail grading. A pre-designed, self-administered questionnaire was distributed. Test anxiety levels were assessed by The Westside Test Anxiety Scale (WTAS). Overall stress was evaluated using the Perceived Stress Scale (PSS).
Results: There were 82 males and 301 females while four did not respond to the gender question. The mean age of the entire cohort was 19.7 ± 1.0 years. A total of 98 participants were from the pass/fail assessment system while 289 were from the GPA system. There was a higher proportion of females in the GPA system (85% vs. 59%; p < 0.01). Students in the pass/fail assessment system had a lower score on the WTAS (2.4 ± 0.8 vs. 2.8 ± 0.7; p = 0.01) and the PSS (17.0 ± 6.7 vs. 20.3 ± 6.8; p < 0.01), indicating lower levels of test anxiety and overall stress than in students enrolled in the GPA assessment system. More students in the pass/fail system were satisfied with their performance than those in the GPA system.
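For readers who want to see how such a group comparison can be reproduced from the summary statistics alone, the sketch below plugs the reported PSS means, standard deviations, and group sizes into a two-sample Welch's t-test with SciPy. The abstract does not state which test the authors used, so this is an illustration rather than a re-analysis.

```python
# Illustrative two-sample comparison from summary statistics only (Welch's t-test);
# the abstract does not specify the authors' exact test, so this is an assumption.
from scipy import stats

# Perceived Stress Scale (PSS): pass/fail cohort vs. GPA cohort, as reported above
pass_fail = {"mean": 17.0, "sd": 6.7, "n": 98}
gpa = {"mean": 20.3, "sd": 6.8, "n": 289}

t_stat, p_value = stats.ttest_ind_from_stats(
    mean1=pass_fail["mean"], std1=pass_fail["sd"], nobs1=pass_fail["n"],
    mean2=gpa["mean"], std2=gpa["sd"], nobs2=gpa["n"],
    equal_var=False,  # Welch's correction for unequal variances
)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p falls well below 0.01, consistent with the abstract
```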
Conclusion: Based on the present study, we suggest that governing bodies revise and employ a uniform assessment system for all medical colleges, to improve student academic performance while reducing stress levels. Our results indicate that the pass/fail assessment system accomplishes these objectives.
abstract_id: PUBMED:25812609
The effects of attribution tendencies, academic stress, and coping efficacy on academic adjustment of medical students. Purpose: This study investigated the relationship among types of attribution tendencies, academic stress, coping efficacy, and academic adjustment in medical students and identified the means by which the academic adjustment of medical students can improve.
Methods: Four hundred forty-two subjects from 2 medical schools in Korea were analyzed; 202 were male, 206 were female, and 34 did not identify their gender. We surveyed their academic adjustment, attribution tendencies, academic stress, and coping efficacy. The data were analyzed by descriptive statistics, t-test, and stepwise multiple regression analysis.
Results: The male group scored higher on academic adjustment, internal attribution tendency, and coping efficacy but lower on academic stress than the female group. Coping efficacy and internal attribution tendency affected the academic adjustment positively while academic stress influenced it negatively.
Conclusion: The study indicates that students with higher scores on coping efficacy and internal attribution tendency and who have lower scores on academic stress tend to adjust better academically in medical school. Therefore, these findings may be helpful for medical schools in designing effective academic adjustment programs to improve coping efficacy and internal attribution tendency and reduce academic stress. Further, these findings have important implications for planning learning consultation programs, especially in Year 1.
abstract_id: PUBMED:32030075
The relationship between sleep quality, stress, and academic performance among medical students. Background: Sleep is essential for the body, mind, memory, and learning. However, the relationship between sleep quality, stress, and academic performance has not been sufficiently addressed in the literature. The aim of this study was to assess the quality of sleep and psychological stress among medical students and investigate the relationship between sleep quality, stress, and academic performance.
Materials And Methods: This cross-sectional study targeted all medical students in their preclinical years at a Saudi medical college in 2019. All students were asked to complete an electronic self-administered questionnaire comprising the Pittsburgh Sleep Quality Index (PSQI), the Kessler Psychological Distress Scale (K10), questions on the students' current overall grade point average, and other demographic and lifestyle factors. The associations between categorical variables were analyzed using Pearson's Chi-squared test at 0.05 significance level.
Results: The mean PSQI score was 8.13 ± 3.46; 77% of the participants reported poor quality of sleep and 63.5% reported some level of psychological stress (mean K10 score: 23.72 ± 8.55). Poor quality of sleep was significantly associated with elevated mental stress levels (P < 0.001) and daytime naps (P = 0.035). A stepwise logistic regression model showed that stress and daytime naps were associated with poor sleep quality, whereas neither poor sleep nor stress showed a significant association with academic performance.
Conclusion: Poor sleep quality was significantly associated with elevated levels of stress. However, neither poor sleep nor stress showed a statistically significant relationship with academic performance.
abstract_id: PUBMED:35782431
Family and Academic Stress and Their Impact on Students' Depression Level and Academic Performance. Current research examines the impact of academic and familial stress on students' depression levels and the subsequent impact on their academic performance, based on Lazarus' cognitive appraisal theory of stress. A non-probability convenience sampling technique was used to collect data from undergraduate and postgraduate students using a modified questionnaire with a five-point Likert scale. This study used the SEM method to examine the link between stress, depression, and academic performance. It was confirmed that academic and family stress leads to depression among students, negatively affecting their academic performance and learning outcomes. This research provides valuable information to parents, educators, and other stakeholders concerned about their children's education and performance.
abstract_id: PUBMED:31960979
Gender-specific effects of raising Year-1 standards on medical students' academic performance and stress levels. Context: Medical schools are challenged to create academic environments that stimulate students to improve their study progress without compromising their well-being.
Objectives: This prospective comparative cohort study investigated the effects of raising Year-1 standards on academic performance and on students' chronic psychological and biological stress levels.
Methods: In a Dutch medical school, students within the last Bachelor's degree cohort (n = 410) exposed to the 40/60 (67%) credit Year-1 standard (67%-credit cohort) were compared with students within the first cohort (n = 413) exposed to a 60/60 (100%) credit standard (100%-credit cohort). Main outcome measures were Year-1 pass rate (academic performance), mean score on the Perceived Stress Scale (PSS, psychological stress) and hair cortisol concentration (HCC, biological stress).
Results: Year-1 pass rates were significantly higher in the 100%-credit cohort (odds ratio [OR] 4.65). Interestingly, there was a significant interaction effect (OR 0.46), indicating that raising the standard was more effective for male than for female students. PSS scores (n = 234 [response rate [RR]: 57%] and n = 244 [RR: 59%] in the 67%- and 100%-credit cohorts, respectively) were also significantly higher in the 100%-credit cohort (F(1,474) = 15.08, P < .001). This applied specifically to female students in the 100%-credit cohort. Levels of HCC (n = 181 [RR: 44%] and n = 162 [RR: 39%] respectively) did not differ between cohorts, but were significantly higher in female students (F(1,332) = 7.93, P < .01). In separate models including cohort and gender, both PSS score (OR 0.91) and HCC (OR 0.38) were significantly associated with Year-1 performance. Only students with both high PSS scores and high HCC values were significantly at risk of lower Year-1 pass rates (OR 0.27), particularly male students.
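As an illustration of how an interaction effect like the one reported here could be modeled, the sketch below specifies a logistic regression of Year-1 pass/fail on cohort, gender, and their interaction using statsmodels. The data frame and column names are hypothetical, and the model is a simplified stand-in for the authors' actual analysis.

```python
# Hypothetical sketch of a pass-rate model with a cohort-by-gender interaction;
# column names (passed_year1, cohort_100, female) are invented for illustration.
import numpy as np
import statsmodels.formula.api as smf

def fit_pass_rate_model(df):
    # passed_year1: 0/1; cohort_100: 0 = 67%-credit cohort, 1 = 100%-credit cohort; female: 0/1
    model = smf.logit("passed_year1 ~ cohort_100 * female", data=df).fit(disp=False)
    odds_ratios = np.exp(model.params)   # exponentiated coefficients on the odds-ratio scale
    conf_int = np.exp(model.conf_int())  # corresponding 95% confidence intervals
    return model, odds_ratios, conf_int
```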
Conclusions: Raising the Year-1 performance standard increased academic performance, most notably in male students. However, it also increased levels of perceived stress, especially in female students. In particular, the combination of high levels of perceived stress and biological stress, as measured by long-term cortisol, was related to poor academic performance. The study suggests a relationship between raising performance standards and student well-being, with differential effects in male and female students.
abstract_id: PUBMED:25802809
Association of academic stress with sleeping difficulties in medical students of a Pakistani medical school: a cross-sectional survey. Introduction. Medicine is one of the most stressful fields of education because of its highly demanding professional and academic requirements. Psychological stress, anxiety, depression and sleep disturbances are highly prevalent in medical students. Methods. This cross-sectional study was undertaken at the Combined Military Hospital Lahore Medical College and the Institute of Dentistry in Lahore (CMH LMC), Pakistan. Students enrolled in all yearly courses for the Bachelor of Medicine and Bachelor of Surgery (MBBS) degree were included. The questionnaire consisted of four sections: (1) demographics, (2) a table listing 34 potential stressors, (3) the 14-item Perceived Stress Scale (PSS-14), and (4) the Pittsburgh Sleep Quality Index (PSQI). Logistic regression was run to identify associations between groups of stressors, gender, year of study, students' background, stress and quality of sleep. Results. Total response rate was 93.9% (263/280 respondents returned the questionnaire). The mean (SD) PSS-14 score was 30 (6.97). Logistic regression analysis showed that cases of high-level stress were associated with year of study and academic-related stressors only. Univariate analysis identified 157 cases with high stress levels (59.7%). The mean (SD) PSQI score was 8.1 (3.12). According to PSQI score, 203/263 respondents (77%) were poor sleepers. Logistic regression showed that mean PSS-14 score was a significant predictor of PSQI score (OR 1.99, P < 0.05). Conclusion. We found a very high prevalence of academic stress and poor sleep quality among medical students. Many medical students reported using sedatives more than once a week. Academic stressors contributed significantly to stress and sleep disorders in medical students.
abstract_id: PUBMED:36710743
Anxiety, depression, and academic stress among medical students during the COVID-19 pandemic. Background: The social distancing policies implemented by the health authorities during the COVID-19 pandemic in Mexico and elsewhere led to major changes in teaching strategies for college undergraduates. So far, there is limited data regarding the impact of the lockdown on the academic stress and mental health of these students.
Objective: To assess the occurrence of academic difficulties, anxiety, depression, and academic stressors resulting in somatization with subsequent coping strategies linked to the pandemic.
Materials And Methods: A cross-sectional study was conducted with 728 medical students (years 1-5). A purposely designed questionnaire to assess academic difficulties associated with the pandemic was administered electronically. The validated Goldberg anxiety and depression scale was also used, as well as the SISCO-II inventory on academic stress.
Results: Screening for anxiety and depression led to a prevalence of 67.9 and 81.3%, respectively. Most relevant stressors, reported always or nearly always, included professors' evaluations (63.9%), and reading overload of academic papers (50.6%). Factorial analyses showed that women were more prone to stress than men (p < 0.001). Somatization symptomatology included drowsiness or increased need of sleep, anxiety, anguish, desperation, chronic fatigue, and sleep disorders. Common coping strategies included practicing a hobby, done always or nearly always by 65% of students with high stress, and 34% of those with low stress (p < 0.001).
Conclusion: There was a relevant impact of the mandatory lockdown during COVID-19 pandemic on the mental health of medical students reflected in the high prevalence rates of anxiety, depression, and stressors in the studied population pointing to the need for designing and implementing preventive strategies to deal with the effects of lockdowns.
abstract_id: PUBMED:33573501
Online and Academic Procrastination in Students With Learning Disabilities: The Impact of Academic Stress and Self-Efficacy. The steady growth in the number of college students with learning disabilities (LD) increases the need to investigate their unique characteristics and behaviors in academia. The present study examined the differences in academic and online procrastination, academic stress, and academic self-efficacy between college students with and without LD. In addition, the relationship between these variables was examined. It was assumed that the difficulties experienced by college students with LD would lead them to increased levels of academic stress, and academic and online procrastination. The results showed significant differences in the levels of all variables except online procrastination between students with (n = 77) and without (n = 98) LD. Further analysis indicated that academic stress and academic self-efficacy mediated the link between LD and academic and online procrastination. These findings support the notion that during higher education, students with LD experience more difficulties than students without LD, which at times will lead them to increased levels of procrastination. However, further studies are needed to understand the nature of online procrastination in students with LD in higher education.
abstract_id: PUBMED:23555107
Motivation and academic achievement in medical students. Background: Despite their ascribed intellectual ability and achieved academic pursuits, medical students' academic achievement is influenced by motivation. This study is an endeavor to examine the role of motivation in the academic achievement of medical students.
Materials And Methods: In this cross-sectional correlational study, 344 of the total 422 medical students in their 4th to final years during the 2007-2008 academic year at the School of Medicine, Isfahan University of Medical Sciences, completed the Inventory of School Motivation (ISM), which comprises 43 items measuring eight aspects of motivation. The gold standard for academic achievement was their average academic marks at the pre-clinical and clinical levels. Data were analyzed by computer using descriptive and analytical tests, including Pearson correlation and Student's t-test.
Results: Higher motivation scores in areas of competition, effort, social concern, and task were accompanied by higher average marks at pre-clinical as well as clinical levels. However, the latter ones showed greater motivation for social power as compared to the former group. Task and competition motivation for boys was higher than for girls.
Conclusion: In view of our observations, students' academic achievement requires coordination and interaction between different aspects of motivation.
abstract_id: PUBMED:34408710
The Association Among Achievement Goal Orientations, Academic Performance, and Academic Well-Being Among Chinese Medical Students: A Cross-Sectional Study. Learning motivation is a significant factor that ensures quality in medical education, and might affect the academic performance and well-being of medical students. This study aimed to explore the status of achievement goal orientations among medical students in China and to further identify the association among academic performance, academic well-being, and achievement goal orientations. Data were collected through a cross-sectional, anonymous survey conducted with 3,511 respondents (effective response rate = 81.7%) from four medical universities in China; demographic factors, achievement goal orientations, academic performance, and academic well-being were assessed. The average scores of achievement goal orientations of Chinese medical students differed by demographic factors, including sex, year of study, experience of leadership cadre, and family income. Both mastery and performance-avoidance goals were associated with academic performance, subjective academic stress, subjective learning adaptability, subjective sleep quality, and subjective well-being. Performance-approach goals were related to academic performance, subjective academic stress, and subjective learning adaptability. The achievement goal orientations of the medical students in this study were at a middle level. The findings emphasize the importance of mastery goals for promoting the academic performance and well-being of medical students. More care and attention toward achievement goal orientations can be beneficial for improving the academic performance and well-being of medical students.
Answer: Yes, the type of academic assessment system does affect levels of academic stress in medical students. A study conducted in Pakistan found that medical students enrolled in a pass/fail assessment system exhibited lower levels of test anxiety and overall stress compared to those in a grade point average (GPA) system. Students in the pass/fail system were also more satisfied with their performance (PUBMED:26112353). This suggests that the type of assessment system can significantly impact the stress levels experienced by medical students. |
Instruction: Testicular growth arrest and adolescent varicocele: does varicocele size make a difference?
Abstracts:
abstract_id: PUBMED:12352335
Testicular growth arrest and adolescent varicocele: does varicocele size make a difference? Purpose: We assessed whether testicular growth arrest is related to varicocele size in adolescents. We also determined whether adolescents with a varicocele and testes of equal size treated nonoperatively are at significant risk for growth arrest and, if so, whether this risk is related to varicocele size.
Materials And Methods: We retrospectively reviewed the records of boys with a varicocele. Testis volume was measured with calipers and computed into cc as (length x width x breadth) x 0.521. Testicular growth arrest was defined as a left testis at least 15% smaller than the right testis. Varicocele size was graded 1-barely palpable, 2-palpable but not visible, 3a-visible and 1 to 1.5 times the size of the ipsilateral testis, 3b-1.5 to 2 times the size of the ipsilateral testis and 3c-greater than 2 times the size of the ipsilateral testis. Boys with a grade 1 varicocele and those treated with previous inguinal or testicular surgery were excluded from the study. Repair was recommended for testicular growth arrest or discomfort. Data were analyzed with chi-square and Fisher's exact test.
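The volume formula and growth-arrest threshold used in this study translate directly into a short calculation. The sketch below applies the reported formula (length x width x breadth x 0.521, caliper dimensions in cm giving volume in cc) and the 15% left-right difference criterion; the example measurements are made up for illustration.

```python
# Testis volume from caliper measurements and the 15% growth-arrest criterion,
# as defined in the Materials and Methods above. Example values are invented.
def testis_volume_cc(length_cm, width_cm, breadth_cm):
    return length_cm * width_cm * breadth_cm * 0.521

def has_growth_arrest(left_cc, right_cc, threshold=0.15):
    """True if the left testis is at least 15% smaller than the right."""
    return left_cc <= (1.0 - threshold) * right_cc

left = testis_volume_cc(3.5, 2.0, 1.8)   # about 6.6 cc
right = testis_volume_cc(4.0, 2.4, 2.0)  # about 10.0 cc
print(has_growth_arrest(left, right))    # True: the left testis is more than 15% smaller
```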
Results: The records of 124 boys 7 to 18 years old (mean age 13) with a varicocele were reviewed. Seven patients were excluded from analysis, yielding a total of 117 boys. Testicular growth arrest was observed at initial visit in 10 of 33 (30.3%) grade 2, 18 of 37 (48.6%) grade 3a, 14 of 31 (45.2%) grade 3b and 6 of 16 (37.5%) grade 3c cases (p not significant), or a total of 38 of 84 (45.2%) grade 3 cases (p <0.01) plus grade 2. Followup ranged from 1 to 5 years. Of the cases of equal sized testes at presentation growth arrest was observed in 3 of 16 (18.8%) grade 2, 2 of 11 (18.2%) grade 3a, 4 of 14 (28.6%) grade 3b and 3 of 9 (33.3%) grade 3c (p not significant), or a total of 9 of 34 (26.5%) grade 3 cases (p not significant) plus grade 2. Overall, testicular growth arrest was found in 13 of 33 (39%) grade 2 and 47 of 84 (56%) grade 3 varicoceles (p <0.01).
Conclusions: Boys with a varicocele are at significant risk for testicular growth arrest, irrespective of varicocele size, and those with a grade 3 varicocele have a higher risk of testicular growth arrest than those with a grade 2 varicocele. Of boys with testes of equal size at diagnosis growth arrest is observed during adolescence in approximately 25% irrespective of varicocele size.
abstract_id: PUBMED:9120980
Varicocele treatment in pubertal boys prevents testicular growth arrest. Purpose: There is evidence that varicocele damage, as reflected by loss of testicular mass, is most striking in the pubertal age group. We attempted to evaluate the long-term effect of early varicocele treatment on testicular growth and sperm count and, thus, determine its prophylactic value.
Materials And Methods: We compared testicular mass and sperm count in 32 men (mean age 28 years) who underwent surgery for varicocele at 11 to 15 years old (mean age 13) to those in 26 untreated, age matched men (mean age 30 years) with varicocele and 27 male controls (mean age 25 years). Mean followup in the treated group was 14.5 years (range 12 to 20). Testicular volumes were measured by ultrasonography.
Results: There was no significant difference between left and right testicular volumes in the treated or control group, in contrast to the untreated group, in which the left testicles were significantly smaller. Comparison of testicular mass showed a striking similarity between the treated and control groups, while there was a significant difference when the untreated group was compared to the control and operated groups. Total sperm counts were significantly less in the untreated than the treated and control groups.
Conclusions: These data support the notion that testicular hypotrophy related to varicocele may be reversed by early intervention and they further strengthen the indication for varicocelectomy in children.
abstract_id: PUBMED:18421464
Relationship between varicocele grade, vein reflux and testicular growth arrest. The development of testicular hypotrophy (or testicular growth arrest) in pediatric patients with varicocele is the primary indication for surgery. The aim of this study is to identify the correlation between grade of varicocele, grade of vein reflux and testicular growth arrest. Between 2000 and 2001, we recruited 226 patients affected by varicocele without testicular hypotrophy and with grades 2-3 spermatic vein reflux observed during Doppler velocimetry. Medical examinations carried out every 6 months allowed the assessment of varicocele grade, testicular volume, and grade of vein reflux. Other parameters considered in the study were: mean time of grade deterioration, mean time to onset of testicular growth arrest and the relationship between varicocele grade and testicular growth arrest. Deterioration of the condition occurred in 92 patients (40%), of whom 60 showed higher varicocele grades without testicular growth arrest, while 32 developed testicular growth arrest. There was a statistically significant relationship between testicular growth arrest and varicocele grades (grade 2 and 3) and between grade of reflux and testicular growth arrest. Although it is not possible to determine which patients will develop testicular growth arrest, the assessment of vein reflux allows the identification of those subjects who may potentially develop such a condition.
abstract_id: PUBMED:37394305
Testicular catch-up growth in the non-operative management of the adolescent varicocele. Introduction: Adolescent varicocele is a common urologic condition with a spectrum of outcomes, leading to variations in management. Testicular hypotrophy is a common indication for surgery. Routine monitoring may be an appropriate form of management for many adolescents with testicular hypotrophy, as studies have shown that a large proportion of these patients may experience catch-up growth of the ipsilateral testis. Furthermore, there are few longitudinal studies which have correlated patient-specific factors with catch-up growth. We aimed to determine the frequency of testicular catch-up growth in adolescents with varicocele while also examining whether patient-specific factors such as BMI, BMI percentile, or height correlated with testicular catch-up growth.
Methods: A retrospective chart review found adolescent patients who presented to our institution with varicocele from 1997 to 2019. Patients between the ages of 9 and 20 years with left-sided varicocele, a clinically significant testicular size discrepancy, and at least two scrotal ultrasounds at least one year apart were included in analysis. Testicular size discrepancy of greater than 15% on scrotal ultrasound was considered clinically significant. Testicular size was estimated in volume (mL) via the Lambert formula. Statistical relationships between testicular volume differential and height, body mass index (BMI), and age were described with Spearman correlation coefficients (ρ).
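The two quantitative steps described here, estimating testicular volume and correlating the volume differential with patient factors, can be sketched as follows. The Lambert formula is taken as V = 0.71 x length x width x height, a commonly cited form that the abstract does not spell out, and the per-patient values are hypothetical.

```python
# Sketch of the volume estimate and Spearman correlation described above.
# Assumptions: Lambert formula V = 0.71 * L * W * H (coefficient not stated in the
# abstract) and a volume differential expressed as a percentage of the right testis.
from scipy.stats import spearmanr

def lambert_volume_ml(length_cm, width_cm, height_cm):
    return 0.71 * length_cm * width_cm * height_cm

def volume_differential_pct(left_ml, right_ml):
    return 100.0 * (right_ml - left_ml) / right_ml

# Hypothetical per-patient data: baseline differential (%) and baseline BMI
differentials = [22.0, 18.5, 30.1, 16.4, 25.7]
bmis = [21.3, 24.8, 19.9, 27.5, 23.1]
rho, p_value = spearmanr(differentials, bmis)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```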
Results: 40 patients had a testicular volume differential of greater than 15% at some point during their clinical course and were managed non-operatively with observation and serial testicular ultrasounds. On follow-up ultrasound, 32/40 (80%) had a testicular volume differential of less than 15%, with a mean age at catch-up growth of 15 years (SD 1.6, range 11-18 years). There were no significant correlations between baseline testicular volume differential and baseline BMI (ρ = 0.00, 95% CI [-0.32, 0.32]), baseline BMI percentile (ρ = 0.03, 95% CI [-0.30, 0.34]), or change in height over time (ρ = 0.05, 95% CI [-0.36, 0.44]).
Discussion: The majority of adolescents with varicocele and testicular hypotrophy exhibited catch-up growth with observation, suggesting that surveillance is an appropriate form of management in many adolescents. These findings are consistent with previous studies and further indicate the importance of observation for the adolescent varicocele. Further research is warranted to determine patient specific factors that correlate with testicular volume differential and catch up growth in the adolescent varicocele.
abstract_id: PUBMED:18817958
Testicular catch-up growth after varicocelectomy: does surgical technique make a difference? Objectives: Catch-up growth of the affected testis in adolescents after varicocele repair has been well documented. Many investigators have found evidence that testicular hypotrophy related to varicocele can be reversed by early intervention. The aim of this study was to analyze the testicular catch-up growth rate in pediatric patients, correlating it with patient age at surgery, varicocele size, procedures used, and semen quality.
Methods: Between March 1990 and September 2006, a total of 465 varicocelectomies were performed at our department. We evaluated the mean testicular volume before and after varicocelectomy in patients aged 9-14 years. Two procedures were used: laparoscopic artery-preserving varicocelectomy (group 1) and open inguinal microscopic artery-preserving varicocelectomy with a venous-venous bypass (group 2). The testicular volume was measured before and after surgery using ultrasonography, and the mean testicular catch-up growth was recorded.
Results: Although the overall catch-up growth rate for both groups was 80%, after 18 months, only 45% of patients in group 1 and 34% of patients in group 2 had equal bilateral testicular volume. None of these procedures showed a statistically significant correlation with age at surgery, varicocele size, or catch-up rate. The semen analysis results did not show statistically significant differences between the 2 groups.
Conclusions: Although 80% of patients demonstrated testicular catch-up, with a different distribution depending on the procedure type used but without statistically significant differences, only 32% of patients had complete and real testicular volume catch-up.
abstract_id: PUBMED:30604693
Effects of percutaneous varicocele repair on testicular volume: results from a 12-month follow-up. Varicocele is a common finding in men. Varicocele correction has been advocated for young patients with testicular hypotrophy, but there is a lack of morphofunctional follow-up data. We assessed whether percutaneous treatment of left varicocele is associated with testicular "catch-up growth" in the following 12 months by retrospectively reviewing data from an electronic database of 10 656 patients followed up in our clinic between 2006 and 2016. We selected all young adults (<35 years) with left varicocele who underwent percutaneous treatment, had a minimum of 12 months' ultrasound imaging follow-up, and had no other conditions affecting testicular volume. One hundred and fourteen men (mean±standard deviation [s.d.] of age: 22.8 ± 5.4 years) met the inclusion and exclusion criteria. Left testicular hypotrophy (LTH), defined as a ≥20% difference between left and right testicular volume at baseline, was observed in 26 (22.8%) men. Participants with LTH (mean±s.d.: 14.5 ± 2.7 ml) had lower baseline testicular volume compared to those without LTH (mean±s.d.: 15.7 ± 3.8 ml; P = 0.032). Repeated measures mixed models showed a significant interaction between LTH and time posttreatment when correcting for baseline left testicular volume (β = 0.114, 95% confidence interval [CI]: 0.018-0.210, P = 0.020), resulting in a catch-up growth of up to 1.37 ml per year (95% CI: 0.221-2.516). Age at intervention was also associated with reduced testicular volume (-0.072 ml per year, 95% CI: -0.135--0.009; P = 0.024). Percutaneous treatment of left varicocele in young adults with LTH can result in catch-up growth over 1 year of follow-up. The reproductive and psychological implications of these findings need to be confirmed in longer and larger prospective studies.
abstract_id: PUBMED:36510859
Cremaster muscle thickening: the anatomic difference in men with testicular retraction due to hyperactive cremaster muscle reflex. The objective was to assess whether men suffering from testicular retraction secondary to hyperactive cremaster muscle reflex have an anatomic difference in the thickness of the cremaster muscle in comparison to men who do not have retraction. From March 2021 to December 2021, 21 men underwent microsurgical subinguinal cremaster muscle release (MSCMR) on 33 spermatic cord units, as 12 of them had bilateral surgery, at Surgicare of South Austin Ambulatory Surgery Center in Austin, TX, USA. During that same time frame, 36 men underwent subinguinal microsurgical varicocele repair on 41 spermatic cord units, as 5 were bilateral for infertility. The thickness of cremaster muscles was measured by the operating surgeon in men undergoing MSCMR and varicocele repair. Comparison was made between the cremaster muscle thickness in men with testicular retraction due to a hyperactive cremaster muscle reflex undergoing MSCMR and the cremaster muscle thickness in men undergoing varicocele repair for infertility with no history of testicular retraction, which served as an anatomic control. The mean cremaster muscle thickness in men who underwent MSCMR was significantly greater than those undergoing varicocele repair for infertility, with a mean cremaster muscle thickness of 3.9 (standard deviation [s.d.]: 1.2) mm vs 1.0 (s.d.: 0.4) mm, respectively. Men with testicular retraction secondary to a hyperactive cremaster muscle reflex demonstrate thicker cremaster muscles than controls, those undergoing varicocele repair. An anatomic difference may be a beginning to understanding the pathology in men who struggle with testicular retraction.
abstract_id: PUBMED:9580463
Testicular microlithiasis: diagnosis associated with orchialgia. Objectives: To analyze the possible association between orchidalgia and testicular microlithiasis and to determine whether this condition has a negative effect on fertility.
Methods: Two male patients with similar findings of microlithiasis on the testicular ultrasound were studied. One patient had a history of thalassemia and the other patient had intermittent episodes of testicular torsion. A histological study was performed in both patients.
Results: The testicular pain remitted spontaneously in the first case and after orchidopexy in the other patient. Biopsy disclosed diminished spermatogenesis and no anomaly, respectively.
Conclusion: Our findings and the reports published in the literature indicate that testicular microlithiasis cannot be considered to be an etiological factor in orchidalgia or infertility.
abstract_id: PUBMED:28481072
Evaluation of testicular growth after varicocele treatment in early childhood and adolescence based on the technique used (Palomo, Ivanissevich and embolization). Objectives: To analyze recurrence, symptomatic improvement and testicular growth following treatment of testicular varicocele, according to the technique employed.
Material And Methods: Descriptive retrospective study of 69 pediatric and adolescent males diagnosed with varicocele and treated in our center between 2000 and 2014, by open surgery according to the Ivanissevich (IT) or Palomo (PT) technique or by percutaneous embolization (PE). The variables analyzed were age, symptoms, differential testicular volume (RV), technique employed, recurrence, symptomatic improvement and RV after treatment. Associations between qualitative variables were evaluated with the chi-square test or Fisher's exact test.
Results: 69 patients with a median age of 14 years (7-19) were studied. PE was performed in 37 patients (53.6%), PT in 23 (33.3%) and IT in 9 (13%). Recurrence occurred in 16 patients (23.2%), 80% of whom had been treated with PE. Eleven patients had pain (15.9%); pain improved in 100% of those treated with PE, but in none of those treated by PT or IT. At diagnosis, 37 patients (53.6%) had decreased testicular volume (left testicular hypotrophy); in 28 cases the RV was >20%. After treatment, the RV had normalized in 11 cases (39.2%).
Conclusions: The choice of therapeutic technique in pediatric varicocele should be based on patient characteristics, symptoms, the center's experience with embolization, and previous recurrence. Regardless of the technique chosen, 39.2% of cases of testicular hypotrophy with an RV >20% at diagnosis normalized after treatment.
abstract_id: PUBMED:21346830
The validity of testicular catch-up growth and serum FSH levels in the long-term postoperative assessment of laparoscopic varicocele correction in adolescents. Background: Postoperative assessment after varicocele surgery in adolescence is commonly centred around catch-up growth of the testis. There is paucity of evidence on the correlation of catch-up growth with underlying testicular function in these patients.
Aims: To assess the reliability of catch-up growth of the testis as an indicator of normalization of testicular function and the utility of serum FSH levels in the long-term postoperative assessment of varicocele surgery in adolescence.
Materials And Methods: Prospective cohort study of young adults (18-27 years) who had laparoscopic varicocele correction in adolescence (11-16 years). Evaluation included serum FSH levels, scrotal ultrasonography and semen analysis.
Analysis: Anatomical and functional parameters of participants with equal and normal testicular size were compared to those of participants with persistent testicular hypotrophy or hypertrophy. Sensitivity and positive predictive value of postoperative serum FSH levels were estimated and elevated levels of serum FSH were checked for association with suboptimal outcomes of varicocele correction.
Results: The serum FSH levels of participants with unequal testicular sizes (n=6, median 6.65 IU/l), which included testicular hypertrophy (n=3, median 7.2 IU/l) and persistent testicular hypotrophy (n=3, median 6.1 IU/l), were significantly higher than those of the group with equal testicular sizes (n=8, median 3.5 IU/l; P=0.014, Mann-Whitney U test). Postoperative elevated serum FSH levels were significantly associated with suboptimal outcomes of varicocele surgery (P=0.015, Fisher's exact test). The test also had a high positive predictive value.
Conclusions: Testicular catch-up growth may not be a reliable postoperative assessment criterion by itself. Serum FSH levels may be of value in detecting suboptimal outcomes of varicocele surgery in adolescents.
Answer: Testicular growth arrest in adolescents with varicocele does appear to be influenced by varicocele size. A study found that boys with a varicocele are at significant risk for testicular growth arrest, irrespective of varicocele size, but those with a grade 3 varicocele have a higher risk of testicular growth arrest than those with a grade 2 varicocele (PUBMED:12352335). Additionally, the risk of testicular growth arrest was observed in approximately 25% of boys with testes of equal size at diagnosis during adolescence, regardless of varicocele size (PUBMED:12352335).
Another study showed that varicocele treatment in pubertal boys prevents testicular growth arrest, suggesting that early intervention can reverse testicular hypotrophy related to varicocele (PUBMED:9120980). Furthermore, a significant relationship was found between testicular growth arrest and varicocele grades (grade 2 and 3) and between grade of vein reflux and testicular growth arrest (PUBMED:18421464).
In terms of non-operative management, a study indicated that the majority of adolescents with varicocele and testicular hypotrophy exhibited catch-up growth with observation, suggesting that surveillance is an appropriate form of management in many adolescents (PUBMED:37394305). However, another study noted that while 80% of patients demonstrated testicular catch-up after varicocelectomy, only a portion had complete and real testicular volume catch-up, and the surgical technique did not show a statistically significant correlation with the catch-up rate (PUBMED:18817958).
Percutaneous treatment of left varicocele in young adults with left testicular hypotrophy (LTH) can result in catch-up growth over 1 year of follow-up (PUBMED:30604693). This suggests that intervention can lead to positive outcomes in terms of testicular volume.
In summary, while varicocele size does have an impact on the risk of testicular growth arrest, with larger varicoceles posing a higher risk, treatment and management strategies, including both surgical and observational approaches, can influence the outcome and potentially lead to testicular catch-up growth. |
Instruction: Can Zipf's law be adapted to normalize microarrays?
Abstracts:
abstract_id: PUBMED:15727680
Can Zipf's law be adapted to normalize microarrays? Background: Normalization is the process of removing non-biological sources of variation between array experiments. Recent investigations of data in gene expression databases for varying organisms and tissues have shown that the majority of expressed genes exhibit a power-law distribution with an exponent close to -1 (i.e. obey Zipf's law). Based on the observation that our single channel and two channel microarray data sets also followed a power-law distribution, we were motivated to develop a normalization method based on this law, and examine how it compares with existing published techniques. A computationally simple and intuitively appealing technique based on this observation is presented.
Results: Using pairwise comparisons using MA plots (log ratio vs. log intensity), we compared this novel method to previously published normalization techniques, namely global normalization to the mean, the quantile method, and a variation on the loess normalization method designed specifically for boutique microarrays. Results indicated that, for single channel microarrays, the quantile method was superior with regard to eliminating intensity-dependent effects (banana curves), but Zipf's law normalization does minimize this effect by rotating the data distribution such that the maximal number of data points lie on the zero of the log ratio axis. For two channel boutique microarrays, the Zipf's law normalizations performed as well as, or better than existing techniques.
Conclusion: Zipf's law normalization is a useful tool where the Quantile method cannot be applied, as is the case with microarrays containing functionally specific gene sets (boutique arrays).
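The abstract above describes the idea of rescaling each array so that its expression values follow a power law with an exponent close to -1. The sketch below is a minimal illustration of that idea under simplifying assumptions (a plain rank-intensity fit followed by a power-transform rescaling); it is not the authors' published procedure, and the fitting window, synthetic data and function names are invented for the example.

```python
# Minimal sketch of a Zipf's-law-style normalization for a single array:
# fit the slope of log(intensity) vs. log(rank), then rescale so the slope becomes -1.
# This illustrates the idea only; it is not the published method from the abstract.
import numpy as np

def zipf_normalize(intensities, fit_fraction=0.5):
    """Rescale intensities so that log(intensity) vs. log(rank) has slope -1."""
    x = np.sort(np.asarray(intensities, dtype=float))[::-1]     # descending intensities
    ranks = np.arange(1, x.size + 1)
    n_fit = max(2, int(x.size * fit_fraction))                  # fit only the top-ranked genes
    slope, _ = np.polyfit(np.log(ranks[:n_fit]), np.log(x[:n_fit] + 1e-12), 1)
    # If intensity ~ rank**slope, then intensity**(-1/slope) ~ rank**(-1), i.e. Zipf's law.
    return np.asarray(intensities, dtype=float) ** (-1.0 / slope)

rng = np.random.default_rng(0)
raw = rng.pareto(1.2, size=5000) + 1.0       # synthetic, roughly power-law intensities
normalized = zipf_normalize(raw)
```

In a practical pipeline the same transformation would be applied per array (or per channel) before computing the log ratios used in the MA plots mentioned in the Results.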
abstract_id: PUBMED:36321174
Applicability of Zipf's Law in Traditional Chinese Medicine Prescriptions. Objective Traditional Chinese medicine (TCM) prescriptions have been used to cure diseases in China for thousands of years, in which many TCM herbs have no definite common quantity. Some key TCM herbs are commonly used and thus deserve in-depth investigations based on a more acceptable classification method. This study analyzes whether TCM prescriptions follow Zipf's law and attempts to obtain the thresholds of key TCM herbs based on the application of Zipf's law. Methods A total of 84,418 TCM prescriptions were collected and standardized. We tested whether Zipf's law and Zipf's distribution fit the Chinese herb distributions. A linear fitting experiment was performed to verify the relationship between the frequency distribution and frequency of TCM herbs. Results The distribution of TCM herbs in TCM prescriptions conformed to Zipf's law. Accordingly, the thresholds were obtained for the key TCM herbs. Conclusion The distribution of TCM herbs in TCM prescriptions follows Zipf's law.
abstract_id: PUBMED:32102480
Does China's Urban Development Satisfy Zipf's Law? A Multiscale Perspective from the NPP-VIIRS Nighttime Light Data. Currently, whether the urban development in China satisfies Zipf's law across different scales is still unclear. Thus, this study attempted to explore whether China's urban development satisfies Zipf's law across different scales from the National Polar-Orbiting Partnership's Visible Infrared Imaging Radiometer Suite (NPP-VIIRS) nighttime light data. First, the NPP-VIIRS data were corrected. Then, based on the Zipf law model, the corrected NPP-VIIRS data were used to evaluate China's urban development at multiple scales. The results showed that the corrected NPP-VIIRS data could effectively reflect the state of urban development in China. Additionally, the Zipf index (q) values, which could express the degree of urban development, decreased from 2012 to 2018 overall in all provinces, prefectures, and counties. Since the value of q was relatively close to 1 with an R2 value > 0.70, the development of the provinces and prefectures was close to the ideal Zipf's law state. In all counties, q > 1 with an R2 value > 0.70, which showed that the primate county had a relatively stronger monopoly capacity. When the value of q < 1 with a continuous declination in the top 2000 counties, the top 250 prefectures, and the top 20 provinces in equilibrium, there was little difference in the scale of development at the multiscale level with an R2 > 0.90. The results enriched our understanding of urban development in terms of Zipf's law and had valuable implications for relevant decision-makers and stakeholders.
abstract_id: PUBMED:33286187
Zipf's Law of Vasovagal Heart Rate Variability Sequences. Cardiovascular self-organized criticality (SOC) has recently been demonstrated by studying vasovagal sequences. These sequences combine bradycardia and a decrease in blood pressure. Observing enough of these sparse events is a barrier that prevents a better understanding of cardiovascular SOC. Our primary aim was to verify whether SOC could be studied by solely observing bradycardias and by showing their distribution according to Zipf's law. We studied patients with vasovagal syncope. Twenty-four of them had a positive outcome to the head-up tilt table test, while matched patients had a negative outcome. Bradycardias were distributed according to Zipf's law in all of the patients. The slope of the distribution of vasovagal sequences and bradycardia are slightly but significantly correlated, but only in cases of bradycardias shorter than five beats, highlighting the link between the two methods (r = 0.32; p < 0.05). These two slopes did not differ in patients with positive and negative outcomes, whereas the distribution slopes of bradycardias longer than five beats were different between these two groups (-0.187 ± 0.004 and -0.213 ± 0.006, respectively; p < 0.01). Bradycardias are distributed according to Zipf's law, providing clear insight into cardiovascular SOC. Bradycardia distribution could provide an interesting diagnosis tool for some cardiovascular diseases.
abstract_id: PUBMED:30958235
Zipf's Law, unbounded complexity and open-ended evolution. A major problem for evolutionary theory is understanding the so-called open-ended nature of evolutionary change, from its definition to its origins. Open-ended evolution (OEE) refers to the unbounded increase in complexity that seems to characterize evolution on multiple scales. This property seems to be a characteristic feature of biological and technological evolution and is strongly tied to the generative potential associated with combinatorics, which allows the system to grow and expand their available state spaces. Interestingly, many complex systems presumably displaying OEE, from language to proteins, share a common statistical property: the presence of Zipf's Law. Given an inventory of basic items (such as words or protein domains) required to build more complex structures (sentences or proteins) Zipf's Law tells us that most of these elements are rare whereas a few of them are extremely common. Using algorithmic information theory, in this paper we provide a fundamental definition for open-endedness, which can be understood as postulates. Its statistical counterpart, based on standard Shannon information theory, has the structure of a variational problem which is shown to lead to Zipf's Law as the expected consequence of an evolutionary process displaying OEE. We further explore the problem of information conservation through an OEE process and we conclude that statistical information (standard Shannon information) is not conserved, resulting in the paradoxical situation in which the increase of information content has the effect of erasing itself. We prove that this paradox is solved if we consider non-statistical forms of information. This last result implies that standard information theory may not be a suitable theoretical framework to explore the persistence and increase of the information content in OEE systems.
abstract_id: PUBMED:35840837
Zipf's law revisited: Spoken dialog, linguistic units, parameters, and the principle of least effort. The ubiquitous inverse relationship between word frequency and word rank is commonly known as Zipf's law. The theoretical underpinning of this law states that the inverse relationship yields decreased effort in both the speaker and hearer, the so-called principle of least effort. Most research has focused on showing an inverse relationship only for written monolog, only for frequencies and ranks of one linguistic unit, generally word unigrams, with strong correlations of the power law to the observed frequency distributions, with limited to no attention to psychological mechanisms such as the principle of least effort. The current paper extends the existing findings, by not focusing on written monolog but on a more fundamental form of communication, spoken dialog, by not only investigating word unigrams but also units quantified on syntactic, pragmatic, utterance, and nonverbal communicative levels by showing that the adequacy of Zipf's formula seems ubiquitous, but the exponent of the power law curve is not, and by placing these findings in the context of Zipf's principle of least effort through redefining effort in terms of cognitive resources available for communication. Our findings show that Zipf's law also applies to a more natural form of communication-that of spoken dialog, that it applies to a range of linguistic units beyond word unigrams, that the general good fit of Zipf's law needs to be revisited in light of the parameters of the formula, and that the principle of least effort is a useful theoretical framework for the findings of Zipf's law.
abstract_id: PUBMED:29456419
Zipf-Mandelbrot law, f-divergences and the Jensen-type interpolating inequalities. Motivated by the method of interpolating inequalities that makes use of the improved Jensen-type inequalities, in this paper we integrate this approach with the well-known Zipf-Mandelbrot law applied to various types of f-divergences and distances, such as Kullback-Leibler divergence, Hellinger distance, Bhattacharyya distance (via coefficient), [Formula: see text]-divergence, total variation distance and triangular discrimination. Addressing these applications, we firstly deduce general results of the type for the Csiszár divergence functional from which the listed divergences originate. When presenting the analyzed inequalities for the Zipf-Mandelbrot law, we accentuate its special form, the Zipf law with its specific role in linguistics. We introduce this aspect through the Zipfian word distribution associated with the English and Russian languages, using the obtained bounds for the Kullback-Leibler divergence.
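The abstract above does not reproduce the underlying formulas; for orientation, the standard textbook definitions of the Zipf-Mandelbrot law and of the Kullback-Leibler divergence it is combined with are given below. These are general background definitions, not expressions quoted from the paper.

```latex
% Standard definitions, given here as background (not quoted from the abstract).
% Zipf-Mandelbrot law over ranks i = 1, ..., N, with parameters q >= 0 and s > 0:
f(i; N, q, s) \;=\; \frac{(i + q)^{-s}}{H_{N,q,s}},
\qquad
H_{N,q,s} \;=\; \sum_{k=1}^{N} (k + q)^{-s},
% which reduces to the classical Zipf law f(i) \propto i^{-s} when q = 0.
% Kullback-Leibler divergence between discrete distributions P = (p_i) and Q = (q_i):
D_{\mathrm{KL}}(P \,\|\, Q) \;=\; \sum_{i=1}^{N} p_i \log \frac{p_i}{q_i}.
```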
abstract_id: PUBMED:33285998
The Brevity Law as a Scaling Law, and a Possible Origin of Zipf's Law for Word Frequencies. An important body of quantitative linguistics is constituted by a series of statistical laws about language usage. Despite the importance of these linguistic laws, some of them are poorly formulated, and, more importantly, there is no unified framework that encompasses all of them. This paper presents a new perspective to establish a connection between different statistical linguistic laws. Characterizing each word type by two random variables, length (in number of characters) and absolute frequency, we show that the corresponding bivariate joint probability distribution shows a rich and precise phenomenology, with the type-length and the type-frequency distributions as its two marginals, and the conditional distribution of frequency at fixed length providing a clear formulation for the brevity-frequency phenomenon. The type-length distribution turns out to be well fitted by a gamma distribution (much better than with the previously proposed lognormal), and the conditional frequency distributions at fixed length display power-law-decay behavior with a fixed exponent α ≃ 1.4 and a characteristic-frequency crossover that scales as an inverse power δ ≃ 2.8 of length, which implies the fulfillment of a scaling law analogous to those found in the thermodynamics of critical phenomena. As a by-product, we find a possible model-free explanation for the origin of Zipf's law, which should arise as a mixture of conditional frequency distributions governed by the crossover length-dependent frequency.
abstract_id: PUBMED:37364507
Zipf's Law of Abbreviation holds for individual characters across a broad range of writing systems. Zipf's Law of Abbreviation - the idea that more frequent symbols in a code are simpler than less frequent ones - has been shown to hold at the level of words in many languages. We tested whether it holds at the level of individual written characters. Character complexity is similar to word length in that it requires more cognitive and motor effort for producing and processing more complex symbols. We built a dataset of character complexity and frequency measures covering 27 different writing systems. According to our data, Zipf's Law of Abbreviation holds for every writing system in our dataset - the more frequent characters have lower degrees of complexity and vice-versa. This result provides further evidence of optimization mechanisms shaping communication systems.
abstract_id: PUBMED:28904518
Majorization, Csiszár divergence and Zipf-Mandelbrot law. In this paper we show how the Shannon entropy is connected to the theory of majorization. They are both linked to the measure of disorder in a system. However, the theory of majorization usually gives stronger criteria than the entropic inequalities. We give some generalized results for majorization inequality using Csiszár f-divergence. This divergence, applied to some special convex functions, reduces the results for majorization inequality in the form of Shannon entropy and the Kullback-Leibler divergence. We give several applications by using the Zipf-Mandelbrot law.
Answer: Yes, Zipf's law can be adapted to normalize microarrays. This is supported by a study that developed a normalization method based on the observation that single channel and two channel microarray data sets followed a power-law distribution, which is consistent with Zipf's law. The study compared this novel method to previously published normalization techniques, such as global normalization to the mean, the quantile method, and a variation on the loess normalization method. The results indicated that for single channel microarrays, the quantile method was superior in eliminating intensity-dependent effects, but Zipf's law normalization minimized this effect by rotating the data distribution such that the maximal number of data points lie on the zero of the log ratio axis. For two channel boutique microarrays, Zipf's law normalizations performed as well as, or better than, existing techniques. Therefore, Zipf's law normalization is a useful tool, especially where the Quantile method cannot be applied, as is the case with microarrays containing functionally specific gene sets (boutique arrays) (PUBMED:15727680). |
Instruction: Does parental smoking cessation encourage their young adult children to quit smoking?
Abstracts:
abstract_id: PUBMED:15733251
Does parental smoking cessation encourage their young adult children to quit smoking? A prospective study. Aims: To investigate the extent to which parental early and late smoking cessation predicts their young adult children's smoking cessation.
Design: Parental early smoking cessation status was assessed when children were in 3rd grade, parental late smoking cessation was assessed when children were in 11th grade, and young adult children's smoking cessation was assessed 2 years after high school.
Setting: Forty Washington State school districts participated in the Hutchinson Smoking Prevention Project.
Participants And Measurements: Participants were the 1553 families in which parents were ever regular smokers who had a young adult child smoking at least weekly at 12th grade who also reported their smoking status 2 years later. Questionnaire data were gathered on parents and their young adult children (49% female and 91% Caucasian) in a cohort with a 94% retention rate.
Findings: Parents who quit early had children with 1.8 (OR = 1.80; 95% CI = 1.22, 2.64) times higher odds of quitting smoking for at least 1 month in young adulthood compared to those whose parents did not quit early. In contrast, there was no association (OR = 0.84; 95% CI = 0.47, 1.51) between parents quitting late and their young adult children's smoking cessation.
Conclusions: Parental early smoking cessation is associated with increased odds of their young adult children's smoking cessation. Parents who smoke should be encouraged to quit when their children are young.
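The odds ratios and confidence intervals quoted above (e.g. OR = 1.80; 95% CI = 1.22, 2.64) come from the study's adjusted models, but the basic calculation behind an odds ratio and its Wald-type 95% CI can be shown on a simple 2x2 table. The sketch below uses invented counts purely for illustration; it will not reproduce the published, covariate-adjusted estimates.

```python
# Crude odds ratio with a Wald 95% confidence interval on a hypothetical 2x2 table.
# Counts are invented for illustration; they are not the study's data.
import math

# rows: parent quit early (yes / no); columns: young adult quit >= 1 month (yes / no)
a, b = 90, 160     # parent quit early:      child quit / child did not quit
c, d = 180, 580    # parent did not quit:    child quit / child did not quit

or_hat = (a * d) / (b * c)                       # cross-product odds ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)     # standard error of log(OR)
ci_low = math.exp(math.log(or_hat) - 1.96 * se_log_or)
ci_high = math.exp(math.log(or_hat) + 1.96 * se_log_or)
print(f"OR = {or_hat:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")   # ~1.81 (1.33, 2.47)
```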
abstract_id: PUBMED:19392909
Parents who quit smoking and their adult children's smoking cessation: a 20-year follow-up study. Aims: Extending our earlier findings from a longitudinal cohort study, this study examines parents' early and late smoking cessation as predictors of their young adult children's smoking cessation.
Design: Parents' early smoking cessation status was assessed when their children were aged 8 years; parents' late smoking cessation was assessed when their children were aged 17 years. Young adult children's smoking cessation, of at least 6 months duration, was assessed at age 28 years.
Setting: Forty Washington State school districts.
Participants And Measurements: Participants were 991 at least weekly smokers at age 17 whose parents were ever regular smokers and who also reported their smoking status at age 28. Questionnaire data were gathered on parents and their children (49% female and 91% Caucasian) in a longitudinal cohort (84% retention).
Findings: Among children who smoked daily at age 17, parents' quitting early (i.e. by the time their children were aged 8) was associated with a 1.7 times higher odds of these children quitting by age 28 compared to those whose parents did not quit [odds ratio (OR) 1.70; 95% confidence interval (CI) 1.23, 2.36]. Results were similar among children who smoked weekly at age 17 (OR 1.91; 95% CI 1.41, 2.58). There was a similar, but non-significant, pattern of results among those whose parents quit late.
Conclusions: Supporting our earlier findings, results suggest that parents' early smoking cessation has a long-term influence on their adult children's smoking cessation. Parents who smoke should be encouraged to quit when their children are young.
abstract_id: PUBMED:31802718
Prevalence and Factors Associated With Attempts to Quit and Smoking Cessation in Malaysia. Smoking cessation significantly reduces risk of smoking-related diseases and mortality. This study aims to determine the prevalence and factors associated with attempts to quit and smoking cessation among adult current smokers in Malaysia. Data from the National E-Cigarette Survey 2016 were analyzed. Forty-nine percent of current smokers had attempted to quit at least once in the past 12 months and 31.4% of the respondents were former smokers. Multivariable analysis revealed that current smokers with low nicotine addiction and aged below 45 years were more likely to attempt to quit smoking. Being married, being in an older age group, and having tertiary education were significantly associated with smoking cessation. Only half of the current smokers ever attempted to quit smoking and only a third of smokers quit. Stronger tobacco control policies are needed in Malaysia to encourage more smokers to quit smoking. Improved access to cessation support for underprivileged smokers is also needed.
abstract_id: PUBMED:16869849
Parental tobacco smoking behaviour and their children's smoking and cessation in adulthood. Aims: To examine the extent to which childhood exposure to parental tobacco smoking, smoking cessation and parental disapproval of smoking predicts daily smoking and attempts to quit in adulthood.
Design: A longitudinal prospective design was used to examine the possible association between parental smoking variables in childhood and adolescence and subsequent smoking and cessation by age 26 years.
Participants: Interview data were collected as part of a longitudinal study of some 950 individuals followed from birth to age 26 years. Outcome measures were daily smoking and self-reported attempts to quit smoking.
Findings: Less daily smoking among the participants at age 26 was related more strongly to parental smoking cessation in the adolescent years than the childhood years. By contrast, inconsistent advice about smoking in childhood and adolescence predicted later daily smoking. Cessation attempts to age 26 were unrelated to earlier parental quitting but were related to consistent advice in adolescence from both parents about smoking.
Conclusions: Encouraging parents to voice consistent messages about their disapproval of smoking has a significant role to play in discouraging smoking in their adult children and promoting attempts to quit where their children are smokers.
abstract_id: PUBMED:23327264
Young adult smoking cessation: predictors of quit attempts and abstinence. We examined smoking cessation behaviors in 592 young adult smokers from the Ontario Tobacco Survey, coding cessation behavior as no attempt, quit attempt (< 30 days), or abstinence (≥ 30 days) during follow-up from July 2005 through December 2008. One in 4 young adults made an attempt; 14% obtained 30-day abstinence. Cessation resources, prior attempts, and intention predicted quit attempts, whereas high self-efficacy, using resources, having support, and low addiction predicted abstinence, indicating that young adult smokers require effective and appropriate cessation resources.
abstract_id: PUBMED:35558421
The Relationship Between Smoker Identity and Smoking Cessation Among Young Smokers: The Role of Smoking Rationalization Beliefs and Cultural Value of Guanxi. Although the relationship between smoker identity and smoking cessation behavior has been confirmed, the role of smoking-related beliefs and cultural values in this relationship for young smokers is little known. The present study aimed to examine whether the relationship between smoker identity and smoking cessation behavior would be mediated by smoking rationalization beliefs and/or intention to quit smoking and whether the effect of smoker identity on smoking cessation behavior was moderated by cultural value of guanxi. A total of 708 young smokers participated in the study and completed questionnaires that measured smoker identity, smoking rationalization beliefs, intention to quit smoking, smoking cessation behavior and cultural value of guanxi. The results showed: (1) the relationship between smoker identity and smoking cessation behavior was negative and significant. (2) The mediating effect of intention to quit smoking and the serial mediating effect of "smoking rationalization beliefs → intention to quit smoking" on the relationship between smoker identity and smoking cessation behavior was significant. (3) Both the serial mediating effect of "smoking rationalization beliefs → intention to quit smoking" and the direct effect of smoker identity on smoking cessation behavior were moderated by cultural value of guanxi. The current findings increased understanding of psychosocial mechanisms underlying the hindering effect of smoker identity on smoking cessation and suggested the role of smoking rationalization beliefs and cultural value of guanxi should be considered in smoking cessation interventions for young smokers.
abstract_id: PUBMED:22201152
Parental smoking cessation to protect young children: a systematic review and meta-analysis. Background: Young children can be protected from much of the harm from tobacco smoke exposure if their parents quit smoking. Some researchers encourage parents to quit for their children's benefit, but the evidence for effectiveness of such approaches is mixed.
Objective: To perform a systematic review and meta-analysis to quantify the effects of interventions that encourage parental cessation.
Methods: We searched PubMed, the Cochrane Library, Web of Science, and PsycINFO. Controlled trials published before April 2011 that targeted smoking parents of infants or young children, encouraged parents to quit smoking for their children's benefit, and measured parental quit rates were included. Study quality was assessed. Relative risks and risk differences were calculated by using the DerSimonian and Laird random-effects model.
Results: Eighteen trials were included. Interventions took place in hospitals, pediatric clinical settings, well-baby clinics, and family homes. Quit rates averaged 23.1% in the intervention group and 18.4% in the control group. The interventions successfully increased the parental quit rate. Subgroups with significant intervention benefits were children aged 4 to 17 years, interventions whose primary goal was cessation, interventions that offered medications, and interventions with high follow-up rates (>80%).
Conclusions: Interventions to achieve cessation among parents, for the sake of the children, provide a worthwhile addition to the arsenal of cessation approaches, and can help protect vulnerable children from harm due to tobacco smoke exposure. However, most parents do not quit, and additional strategies to protect children are needed.
abstract_id: PUBMED:37247291
Smoking Cessation, Quit Attempts and Predictive Factors among Vietnamese Adults in 2020. Objective: This study aims to describe the updated smoking cessation and quit attempt rates and associated factors among Vietnamese adults in 2020.
Methods: Data on tobacco use among adults in Vietnam in 2020 was derived from the Provincial Global Adult Tobacco Survey. The participants in the study were people aged 15 and older. A total of 81,600 people were surveyed across 34 provinces and cities. Multi-level logistic regression was used to examine the associations between individual and province-level factors on smoking cessation and quit attempts.
Results: The smoking cessation and quit attempt rates varied significantly across the 34 provinces. The average rates of people who quit smoking and attempted to quit were 6.3% and 37.2%, respectively. The factors associated with smoking cessation were sex, age group, region, education level, occupation, marital status, and perception of the harmful effects of smoking. Attempts to quit were significantly associated with sex, education level, marital status, perception of the harmful effects of smoking, and visiting health facilities in the past 12 months.
Conclusions: These results may be useful in formulating future smoking cessation policies and identifying priority target groups for future interventions. However, more longitudinal and follow-up studies are needed to prove a causal relationship between these factors and future smoking cessation behaviors.
abstract_id: PUBMED:34501966
Feasibility of a Smoking Cessation Smartphone App (Quit with US) for Young Adult Smokers: A Single Arm, Pre-Post Study. While smartphone applications (apps) have been shown to enhance success with smoking cessation, no study has been conducted among young adult smokers aged 18-24 years in Thailand. Quit with US was developed based on the 5 A's model and self-efficacy theory. This single arm, pre-post study was conducted aiming to assess results after using Quit with US for 4 weeks. The primary outcome was a biochemically verified 7-day point prevalence of smoking abstinence. The secondary outcomes included smoking behaviors, knowledge and attitudes toward smoking and smoking cessation, and satisfaction and confidence in the smartphone app. A total number of 19 young adult smokers were included; most participants were males (68.4%) with the mean (SD) age of 20.42 (1.46) years. After 4 weeks of study, the primary outcome demonstrated a smoking cessation rate of 31.6%. All 19 participants expressed better smoking behaviors and better knowledge and attitudes toward smoking and smoking cessation. Further, they were satisfied with the smartphone app design and content and expressed confidence in using it. These findings provided preliminary evidence that Quit with US was found to be a potentially effective smoking cessation smartphone app for young adult smokers.
abstract_id: PUBMED:25222850
Promoting smoking cessation among parents: effects on smoking-related cognitions and smoking initiation in children. Background: Parental smoking is associated with an increased risk of smoking among youth. Epidemiological research has shown that parental smoking cessation can attenuate this risk. This study examined whether telephone counselling for parents and subsequent parental smoking cessation affect smoking-related cognitions and smoking initiation among children of smoking parents.
Methods: Data of a two-arm randomized controlled trial were used in which 512 smoking parents were recruited into cessation support through their children's primary schools. After the baseline assessment, smoking parents were randomly assigned to tailored telephone counselling or a standard self-help brochure. Parental cessation was measured as 6-month prolonged abstinence at the 12-month follow-up. Children's smoking-related cognitions and smoking initiation were examined at 3-month, 12-month, and 30-month follow-up.
Results: No statistical evidence was found that children of parents who received telephone counselling tailored to smoking parents or children of parents who achieved prolonged abstinence differed in smoking-related cognitions (i.e., smoking outcome expectancies, perceived safety of smoking, self-efficacy to refrain from smoking, susceptibility to smoking) or smoking initiation rate on any follow-up assessment.
Conclusions: This study is the first to examine the effects of an evidence-based smoking cessation treatment for parents and treatment-induced parental smoking cessation on cognitive and behavioural outcomes among children. Although descriptive statistics showed lower smoking initiation rates among children of parents who achieved prolonged abstinence, there was no statistical evidence that telephone counselling tailored to parents or treatment-induced parental smoking cessation affects precursors of smoking or smoking initiation among youth.
Answer: Yes, parental smoking cessation does encourage their young adult children to quit smoking. Studies have shown that parents who quit smoking early, when their children were young, have a positive influence on their young adult children's smoking cessation. Specifically, parents who quit smoking by the time their children were in 3rd grade were found to have children with 1.8 times higher odds of quitting smoking for at least 1 month in young adulthood compared to those whose parents did not quit early (PUBMED:15733251). This association was also observed in a 20-year follow-up study, where parents' early smoking cessation (by the time their children were aged 8) was associated with a 1.7 times higher odds of their children quitting by age 28 (PUBMED:19392909). However, there was no significant association found between parents quitting late and their young adult children's smoking cessation (PUBMED:15733251). These findings suggest that parental smoking cessation, particularly when done early, has a long-term influence on their adult children's smoking cessation, and parents who smoke should be encouraged to quit when their children are young (PUBMED:19392909). |
Instruction: Are there bi-directional associations between depressive symptoms and C-reactive protein in mid-life women?
Abstracts:
abstract_id: PUBMED:19683568
Are there bi-directional associations between depressive symptoms and C-reactive protein in mid-life women? Objective: To test whether depressive symptoms are related to subsequent C-reactive protein (CRP) levels and/or whether CRP levels are related to subsequent depressive symptoms in mid-life women.
Methods: Women enrolled in the Study of Women's Health Across the Nation (SWAN) were followed for 7 years and had measures of CES-Depression scores and CRP seven times during the follow-up period. Women were pre- or early peri-menopausal at study entry and were of Caucasian, African American, Hispanic, Japanese, or Chinese race/ethnicity. Analyses were restricted to initially healthy women.
Results: Longitudinal mixed linear regression models adjusting for age, race, site, time between exams, and outcome variable at year X showed that higher CES-D scores predicted higher subsequent CRP levels and vice versa over a 7-year period. Full multivariate models adjusting for body mass index, physical activity, medications, health conditions, and other covariates showed that higher CRP levels at year X predicted higher CES-D scores at year X+1, p=0.03. Higher depressive symptoms predicted higher subsequent CRP levels at marginally significant levels, p=0.10.
Conclusions: Higher CRP levels led to higher subsequent depressive symptoms, albeit the effect was small. The study demonstrates the importance of considering bi-directional relationships for depression and other psychosocial factors and risk for heart disease.
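The "year X predicts year X+1" results above come from lagged models fitted to repeated annual measurements. As a rough illustration of that kind of analysis (not the SWAN authors' exact specification), a lagged mixed model with a random intercept per woman could be sketched as follows; the file name, column names and covariate set are hypothetical.

```python
# Sketch of a lagged repeated-measures model: CRP at year X predicting CES-D at year X+1,
# with a random intercept per participant. File and column names are hypothetical, and the
# covariate set is illustrative rather than the SWAN authors' exact model.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("swan_long.csv")                       # one row per woman per annual visit
df = df.sort_values(["id", "year"])
df["cesd_next"] = df.groupby("id")["cesd"].shift(-1)    # outcome measured at year X+1
lagged = df.dropna(subset=["cesd_next", "crp"])

model = smf.mixedlm("cesd_next ~ crp + cesd + age + bmi", data=lagged, groups=lagged["id"])
result = model.fit()
print(result.summary())

# The reverse direction (CES-D at year X predicting CRP at year X+1) is examined analogously,
# with the roles of the two variables swapped.
```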
abstract_id: PUBMED:24076375
Child abuse is related to inflammation in mid-life women: role of obesity. Objective: Elevated inflammation biomarkers are associated with incident cardiovascular disease. Several studies suggest that childhood abuse may be associated with inflammation later in life. This study examined whether childhood abuse predicted elevated levels of C-reactive protein (CRP) and whether the association was due to body size.
Methods: Participants were 326 (104 Black, 222 White) women from the Pittsburgh site of the Study of Women's Health Across the Nation (SWAN). SWAN included a baseline assessment of pre-menopausal or early peri-menopausal women in mid-life (mean age=45.7), and CRP, depressive symptoms, body mass index (BMI), and other covariates were measured over 7 annual follow-up visits. The Childhood Trauma Questionnaire, a standardized measure that retrospectively assesses abuse and neglect in childhood and adolescence, was administered at year 8 or 9 of follow-up.
Results: Approximately 37% of the participants reported a history of abuse or neglect. Generalized estimating equations showed that sexual and emotional abuse, emotional and physical neglect, and the total number of types of abuse were associated with higher CRP levels over 7 years, adjusting for race, age, education, smoking status, use of hormone therapy, depressive symptoms, occurrence of heart attack or stroke, and medications for hypertension. The coefficients for indirect effects for emotional and sexual abuse, physical neglect, and total number of types of abuse on CRP levels through BMI were significant. A history of emotional abuse and neglect was related to percent change in CRP over the 7 years but not through percent change in BMI over the 7 years.
Conclusion: A history of childhood abuse and neglect retrospectively reported is related to overall elevated inflammation in mid-life women, perhaps through obesity. A history of some types of abuse and neglect (emotional) may be related to change in inflammation, independent of simultaneously measured change in BMI.
abstract_id: PUBMED:32960533
Bidirectional longitudinal associations between loneliness and pain, and the role of inflammation. Pain and loneliness are consistently associated, but the direction of the relationship is uncertain. We assessed bidirectional associations over a 4-year period in a sample of 4906 men and women (mean age 65.1 ± 8.72 years) who were participants in the English Longitudinal Study of Ageing. The role of inflammation in these links was also investigated. Pain was defined by reports of being often troubled by pain at a moderate or severe intensity, whereas loneliness was measured using the shortened UCLA scale. Age, sex, ethnicity, educational attainment, wealth as a marker of socioeconomic resources, marital status, physical activity, and depressive symptoms were included as covariates. We found that baseline loneliness was associated with pain 4 years later after adjusting for baseline pain and other covariates (odds ratio [OR] = 1.25, 95% confidence interval [CI] 1.06-1.47, P = 0.007). Similarly, baseline pain independently predicted loneliness 4 years later (OR = 1.34, 95% CI 1.14-1.58, P = 0.001). Associations remained significant after additional adjustment for baseline mobility impairment. Likelihood of pain on follow-up was heightened when baseline loneliness was accompanied by elevated C-reactive protein concentration (OR = 1.50, 95% CI 1.13-2.00, P = 0.006), whereas inflammation did not predict future loneliness or contribute to the association between baseline pain and future loneliness. Both pain and loneliness are distressing experiences that impact well-being and quality of life. We conclude that there were bidirectional longitudinal relationships between pain and loneliness in this representative sample of older men and women, but that the mechanisms underlying these processes may differ.
abstract_id: PUBMED:26349616
Prospective data from the Women's Health Initiative on depressive symptoms, stress, and inflammation. This study examined the longitudinal association of depressive symptoms and stressful life events with inflammation in the Women's Health Initiative. Women aged 50 years and older (N = 7477) completed questionnaires assessing depressive symptoms and stressful life events at baseline and 15 years later. Serum measures of C-reactive protein were collected at both assessments. In bivariate analyses, C-reactive protein predicted 15-year depressive symptoms and stressful life events (ps < .03) and baseline depressive symptoms and stressful life events predicted later C-reactive protein (ps < .03). These longitudinal relationships were not maintained in multivariate adjusted analyses. Combined with previous research, this suggests the relationship between depression, stressful life events and inflammation attenuates with time.
abstract_id: PUBMED:25019974
Sex-specific associations between Neutrophil Gelatinase-Associated Lipocalin (NGAL) and cognitive domains in late-life depression. Background: Although it is well established that late-life depression is associated with both systemic low-graded inflammation and cognitive impairment, the relation between inflammation and cognition in depressed older persons is still equivocal. The objective of this study is to examine the association between plasma Neutrophil Gelatinase-Associated Lipocalin (NGAL) concentrations and cognitive functioning in late-life depression, including the potentially moderating role of sex.
Methods: A total of 369 depressed older persons (≥60 years) from the Netherlands Study of Depression in Older Persons (NESDO) were included. Four cognitive domains, i.e. verbal memory, processing speed, interference control and attention, were assessed with three cognitive tests (Stroop test, WAIS Digit Span test, and Rey's verbal learning test). Multiple linear regression analyses were applied with the four cognitive domains as dependent variables, adjusted for confounders.
Results: The association between NGAL levels and specific cognitive domains were sex-specific. In women, higher NGAL levels were associated with impaired verbal memory and lower processing speed. In men, higher NGAL levels were associated with worse interference control. Higher NGAL levels were not associated with attention. No sex-specific associations of either high sensitivity C-reactive protein (hsCRP) or interleukin-6 (IL-6) with cognitive functioning were found.
Conclusion: This study shows sex-specific association of NGAL with cognitive functioning in late-life depression.
abstract_id: PUBMED:35316423
C-Reactive protein concentrations in reproductive-aged women with major mood disorders. To examine associations between high sensitivity C-reactive protein (CRP) concentrations and depressive symptoms in reproductive-aged women with mood disorders. Women (N = 86) with major depressive or bipolar disorder in a specialized mood disorders program provided plasma samples which were analyzed for CRP concentrations and categorized by tertiles (T1, low; T2, middle; T3, high). Depressive symptoms were assessed with the Inventory of Depressive Symptoms. We hypothesized that CRP concentrations would be significantly associated with the following: (1) depressive symptoms, (2) pregnancy, (3) body mass index, and (4) counts of white blood cells and absolute neutrophils and percentage of segmented neutrophils. The distribution of CRP concentrations was highly skewed, with a median of 2.45 mg/L and an interquartile range of 0.90-8.17 mg/L. Elevated plasma levels of CRP were not associated with depressive symptoms, which did not differ by tertile group either before or after adjusting for BMI, pregnancy status, and their interactions. Women in T3 had 5 times greater odds of pregnancy compared to women in T1 (p = .021). However, women in T2 had 11% greater BMI on average (p = 0.023), and women in T3 had 47% greater BMI compared to those in T1 (p < 0.001). Women in T3 had higher mean white blood cell counts than those in T1 and T2, the percentage of neutrophils was higher in T2 and T3 compared to T1, and women in T3 had higher absolute neutrophil counts compared to T1. CRP concentrations varied widely and were significantly elevated in reproductive-aged women with high BMI and current pregnancy, but not with depressive symptoms in this sample of depressed women.
abstract_id: PUBMED:30154326
Allostatic Load Biomarker Associations with Depressive Symptoms Vary among US Black and White Women and Men. The prevalence and severity of depression differ in women and men and across racial groups. Psychosocial factors such as chronic stress have been proposed as contributors, but causes of this variation are not fully understood. Allostatic load, a measure of the physiological burden of chronic stress, is known to be associated with depression. Using data from the National Health and Nutrition Examination Survey 2005-2010, we examined the associations of nine allostatic load biomarkers with depression among US black and white adults aged 18-64 years (n = 6431). Depressive symptoms were assessed using the Patient Health Questionnaire-9; logistic models estimated adjusted odds of depression based on allostatic load biomarkers. High-risk levels of C-reactive protein were significantly associated with increased odds of depression among white women (adjusted odds ratio (aOR) = 1.7, 95% CI: 1.1-2.5) and men (aOR = 1.8, 95% CI: 1.1-2.8) but not black women (aOR = 0.8, 95% CI: 0.6-1.1) or men (aOR = 0.9, 95% CI: 0.5-1.5). Among black men, hypertension (aOR = 1.7, 95% CI: 1.1-2.7) and adverse serum albumin levels (aOR = 1.7, 95% CI: 1.0-2.9) predicted depression, while high total cholesterol was associated with depression among black women (aOR = 1.6, 95% CI: 1.0-2.7). The associations between allostatic load biomarkers and depression vary with gendered race, suggesting that, despite consistent symptomatology, underlying disease mechanisms may differ between these groups.
abstract_id: PUBMED:32289550
Depression, changes in peripheral blood cell count, and changes in selected biochemical parameters related to lead concentration in whole blood (Pb-B) of women in the menopausal period. Aim: To assess the severity of depression, vasomotor symptoms, changes in peripheral blood cell count, and selected biochemical parameters in relation to the concentration of lead in the whole blood of women in the perimenopausal period.
Methods: The study sample consisted of 233 women from the general population of the West Pomeranian Province (Poland) aged 44-65 years. The intensity of menopausal symptoms was examined using the Blatt-Kupperman Index, and the severity of depression using the Beck Depression Inventory. The following biochemical data were evaluated: concentrations of glucose, triglycerides, HDL, C-reactive protein, glycated haemoglobin, cortisol, insulin, blood cell count, and lead concentration in whole blood (Pb-B).
Results: A whole blood Pb concentration below 5 μg/dl was found in 55 subjects (23.61%), it ranged from 5 to 10 μg/dl in 142 women (60.94%), and it was higher than 10 μg/dl in 36 women (15.45%). There was a strong positive correlation between Pb concentration in the blood of the examined women and the severity of depressive symptoms (Rs=+0.60, p = 0.001). The lowest mean values for total leukocytes (5.07 ± 1.22 thousand/μl) and neutrophils (2.76 ± 0.86 thousand/μl) were found in women with Pb concentrations above 10 μg/dl (p < 0.05). There was a significant negative correlation between the number of total leukocytes (r=-0.45, p = 0.002) and neutrophils (r=-0.50, p = 0.001) and blood Pb concentration. Analysis showed statistically significant differences in glucose concentration (p < 0.05) between groups. Blood glucose was higher in women with Pb-B <5 and between 5-10 μg/dl than in women with Pb-B >10 μg/dl.
Conclusion: Exposure to Pb may be a factor playing a significant role in the development of depressive symptoms in menopausal women. It may also be associated with glucose metabolism disorders and immunosuppression in women during this period of life.
abstract_id: PUBMED:26213963
Higher Body Iron Is Associated with Greater Depression Symptoms among Young Adult Men but not Women: Observational Data from the Daily Life Study. Studies investigating possible associations between iron status and mood or depressive symptoms have reported inconsistent results. However, they have neither used body iron to measure iron status nor measured mood using daily measures. We investigated whether body iron was associated with depressive symptoms, daily mood, daily tiredness, difficulty concentrating, and stress in young adult women and men. Young adult (17-25 years) women (n = 562) and men (n = 323) completed the Center for Epidemiologic Studies Depression Scale, then reported negative and positive mood, and other states daily for 13 days. Non-fasting venous blood was collected to determine hemoglobin, serum ferritin and soluble transferrin receptor (to calculate body iron), C-reactive protein, and alpha-1-acid glycoprotein concentration. Regression models tested linear associations between body iron and the outcome variables, controlling for possible confounders. No associations were found between body iron and the outcome variables in women. However, higher body iron was associated with more depressive symptoms in men (3.4% more per mg/kg of body iron; 95% confidence interval (CI): 0.8%, 5.9%). In young adult women, body iron is unlikely to be associated with significant deficits in mood or depressive symptoms. However, higher body iron may be associated with more depressive symptoms in young adult men.
abstract_id: PUBMED:17515020
Infection, depression, and immunity in women after coronary artery bypass: a pilot study of cognitive behavioral therapy. Context: Depression is common after coronary artery bypass graft (CABG) surgery, but little is known about its effect on post-CABG inflammation or infection or about the most effective treatment for post-CABG depression.
Objectives: (1) To determine if post-CABG depression is associated with increased infectious illness and (2) to test effects of cognitive behavioral therapy (CBT) on depressive symptoms, inflammatory biomarkers, and post-CABG infections in depressed post-CABG women.
Design: Randomized, controlled trial.
Setting: Two urban tertiary care centers.
Patients: Fifteen clinically depressed women in the first month after CABG, along with a comparison group of 37 non-depressed post-CABG women, were studied. Inclusion criteria were: age ≤75 years, English-speaking, undergoing first-time CABG, available for 6 months of follow-up, and without malignancy or autoimmune disorders.
Intervention: Eight weeks of individual home-based CBT.
Main Outcome Measures: (1) Depressive symptoms measured by the Beck Depression Inventory, (2) natural killer cell cytotoxicity (NKCC) measured by 51Cr-release assay, (3) infectious illness episodes measured by the Modified Health Review, and (4) interleukin (IL)-6 and C-reactive protein (CRP) measured by enzyme immunosorbent assay.
Results: Clinically depressed post-CABG women exhibited decreased NKCC and a higher incidence of in-hospital fevers and infectious illness in the first 6 months after CABG. Among depressed women, CBT yielded moderate to large effects for improved NKCC (D=0.67) and decreased IL-6 (D=0.61), CRP (D=0.85), and postoperative infectious illnesses (D=0.93). CBT holds promise for improving depression and immunity and reducing infection and inflammation after CABG.
Answer: Yes, there are bi-directional associations between depressive symptoms and C-reactive protein (CRP) in mid-life women. A study that followed women enrolled in the Study of Women's Health Across the Nation (SWAN) for 7 years found that higher CES-D scores, which measure depressive symptoms, predicted higher subsequent CRP levels, and higher CRP levels at year X predicted higher CES-D scores at year X+1 (PUBMED:19683568). This suggests that not only can depressive symptoms lead to increased levels of CRP, but elevated CRP can also predict future depressive symptoms, indicating a bi-directional relationship. |
Instruction: Do parotid duct abnormalities occur in patients with chronic alcoholic pancreatitis?
Abstracts:
abstract_id: PUBMED:9468241
Do parotid duct abnormalities occur in patients with chronic alcoholic pancreatitis? Objective: Several studies have suggested that ethanol affects the pancreas and parotid gland. We performed a prospective study to determine whether ductal lesions of ethanol-induced chronic pancreatitis occur in the parotid.
Methods: Parotid sialograms were performed in 11 alcoholic patients who had endoscopic retrograde pancreatograms. Sialograms and pancreatograms were examined in all subjects for ductal abnormalities.
Results: Seven of nine patients (77.8%) with ductal lesions of the pancreas had coexistent ductal abnormalities of the parotid gland (Kendall's tau = 0.578, p = 0.035).
Conclusions: Chronic ethanol intake induces ductal alterations in the parotid gland similar to those seen in the pancreas. These results suggest a common histopathological effect of alcohol in the ductal system of the parotid gland and pancreas and raise the possibility that the parotid sialogram could be useful as an adjunct in the diagnosis of ethanol-induced chronic pancreatitis.
abstract_id: PUBMED:28391078
Calcifications of the parotid space. A review Introduction: Parotid lithiasis is the main cause of calcifications in the parotid space. However, there are many other less known causes. The aim of our study was to point out the non-lithiasic causes of calcifications in the parotid space.
Material And Methods: We conducted an exhaustive review of the literature by means of PubMed, using the keywords "parotid" and "calcification" and limiting our analysis to original articles in humans published in English or French. Articles reporting on microscopic calcifications and those not dealing with parotid calcifications were excluded.
Results: Twenty articles met the inclusion criteria. Tumoral and non-tumoral local causes and systemic causes of parotid calcification were found. Their modes of presentation were variable. The main tumoral local causes were pleomorphic adenomas, salivary duct carcinomas and adenocarcinomas. The main non-tumoral local causes included vascular malformations and calcified parotid lymph nodes. The main systemic causes were chronic kidney diseases, HIV infection, chronic alcoholism, elevated levels of alkaline phosphatase and auto-immune diseases.
Discussion: Eighteen different etiologies of parotid space calcifications could be identified. First-line exploration of these lesions relies mainly on conventional radiography and ultrasound examination, which are easily available. CT scan remains the reference examination.
abstract_id: PUBMED:514065
Common bile duct stenosis from chronic pancreatitis: a clinical and pathologic spectrum. The chronic pancreatitis population of Wadsworth VA Hospital over the past five years was screened for two-fold or greater alkaline phosphatase elevation at any time during their course, as a marker for either distal common bile duct stenosis or other hepatobiliary disease. Forty-seven of 207 patients screened met this criterion and are reviewed in detail. Of the 16 patients with persistent alkaline phosphatase elevation (group B), 15 had proven common bile duct stenosis, demonstrating a clear pathophysiologic role of partial bile duct obstruction in their liver disease. Three had developed secondary biliary cirrhosis, making this entity the commonest cause of secondary biliary cirrhosis at our hospital. Of the remaining 31 patients with transient alkaline phosphatase elevation (group A), only 4 had proven duct abnormalities which may resolve during recovery. Alcoholic liver disease was demonstrated with normal extrahepatic ducts in the remainder of the group A patients who were adequately studied. Persistent greater than two-fold alkaline phosphatase elevation in pancreatitis thus represents a reliable marker of distal common bile duct stenosis, whose sequelae may include cholangitis and secondary biliary cirrhosis and which requires operative intervention in these cases. When a persistent alkaline phosphatase elevation greater than two-fold is encountered in a chronic pancreatitis patient, adequate cholangiography and liver histology are both necessary to confirm and grade this frequent and treatable complication.
abstract_id: PUBMED:7286890
Liver histopathology in chronic common bile duct stenosis due to chronic alcoholic pancreatitis. The liver histopathology in 40 liver biopsies from 24 patients with verified chronic common bile duct stenosis due to chronic alcoholic pancreatitis has been reviewed code-blinded. This represents an 8% prevalence of this complication in approximately 300 patients with alcoholic pancreatitis screened biochemically for alkaline phosphatase greater than two-fold for less than 1 month. The majority were anicteric with no symptoms other than from acute exacerbations of chronic pancreatitis. Biliary obstructive liver histopathology of varying severity was diagnosed in 19 patients (79%), seven of whom (29%) had secondary biliary cirrhosis. In 3 of these 7 cases, progression to biliary cirrhosis was documented with sequential biopsies. The remainder demonstrated this histologic picture when first diagnosed, supporting the insidious nature of this process. Stromal edema of the portal tracts, increased portal connective tissue, and marked proliferation of interlobular bile ducts and ductules were the most striking histologic features. Histologic cholangitis, although frequent, was generally mild or absent, reflecting the incomplete nature of the duct obstruction. Features of alcoholic liver disease were observed in only two cases. The results indicate that (1) chronic alcoholic pancreatitis with incomplete duct obstruction frequently causes secondary biliary cirrhosis, (2) significant alcoholic liver disease very infrequently coexists with persistent common bile duct stricture from alcoholic pancreatitis, and (3) surgical biliary decompression should be considered in any patient with documented persistent common bile duct stenosis from alcoholic pancreatitis.
abstract_id: PUBMED:1708258
Functional and structural adaptation of the parotid gland to medium-term chronic ethanol exposure in the rat. Male rats were maintained on a regimen of twice daily intragastric administration of ethanol or a calorifically equivalent sucrose solution for thirty days. A second control group received no intragastric solution and all groups received chow and water ad libitum. Parotid saliva elicited by pilocarpine was collected by unilateral duct cannulation. The parotid flow rate over the initial post-stimulatory five minute period was raised by 44% in ethanol-dosed rats and the salivary sodium concentration was also raised, in line with higher flow rate. There were no histopathological changes related to ethanol or sucrose dosing, but stereological analysis showed a 64% increase in the proportional volume of intralobular vascular tissue in ethanol-dosed rats. These quantified histological findings suggest that parotid intralobular haemodynamics may be altered after chronic ethanol-dosing and this may contribute to the hypersecretory response exhibited by the ethanol-dosed rats.
abstract_id: PUBMED:11584367
The risk of liver and bile duct cancer in patients with chronic viral hepatitis, alcoholism, or cirrhosis. No prospective study has analyzed simultaneously chronic viral hepatitis and alcoholism as risk factors for liver carcinogenesis, while taking into consideration the role of cirrhosis. Nor has the risk for hepatocellular carcinoma among patients with chronic viral hepatitis been prospectively evaluated in a low-risk Western population. Last, the relationship between hepatocellular carcinoma risk factors and bile duct cancer remains to be clarified. We analyzed prospectively the risk for primary liver and extrahepatic biliary tract cancer among 186,395 patients hospitalized with either chronic viral hepatitis, alcoholism, cirrhosis, or any combination of these conditions through linkages between national Swedish registers. Compared with the general population, the relative risk of hepatocellular carcinoma was 34.4 for chronic viral hepatitis alone, 2.4 for alcoholism alone, and 40.7 for cirrhosis alone. Among patients with combinations of these risk conditions, the relative risk of hepatocellular carcinoma was 27.3 for chronic viral hepatitis and alcoholism, 118.5 for chronic viral hepatitis and cirrhosis, 22.4 for alcoholism and cirrhosis, and 171.4 for all 3 conditions. We found limited evidence for an excess risk of intrahepatic, but not for extrahepatic, biliary duct cancer. Cirrhosis amplifies the risk of hepatocellular carcinoma among patients with chronic viral hepatitis, but it is not a prerequisite for liver carcinogenesis. In contrast, cirrhosis may be a necessary intermediate for the development of hepatocellular carcinoma among alcoholics.
abstract_id: PUBMED:15239276
Clinical features of patients with chronic pancreatitis complicated by bile duct strictures. Background/aims: Distal bile duct stenosis is relatively rare in patients with non-alcoholic chronic pancreatitis.
Methodology: The clinical features of eight patients who had chronic pancreatitis complicated by bile duct strictures who underwent surgical treatments were reviewed.
Results: Ages ranged from 38 to 80 years, with a mean of 53.4 years. All but one patient were male. Six patients had moderate or slight epigastric pain. Five patients had obstructive jaundice and underwent biliary drainage. All patients had liver dysfunction due to biliary obstruction. Although four of the eight patients were heavy or moderate drinkers, none of the patients had a history of chronic pancreatitis. Stricture shapes of the common bile ducts were smooth and tapering in five patients, funnel-shaped in two, and rat-tail in one. Four patients underwent a pancreatoduodenectomy and one patient underwent a pylorus-preserving pancreatoduodenectomy for clinically suspected pancreatic malignancy that was later proven histopathologically to be chronic pancreatitis. The other three patients underwent a choledochoduodenostomy. There were no postoperative complications or deaths. During the follow-up period, all patients were asymptomatic.
Conclusions: In conclusion, bile duct stricture potentially occurs not only in patients with alcoholic chronic pancreatitis but also in patients with nonalcoholic chronic pancreatitis. Furthermore, in some cases, it is impossible to differentiate chronic pancreatitis from pancreatic or periampullary malignancy.
abstract_id: PUBMED:2818030
The spectrum and natural history of common bile duct stenosis in chronic alcohol-induced pancreatitis. Sixty patients with chronic alcohol-induced pancreatitis with endoscopic retrograde cholangiopancreatography evidence of common bile duct stenosis were studied to determine the clinical spectrum and natural history of this complication, as well as the indications for biliary bypass. In 17% of patients, common bile duct stenosis (CBDS) was an incidental finding at ERCP, while in the remaining cases pain and jaundice were the predominant symptoms in 35% and 48%, respectively. Biliary drainage was performed in 38% of patients for persistent or recurrent jaundice, cholangitis, and while undergoing pancreatic duct or cyst drainage procedures for pain. The benign nature of CBDS in chronic alcohol-induced pancreatitis (CAIP) in patients without persistent jaundice is emphasized. In particular, no histologically proved cases of secondary biliary cirrhosis were noted. The majority of patients with CBDS due to CAIP may be safely managed without biliary bypass but require close follow-up.
abstract_id: PUBMED:30136791
REACTION OF PAROTID GLAND MAST CELLS TO CHRONIC ALCOHOL INTOXICATION The goal of this study was to examine the localization and the structural and functional features of mast cells (MC) in the parotid gland in chronic alcohol intoxication. The study was conducted on 15 adult outbred albino male rats receiving 20% ethanol solution as the sole source of drinking fluid for 2 months. The control group included 10 intact animals. Structural changes in parotid salivary glands were studied in paraffin sections stained with hematoxylin–eosin. MC were demonstrated in cryostat sections stained by Unna's method; their topography and degranulation were evaluated, and their number per field of vision was counted. Serotonin content was assessed quantitatively by using fluorescent microscopy and cytospectrophotometry. In chronic alcohol intoxication, marked variability was demonstrated in the shape of the secretory portions and the size of their glandular cells, which often showed unstained vacuoles. Interlobular ducts were unevenly dilated, and their cells had variable height. The number of MC in the connective tissue layer around the interlobular excretory ducts and blood vessels was increased, and most of them were in a state of degranulation. However, the content of serotonin in these areas was not changed significantly compared with that in the control group, presumably because serotonin released from MC during degranulation was actively interacting with numerous fibers and terminals of the autonomic nervous system located here and was quickly trapped by them. Within the lobules, the number of MC was increased to a lesser extent than in the area of the interlobular ducts, but 80% of them were in a state of pronounced degranulation, often with complete disintegration of the cytoplasm. These cells apparently served as the sources of serotonin, the concentration of which increased significantly in the area of the secretory portions. It is suggested that the increased concentration of serotonin in the area of the secretory portions indicates that, under the influence of alcohol intoxication, additional paracrine regulatory mechanisms were activated in the gland, which contributed to its functional activity, aimed at accelerating the excretion of ethanol and its toxic metabolic products.
abstract_id: PUBMED:11172178
Regression of liver fibrosis after biliary drainage in patients with chronic pancreatitis and stenosis of the common bile duct. Background: Chronic obstruction of the common bile duct may cause hepatic fibrosis and secondary biliary cirrhosis.
Methods: We studied liver-biopsy specimens from 11 patients with chronic stenosis of the common bile duct due to chronic pancreatitis; all the patients had undergone liver biopsy before or at the time of surgical biliary decompression and underwent a subsequent liver biopsy for various clinical reasons. The patients were followed as part of a prospective study of 501 patients who had been treated for chronic pancreatitis. Two pathologists, who were unaware of the sequence of specimens, graded fibrosis on a scale of 0 (none) to 3 (cirrhosis).
Results: The 11 patients were all men. Chronic pancreatitis was due to alcohol abuse in 10 of the men; 1 had idiopathic disease. The median age at diagnosis was 38 years. The median interval between the first and second liver biopsies was 2.5 years (range, 0.3 to 9.0). The two patients who had restenosis of the biliary anastomosis were excluded from the analysis of fibrosis. In the group of nine patients without restenosis, the second specimen showed significant improvement in fibrosis (P=0.01). The fibrosis improved by two grades in two patients and by one grade in four patients; in three patients, the grade did not change. The pathologists agreed on the grading of specimens from 10 of the 11 patients.
Conclusions: In patients with chronic pancreatitis and stenosis of the common bile duct, liver fibrosis may regress after biliary drainage.
Answer: Yes, parotid duct abnormalities do occur in patients with chronic alcoholic pancreatitis. A prospective study found that 77.8% of patients with ductal lesions of the pancreas also had coexistent ductal abnormalities of the parotid gland. This suggests a common histopathological effect of alcohol on the ductal system of both the parotid gland and pancreas, indicating that chronic ethanol intake induces ductal alterations in the parotid gland similar to those seen in the pancreas (PUBMED:9468241). |
Instruction: Is there a difference in cognitive development between preschool singletons and twins born after intracytoplasmic sperm injection or in vitro fertilization?
Abstracts:
abstract_id: PUBMED:24390744
Is there a difference in cognitive development between preschool singletons and twins born after intracytoplasmic sperm injection or in vitro fertilization? Objective: To explore whether there exist differences in cognitive development between singletons and twins born after in vitro fertilization (IVF) or intracytoplasmic sperm injection (ICSI).
Methods: A total of 566 children were recruited for the study, including 388 children (singletons, n=175; twins, n=213) born after IVF and 178 children (singletons, n=87; twins, n=91) born after ICSI. The cognitive development was assessed using the Chinese-Wechsler Intelligence Scale for Children (C-WISC).
Results: For all pre-term offspring, all the intelligence quotient (IQ) items between singletons and twins showed no significant differences, regardless of whether they were born after IVF or ICSI. There was a significant difference in the cognitive development of IVF-conceived full-term singletons and twins. The twins born after IVF obtained significantly lower scores than the singletons in verbal IQ (containing information, picture & vocabulary, arithmetic, picture completion, comprehension, and language), performance IQ (containing maze, visual analysis, object assembly, and performance), and full scale IQ (P<0.05). The cognitive development of full-term singletons and twins born after ICSI did not show any significant differences. There was no significant difference between the parents of the singletons and twins in the characteristics for which data were collected, including the age of the mothers, the current employment status, the educational backgrounds, and areas of residence. There were also no consistent differences in the duration of pregnancy, sex composition of the children, age, and height between singletons and twins at the time of our study, although there were significant differences between the two groups in the sex composition of the full-term children born after ICSI (P<0.05).
Conclusions: Compared to the full-term singletons born after IVF, the full-term twins have lower cognitive development. The cognitive development of full-term singletons and twins born after ICSI did not show any significant differences. For all pre-term offspring, singletons and twins born after IVF or ICSI, the results of the cognitive development showed no significant differences.
abstract_id: PUBMED:17980875
Cognitive development of singletons born after intracytoplasmic sperm injection compared with in vitro fertilization and natural conception. Objective: To investigate cognitive development of singletons conceived by intracytoplasmic sperm injection (ICSI) at 5-8 years of age.
Design: Follow-up study.
Setting: University medical center, assessments between March 2004 and May 2005.
Patient(s): Singletons born between June 1996 and December 1999 after ICSI at the Leiden University Medical Center were compared with matched singletons born after IVF and natural conception (NC).
Intervention(s): Mode of conception.
Main Outcome Measure(s): Intelligence quotient (IQ) was measured with the Revised Amsterdam Child Intelligence Test (short form). The investigators were blinded to conception mode.
Result(s): Singletons conceived by ICSI (n = 83) achieved lower IQ scores than IVF singletons (n = 83) (adjusted mean difference IQ: 3.6 [95% confidence interval (CI) -0.8, 8.0]). After categorizing IQ outcomes (<85, 85-115, >115), no significant difference in the distribution of IQ was found. Singletons conceived by ICSI (n = 86) achieved lower IQ scores than NC singletons (n = 85); the adjusted mean difference varied between 5 and 7 points (5.6 [95% CI 0.9, 10.3]; 7.1 [95% CI 1.7, 12.5]) depending on the covariates included in the model. Adjustment for prematurity did not change the results. Percentages in IQ categories <85, 85-115, and >115 were 12%, 64%, and 24% for ICSI and 6%, 54%, and 40% for NC, respectively.
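For readers unfamiliar with how an "adjusted mean difference" and its confidence interval are typically derived, the sketch below fits a linear model with a conception-mode indicator plus adjustment covariates; the coefficient on the indicator is the adjusted difference. All data and column names are hypothetical and do not reproduce the study's analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: IQ, conception mode (1 = ICSI, 0 = comparison group),
# and two covariates used for adjustment.
df = pd.DataFrame({
    "iq": [96, 104, 88, 110, 101, 93, 107, 99, 85, 112],
    "icsi": [1, 0, 1, 0, 1, 1, 0, 0, 1, 0],
    "maternal_age": [31, 29, 35, 33, 30, 36, 28, 32, 34, 27],
    "maternal_educ": [12, 16, 10, 16, 12, 10, 16, 14, 12, 16],
})

fit = smf.ols("iq ~ icsi + maternal_age + maternal_educ", data=df).fit()
print(fit.params["icsi"])          # adjusted mean IQ difference (ICSI vs comparison)
print(fit.conf_int().loc["icsi"])  # its 95% confidence interval
```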
Conclusion(s): In the relatively limited sample investigated, cognitive development among ICSI singletons was lower than among IVF and NC singletons. Infertility factors or unmeasured confounders may play a role.
abstract_id: PUBMED:19036784
Cognitive development of singletons conceived by intracytoplasmic sperm injection or in vitro fertilization at age 5 and 10 years. Objective: To investigate the cognitive functioning of low-risk singletons born after intracytoplasmic sperm injection (ICSI) or in vitro fertilization (IVF) at the age of 5 or 10 years.
Methods: Sixty-nine children (35 ICSI, 34 IVF) participated voluntarily in the study that had been approved by the local IRB. Their intellectual functioning was examined by the Kaufmann Assessment Battery for Children.
Results: The IQ of the study group fell in the normal range (mean = 98.2; SD = 12.2). ICSI children (IQ = 94.1, SD = 13.8) had statistically lower intellectual abilities compared to IVF children (IQ = 102.0, SD = 9.1; t = -2.81, p = .005), especially in simultaneous mental processing. 23.5% of ICSI children, but only 2.9% of IVF children (p = .011), had at least borderline delayed cognitive development.
Conclusions: Most artificially conceived singletons show a normal cognitive development, however the method of fertilization seems to have an impact on their IQ. ICSI might be associated with the risk for a slightly delayed cognitive development compared to IVF.
abstract_id: PUBMED:35568190
Embryo morphologic quality in relation to the metabolic and cognitive development of singletons conceived by in vitro fertilization and intracytoplasmic sperm injection: a matched cohort study. Background: Embryos with higher morphologic quality grading may have a greater potential to achieve clinical pregnancy that leads to a live birth regardless of the type of cleavage-stage embryos or blastocysts. Few studies have investigated the impacts of embryo grading on the long-term health of the offspring.
Objective: This pilot study aimed to examine the associations between embryo morphologic quality and the physical, metabolic, and cognitive development of singletons conceived by in vitro fertilization and intracytoplasmic sperm injection at preschool age.
Study Design: This matched cohort study included singletons born to infertile couples who underwent fresh cleavage-stage embryo transfer cycles with good- or poor-quality embryos from 2014 to 2016 at the reproductive center of the Women's Hospital, School of Medicine, Zhejiang University. A total of 144 children, aged 4 to 6 years, participated in the follow-up assessment from 2020 to 2021, and the response rate of poor-quality embryo offspring was 39%. Singletons in the good-quality embryo group were matched with singletons in the poor-quality embryo group at a 2:1 ratio according to the fertilization method and the children's age (±1 year). We measured the offspring's height, weight, body mass index, blood pressure, thyroid hormone levels, and metabolic indicators. Neurodevelopmental assessments were performed using the Chinese version of the Wechsler Preschool and Primary Scale of Intelligence, Fourth Edition, and the Adaptive Behavior Assessment System, Second Edition. We also collected data from the medical records. A linear regression model was used to analyze the association between embryo morphologic quality and offspring health outcomes.
Results: A total of 48 singletons conceived with poor-quality embryo transfer and 96 matched singletons conceived with good-quality embryo transfer were included in the final analysis. Age, sex, height, weight, body mass index, blood pressure, thyroid function, and metabolic indicators were comparable between the 2 groups. After adjustment for potential risk factors by linear regression model 1 and model 2, poor-quality embryo offspring exhibited a tendency toward higher free thyroxine levels than offspring of good-quality embryo transfers (beta, 0.22; 95% confidence interval, 0.09-0.90; beta, 0.22; 95% confidence interval, 0.09-0.91, respectively), but this difference was not clinically significant. Regarding neurodevelopmental assessments, there was no difference in the full-scale intelligence quotient based on the Wechsler Preschool and Primary Scale of Intelligence (109.96±12.42 vs 109.60±14.46; P=.88) or the general adaptive index based on the Adaptive Behavior Assessment System (108.26±11.70 vs 108.08±13.44; P=.94) between the 2 groups. The subindices of the 2 tests were also comparable. These findings remained after linear regression analysis.
Conclusion: At 4 to 6 years of age, singletons born from poor-quality embryo transfers have comparable metabolic and cognitive development as those born from good-quality embryo transfers using fresh cleavage-stage embryos. The results of this pilot study indicate that poor-quality embryos that can survive implantation and end in live birth are likely to have a developmental potential comparable to that of good-quality embryos.
abstract_id: PUBMED:16109250
Neurological late sequelae in twins born after in vitro fertilisation--secondary publication. A national cohort study. Currently 4% of the Danish national birth cohort is born after in vitro fertilisation techniques and almost 40% of these children are twins. In this register-based national cohort study, twins from assisted conception (n = 3393) had a similar risk of neurological sequelae and cerebral palsy as their naturally conceived peers (n = 10,239) and assisted conceived singletons (n = 5130). Further, children born after intracytoplasmic sperm injection (ICSI) had the same risk of neurological sequelae as children born after conventional IVF.
abstract_id: PUBMED:16169409
Behavioral and cognitive development as well as family functioning of twins conceived by assisted reproduction: findings from a large population study. Objective: To establish the nature and extent of difficulties in parenting and child development in families with twins conceived by assisted reproduction.
Design: Comparisons were carried out between a representative sample of 344 families with 2- to 5-year-old twins conceived by IVF/intracytoplasmic sperm injection (ICSI) and a matched comparison group of 344 families with singletons from IVF/ICSI. One twin was randomly selected for data analysis to avoid the bias associated with nonindependence of measures.
Setting: A general population sample of IVF/ICSI families.
Patient(s): Mothers and children.
Intervention(s): Mothers completed a questionnaire booklet.
Main Outcome Measure(s): Standardized measures of the mother's psychological well-being (parenting stress, depression, and quality of marriage) and standardized measures of the child's psychological development (emotional/behavioral problems and cognitive development).
Result(s): Mothers of twins showed significantly higher levels of parenting stress and depression than mothers of singletons and were significantly more likely to find parenting difficult and significantly less likely to obtain pleasure from their child. Regarding the children, there was no difference in the level of emotional or behavioral problems between twins and singletons. However, twins showed significantly lower levels of cognitive functioning.
Conclusion(s): Greater difficulties in parenting and child development were experienced by IVF/ICSI families with twins than by IVF/ICSI families with singletons.
abstract_id: PUBMED:24397702
A comparison of perinatal outcomes in singletons and multiples born after in vitro fertilization or intracytoplasmic sperm injection stratified for neonatal risk criteria. Objective: To compare perinatal singleton and multiple outcomes in a large Dutch in vitro fertilization (IVF)/intracytoplasmic sperm injection (ICSI) population and within risk subgroups. Newborns were assigned to a risk category based on gestational age, birthweight, Apgar score and congenital malformation.
Design: Register-based retrospective cohort study.
Setting: Netherlands Perinatal Registry data.
Sample: A total of 3041 singletons and 1788 multiple children born from IVF/ICSI in 2003-2005.
Methods: Student's t-test or the Mann-Whitney U-test was used to analyze continuous data, and chi-squared analyses were used for categorical data. Multivariate logistic and linear regression analyses were performed to determine whether the risk stratification criteria were associated with neonatal hospital admission and length of stay.
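The sketch below illustrates, with hypothetical data, how the univariate comparisons named here are typically run: a t-test or Mann-Whitney U-test for a continuous outcome such as birthweight, and a chi-squared test for a categorical outcome such as NICU admission, comparing singletons with multiples. It is illustrative only and does not reproduce the registry analysis.

```python
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu, chi2_contingency

rng = np.random.default_rng(0)
# Hypothetical birthweights (g) for IVF/ICSI singletons and multiples.
bw_singletons = rng.normal(3300, 500, 200)
bw_multiples = rng.normal(2500, 550, 200)

print(ttest_ind(bw_singletons, bw_multiples, equal_var=False))  # Student/Welch t-test
print(mannwhitneyu(bw_singletons, bw_multiples))                # non-parametric alternative

# Hypothetical 2x2 table: rows = singletons/multiples, columns = NICU admission yes/no.
nicu_table = [[30, 170],
              [80, 120]]
chi2, p, dof, expected = chi2_contingency(nicu_table)
print(f"chi-squared = {chi2:.2f}, p = {p:.4g}")
```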
Main Outcome Measures: Start of labor, mode of delivery, gestational age, birthweight, 5-min Apgar score, congenital malformation, neonatal hospital admission, neonatal intensive care unit admission and mortality.
Results: IVF/ICSI-conceived multiples had considerably poorer outcomes than singletons in terms of cesarean section rate, preterm birth, birthweight, being small-for-gestational-age, Apgar score, neonatal hospital admission, neonatal intensive care unit admission and neonatal mortality. As opposed to the results found in the total study population and the low-risk and moderate-risk populations, high-risk multiples showed better outcomes than high-risk singletons regarding cesarean section rate, birthweight and Apgar score. All risk stratification variables were associated with being hospitalized after birth. Length of stay was associated with all risk stratification criteria except Apgar score.
Conclusions: Perinatal outcomes in IVF/ICSI-conceived multiples are considerably poorer than in singletons. This finding mainly pertains to low-risk children. High-risk multiples had significantly better perinatal outcomes than high-risk singletons.
abstract_id: PUBMED:21269612
Weight of in vitro fertilization and intracytoplasmic sperm injection singletons in early childhood. Birth weight and longitudinal growth in the first 4 years of life of term singletons conceived with the use of IVF and intracytoplasmic sperm injection (ICSI) were compared with those of naturally conceived singletons. Although IVF and ICSI singletons had a statistically significantly lower birth weight than naturally conceived singletons, the average individual weight curves showed that this difference was lost before the age of 4 years in all subgroups: IVF, ICSI, boys, and girls.
abstract_id: PUBMED:37442533
Risk of congenital malformations in live-born singletons conceived after intracytoplasmic sperm injection: a Nordic study from the CoNARTaS group. Objective: To investigate whether the risk of major congenital malformations is higher in live-born singletons conceived with intracytoplasmic sperm injection (ICSI) compared with in vitro fertilization (IVF)?
Design: Nordic register-based cohort study.
Setting: Cross-linked data from Medical Birth Registers and National ART and Patient Registers in Denmark, Norway and Sweden. Data were included from the year the first child conceived using ICSI was born: Sweden, 1992; Denmark, 1994; and Norway, 1996. Data were included until 2014 for Denmark and 2015 for Norway and Sweden.
Patient(s): All live-born singletons conceived using fresh ICSI (n = 32,484); fresh IVF (n = 47,178); without medical assistance (n = 4,804,844); and cryo-ICSI (n = 7,200) during the study period.
Intervention(s): Different in vitro conception methods, and cryopreservation of embryos.
Main Outcome Measure(s): Risk of major congenital malformations on the basis of International Classification of Diseases codes. The European Concerted Action on Congenital Anomalies and Twins was used to differentiate between major and minor malformations.
Result(s): Among singletons conceived using fresh ICSI, 6.0% had a major malformation, compared with 5.3% of children conceived using fresh IVF; 4.2% of children conceived without medical assistance; and 4.9% of children conceived using cryo-ICSI; adjusted odds ratio (AOR) 1.07 (95% confidence interval [CI] 1.01-1.14) in ICSI vs. IVF; AOR 1.28 (95% CI, 1.23-1.35) in ICSI vs. no medical assistance; and AOR 1.11 (95% CI, 0.99-1.26) in fresh ICSI vs. cryo-ICSI. When malformations were grouped by different organ systems, children conceived using ICSI had a higher risk of respiratory and chromosomal malformations compared with children conceived using IVF, but there were very few cases in each group. When categorizing children conceived using ICSI according to treatment indication (male factor infertility only vs. other indications), we found a higher risk of hypospadias when ICSI was performed because of male factor infertility only (AOR 1.85 [95% CI 1.03-3.32]). The indications for ICSI changed over time, as male factor infertility did not remain the primary indication for ICSI throughout the study period.
Conclusion(s): In this large cohort study, we found the risk of major malformations in live-born singletons to be slightly higher after fresh ICSI compared with fresh IVF. These findings should be considered when choosing the assisted reproductive technology method for couples without male factor infertility.
abstract_id: PUBMED:15488125
Neonatal outcome in a Danish national cohort of 8602 children born after in vitro fertilization or intracytoplasmic sperm injection: the role of twin pregnancy. Background: In Denmark, 4% of all infants are born after in vitro fertilization (IVF) or intracytoplasmic sperm injection (ICSI) and 40% of these children are twins.
Methods: We investigated neonatal outcome in a complete Danish IVF/ICSI birth cohort including 8602 infants born between 1995 and 2000: 3438 twins (40%) and 5164 singletons (60%). Births conceived after IVF or ICSI were identified by record linkage with the Danish IVF Registry and the National Medical Birth Registry. Data on neonatal outcome were collected from the National Patient Registry.
Results: IVF/ICSI twins had a 10-fold increased age- and parity-adjusted risk of delivery before 37 completed weeks [odds ratio (OR) 9.9, 95% confidence interval (95% CI) 8.7-11.3] and a 7.4-fold increased risk of delivery before 32 completed weeks (OR 7.4, 95% CI 5.6-9.8) compared with singletons. Correspondingly, ORs of birthweight <2500 g and birthweight <1500 g in twins were 11.8 (95% CI 10.3-13.6) and 5.4 (95% CI 4.1-7.0), respectively. The stillbirth rate was doubled in twins (13.1/1000) compared with singletons (6.6/1000) (p = 0.002). The risk of cesarean section and of admittance to a neonatal intensive care unit (NICU) was 4.6- and 1.8-fold higher in IVF/ICSI twins than in singletons. The rate of major malformations was 40.4/1000 in twins and 36.8/1000 in singletons (p = 0.4), whereas the total malformation rate (major + minor) was higher in twins (73.7/1000) than in singletons (55.0/1000) (p = 0.001). After exclusion of patent ductus arteriosus (PDA), which is strongly associated with preterm birth, no significant differences in any malformation rates were observed between twins and singletons. Apart from the frequency of ICSI children with hypospadias, which reached a significance level of p = 0.05, malformation rates in ICSI children were similar to those in IVF children.
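As a brief illustration of how the age- and parity-adjusted odds ratios above are typically obtained, the sketch below fits a logistic regression and exponentiates the exposure coefficient and its confidence limits; the simulated data, effect sizes and column names are hypothetical and do not reproduce the registry analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "twin": rng.binomial(1, 0.4, n),          # 1 = twin, 0 = singleton
    "maternal_age": rng.normal(33, 4, n),
    "parity": rng.integers(0, 4, n),
})
# Hypothetical outcome (preterm birth), made more likely for twins.
logit_p = -3.0 + 2.2 * df["twin"] + 0.02 * (df["maternal_age"] - 33)
df["preterm"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

fit = smf.logit("preterm ~ twin + maternal_age + parity", data=df).fit(disp=False)
odds_ratio = np.exp(fit.params["twin"])
ci_low, ci_high = np.exp(fit.conf_int().loc["twin"])
print(f"adjusted OR for twins = {odds_ratio:.1f} (95% CI {ci_low:.1f}-{ci_high:.1f})")
```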
Conclusions: This study indicates that neonatal outcome in IVF/ICSI twins is considerably poorer than in singletons. Thus, the impact is to draw the attention of clinicians to the benefit of elective single embryo transfer (eSET).
Answer: Yes, there is evidence of a difference in cognitive development between preschool singletons and twins born after intracytoplasmic sperm injection (ICSI) or in vitro fertilization (IVF). According to the study with PUBMED:24390744, full-term twins born after IVF showed significantly lower cognitive development scores in verbal IQ, performance IQ, and full-scale IQ compared to full-term IVF singletons. However, no significant differences were found in the cognitive development of full-term singletons and twins born after ICSI. For all pre-term offspring, whether singletons or twins born after IVF or ICSI, there were no significant differences in cognitive development.
Another study with PUBMED:17980875 found that singletons conceived by ICSI achieved lower IQ scores than both IVF singletons and naturally conceived (NC) singletons. This suggests that cognitive development among ICSI singletons was lower than among IVF and NC singletons.
Additionally, the study with PUBMED:19036784 reported that ICSI children had statistically lower intellectual abilities compared to IVF children, particularly in simultaneous mental processing, and a higher percentage of ICSI children had at least borderline delayed cognitive development compared to IVF children.
However, the study with PUBMED:35568190 found no difference in the full-scale intelligence quotient or general adaptive index between singletons born from poor-quality embryo transfers and those born from good-quality embryo transfers, regardless of whether they were conceived by IVF or ICSI.
In contrast, the study with PUBMED:16169409 reported that twins conceived by assisted reproduction showed significantly lower levels of cognitive functioning compared to singletons from IVF/ICSI.
Overall, these findings suggest that there can be differences in cognitive development between preschool singletons and twins born after ICSI or IVF, with some evidence indicating lower cognitive outcomes for twins and ICSI singletons compared to IVF singletons and naturally conceived children. However, the impact of embryo quality on cognitive development appears to be minimal, as indicated by the study with PUBMED:35568190. |
Instruction: Can a multisensory teaching approach impart the necessary knowledge, skills, and confidence in final year medical students to manage epistaxis?
Abstracts:
abstract_id: PUBMED:24479815
Can a multisensory teaching approach impart the necessary knowledge, skills, and confidence in final year medical students to manage epistaxis? Objective: The purpose of this study is to evaluate the efficacy of a multisensory teaching approach in imparting the knowledge, skills, and confidence to manage epistaxis in a cohort of fourth year medical students.
Methods: One hundred and thirty-four fourth-year medical students were recruited into the study from August 2011 to February 2012 in four groups. Students listened to an audio presentation (PODcast) about epistaxis and viewed a video presentation on the technical skills (VODcast). Following this, students completed a 5-minute Individual Readiness Assessment Test (IRAT) to test knowledge accrued from the PODcast and VODcast. Next, students observed a 10-minute expert demonstration of the technical skills on a human cadaver and spent half an hour practicing these techniques on cadaver simulators with expert guidance. The students' confidence was assessed with Confidence Level Questionnaires (CLQs) before and after their laboratory session. The skill level of a subset of students was also assessed with a pre- and post-laboratory Objective Structured Assessment of Technical Skills (OSATS).
Results: Eighty-two percent of the participants achieved a score of at least 80% on the IRAT. The CLQ instrument was validated in the study. There was a statistically significant improvement between the pre- and post-laboratory CLQ scores (p<0.01) and also between pre- and post-laboratory OSATS scores (p<0.01). Qualitative feedback suggested a student preference for this teaching approach.
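The abstract does not state which paired test produced the p-values above, so the sketch below simply illustrates the two usual choices for a pre- versus post-laboratory comparison in which each student contributes both measurements; the scores are hypothetical.

```python
from scipy.stats import ttest_rel, wilcoxon

# Hypothetical pre- and post-laboratory confidence scores for ten students.
pre_clq = [2.1, 2.8, 3.0, 2.5, 2.2, 2.9, 3.1, 2.4, 2.6, 2.0]
post_clq = [3.8, 4.1, 4.0, 3.9, 3.5, 4.2, 4.4, 3.7, 4.0, 3.6]

print(wilcoxon(pre_clq, post_clq))   # non-parametric paired comparison
print(ttest_rel(pre_clq, post_clq))  # parametric paired alternative
```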
Conclusions: This study provides further evidence that a multisensory teaching intervention effectively imparts the necessary knowledge, skill and confidence in fourth year medical students to manage epistaxis.
abstract_id: PUBMED:24761231
Knowledge of first aid skills among students of a medical college in mangalore city of South India. Background: The adequate knowledge required for handling an emergency without hospital setting at the site of the accident or emergency may not be sufficient as most medical schools do not have formal first aid training in the teaching curriculum.
Aim: The aim of this study is to assess the level of knowledge of medical students in providing first aid care.
Subjects And Methods: This cross-sectional study was conducted during May 2011 among 152 medical students. Data were collected using a self-administered questionnaire. Based on the scores obtained in each condition requiring first aid, the overall knowledge was graded as good, moderate or poor.
Results: Only 11.2% (17/152) of the total student participants had previous exposure to first aid training. Good knowledge about first aid was observed in 13.8% (21/152), moderate knowledge in 68.4% (104/152) and poor knowledge in 17.8% (27/152) of participants. Analysis of knowledge about first aid management in select conditions found that 21% (32/152) had poor knowledge regarding first aid management for shock and for gastroesophageal reflux disease and 20.4% (31/152) for epistaxis and foreign body in the eyes. All students felt that first aid skills need to be taught from the school level onwards and all of them were willing to enroll in any formal first aid training sessions.
Conclusion: The level of knowledge about first aid was not good among majority of the students. The study also identified the key areas in which first aid knowledge was lacking. There is thus a need for formal first aid training to be introduced in the medical curriculum.
abstract_id: PUBMED:31143081
Knowledge and attitude of Saudi female university students about first aid skills. Background: First aid is the first and most essential life saving care that can reduce the morbidity of an individual in a health-threatening circumstance. The aim of this study was to assess the knowledge and attitude toward the provision of first aid among students attending Princess Norah University (PNU).
Materials And Methods: A cross-sectional study was conducted at PNU in Riyadh, Saudi Arabia, from October 2017 to December 2017. A total of 1000 female students from 15 different colleges completed a self-administered questionnaire.
Results: The mean age was 21 years (range 18-26); 36% study participants were from health colleges and remaining from other colleges. Only 34.7% had good knowledge, 57.5% had moderate knowledge, and 7.8% had poor knowledge on first aid skills. Analysis of knowledge in specific emergency situations showed that the students were more knowledgeable in cases of epistaxis, ingestion of toxins, burns, hypoglycemia, and loss of consciousness. However, they were found to be less knowledgeable in handling situations of seizures, choking, and snake bite. About 20.2% of the students had encountered situations where cardiopulmonary resuscitation was required and 65.3% of these students had not provided first aid because of the lack of knowledge, nervousness, and other issues. Good knowledge was associated with previous first aid training and being a student in a health college.
Conclusion: Overall, students had a positive attitude toward first aid; however, they still did not have the knowledge necessary to be able to act in emergency situations. There is a need for increased public health awareness. It is also advisable to introduce first aid courses in all universities and secondary schools.
abstract_id: PUBMED:19235749
Computer-assisted teaching of epistaxis management: a Randomized Controlled Trial. Objectives: To determine whether computer-assisted learning (CAL) is an effective tool for the instruction of technical skills.
Study Design: Prospective, blinded, randomized controlled trial conducted on a cohort of 47 first-year medical students.
Methods: Students were instructed on two techniques of nasal packing (formal nasal pack and nasal tampon) for the management of epistaxis, using either a standard text-based article or a novel computer-based learning module. Students were evaluated on proper nasal packing technique using standardized subjective and objective outcome measures by three board-certified otolaryngologists. Blind assessments took place prior to and following instruction, using the assigned learning modality.
Results: There were 47 participants enrolled in the study. Both groups demonstrated improvement in performance of both packing procedures following training. A significant post-training difference favoring CAL learners over text-based learners was observed, using the global assessment of skill for both packing techniques (P < .001). Additionally, a significant post-training difference favoring CAL learners over text-based learners was observed for all checklist items for the tampon pack and five of eight items on the formal pack checklist. The vast majority of students (94.6%) indicated that if given the choice, they would prefer to learn using CAL rather than by using text-based learning materials.
Conclusions: CAL learners demonstrated significantly greater improvement across both subjective and objective outcome measures when compared to the text-based group. Additionally, students favored learning via the CAL modality, which further suggests that CAL is a valuable means of imparting procedural knowledge to novice medical trainees.
abstract_id: PUBMED:27274331
A comparison of teaching three common ear, nose, and throat conditions to medical students through video podcasts and written handouts: a pilot study. Background: This pilot study conducted at the Peninsula Medical School is one of very few studies to compare the use of video podcasts to traditional learning resources for medical students.
Methods: We developed written handouts and video podcasts for three common ear, nose, and throat conditions: epistaxis, otitis media, and tonsillitis. Forty-one second-year students were recruited via email. Students completed a 60-item true-or-false statement test written by the senior author (20 questions per subject). Students were subsequently randomized to podcasts or handouts. Students were able to access their resource via their unique university login on the university homepage and were given 3 weeks to use it. They then completed the same 60-item test.
Results: Both podcasts and handouts demonstrated a statistically significant increase in student scores (podcast mean increase in scores 4.7, P=0.004, 95% confidence interval =0.07; handout mean increase in scores 5.3, P=0.015, 95% confidence interval =0.11). However, there was no significant difference (P=0.07) between the two, with the handout group scoring fractionally higher (podcast average post-exposure score =37.3 vs handout 37.8) with a larger average improvement. A 5-point Likert scale questionnaire demonstrated that medical students enjoy using reusable learning objects such as podcasts and feel that they should be used more in their curriculum.
Conclusion: Podcasts are as good as traditional handouts in teaching second-year medical students three core ear, nose, and throat conditions and enhance their learning experience.
abstract_id: PUBMED:22844212
Clinical experience of medical students at Universiti Sains Malaysia. Experience of acute medical and surgical conditions and clinical procedures of undergraduate students was assessed via a questionnaire survey during the final week of the 1993/1998 programme at the School of Medical Sciences, Universiti Sains Malaysia. Individual performances were assessed by a scoring system. One hundred and twenty-four students responded (response rate 97%). More than 90% had seen myocardial infarction, cerebrovascular accident, pneumonia, respiratory distress, gastroenteritis, coma, and snake bite. Less than 33% had witnessed acute psychosis, diabetic ketoacidosis, acute hepatic failure, status epilepticus, near drowning, hypertensive encephalopathy, acute haemolysis or child abuse. Acute surgical/obstetric cases, seen by >90% of students, included fracture of long bones, head injury, acute abdominal pain, malpresentation and foetal distress. Less than 33% had observed epistaxis, sudden loss of vision, peritonitis or burns. Among operations, only herniorrhaphy, Caesarean section, internal fixation of fracture and cataract extraction were seen by >80% of students. The main deficits in clinical procedures were in rectal and vaginal examinations, urine collection and microscopic examinations. The performance of individual students, assessed by a scoring system, showed 15 students had unacceptably low scores (<149/230, 50%), 37 had good scores (>181.4/230, 70%) and 5 had superior scores (197.6/230, 80%).
abstract_id: PUBMED:8245743
Assessment of physical education faculty students' knowledge about first aid. Physical Education Faculty students are highly exposed to different types of accidents, mainly fractures, sprains, strains, cramps, bleeding and wounds. The present study revealed that more than half of the students had correct knowledge about first aid for only three of eight injuries, namely fracture, cramp and bleeding. A lack of knowledge was noted regarding cut wounds, penetrating wounds, falls, sprains and epistaxis. It is recommended that a first aid course be taught by specialized medical personnel and that the course content include theoretical knowledge plus clinical practice of first aid performed in the correct way.
abstract_id: PUBMED:25649902
Surgical exploration and discovery program: inaugural involvement of otolaryngology - head and neck surgery. Background: There is significant variability in undergraduate Otolaryngology - Head and Neck Surgery (OTOHNS) curricula across Canadian medical schools. As part of an extracurricular program delivered jointly with other surgical specialties, the Surgical Exploration and Discovery (SEAD) program presents an opportunity for medical students to experience OTOHNS. The purpose of this study is to review the participation and outcome of OTOHNS in the SEAD program.
Methods: The SEAD program is a two-week, 80-hour, structured curriculum that exposes first-year medical students to nine surgical specialties across three domains: (1) operating room observerships, (2) career discussions with surgeons, and (3) simulation workshops. During observerships students watched or assisted in surgical cases over a 4-hour period. The one-hour career discussion provided a specialty overview and time for students' questions. The simulation included four stations, each run by a surgeon or resident; students rotated in small groups to each station: epistaxis, peritonsillar abscess, tracheostomy, and ear examination. Participants completed questionnaires before and after the program to evaluate changes in career interests; self-assessment of knowledge and skills was also completed following each simulation. Baseline and final evaluations were compared using the Wilcoxon Signed-Rank test.
Results: SEAD participants showed significant improvement in knowledge and confidence in surgical skills specific to OTOHNS. The greatest knowledge gain was in ear examination, and greatest gain in confidence was in draining peritonsillar abscesses. The OTOHNS session received a mean rating of 4.8 on a 5-point scale and was the most popular surgical specialty participating in the program. Eight of the 18 participants were interested in OTOHNS as a career at baseline; over the course of the program, two students gained interest and two lost interest in OTOHNS as a potential career path, demonstrating the potential for helping students refine their career choice.
Conclusions: Participants were able to develop OTOHNS knowledge and surgical skills as well as refine their perspective on OTOHNS as a potential career option. These findings demonstrate the potential benefits of OTOHNS departments/divisions implementing observerships, simulations, and career information sessions in pre-clerkship medical education, either in the context of SEAD or as an independent initiative.
abstract_id: PUBMED:31041224
Awareness about first aid management of epistaxis among medical students in Kingdom of Saudi Arabia. Background: Epistaxis is the bleeding from nose or nasal cavity and it is considered as one of the most common emergencies presenting in ear, nose, and throat department and accident and emergency department worldwide.
Objective And Aim: The aim of this study was to assess and evaluate knowledge, attitude, and practice of first aid management of epistaxis among medical students in the Kingdom of Saudi Arabia.
Materials And Methods: A cross-sectional, community-based study was conducted using an electronic questionnaire distributed among medical students all over the Kingdom of Saudi Arabia. The study was conducted between September and January 2018.
Results: Data were collected from 300 medical students from all over the Kingdom of Saudi Arabia using questionnaires, which were filled in electronically. The majority of the respondents were female (75.7%), whereas 24.3% were male. Most of the participants were from the fourth and fifth years, with 25.0 and 24.3%, respectively. 39.7% of the participants identified fingernail trauma as the commonest cause of epistaxis, followed by bleeding disorders in 17.3%. 64% of the respondents thought that epistaxis is an emergency condition that requires early intervention. 71% of the respondents demonstrated the correct position as a first aid measure for epistaxis, and only 41.3% demonstrated the correct site for pinching the nose. The main source of the respondents' knowledge regarding first aid management of epistaxis was self-teaching (53.67%), followed by medical books (23.33%).
Conclusion: Medical students in Saudi Arabia have adequate knowledge about epistaxis and the first aid measures that can be used to manage it.
abstract_id: PUBMED:19344612
Otolaryngology in undergraduate medical education. Objectives: Otolaryngologic problems are common in primary care but are underrepresented in undergraduate medical education (UME). A significant portion of the total structured teaching received by physicians in primary care specialties is spent in UME, but teaching of otolaryngology in UME is limited and not standardized across Canadian medical schools. This study assessed the content and structure of otolaryngology UME at Canadian undergraduate medical programs and postgraduate primary care programs and surveyed community otolaryngologists to identify important clinical topics for inclusion in an otolaryngology curriculum.
Methods: A survey was created based on previously validated and published curriculum needs assessment instruments for medical education. Surveys were distributed to (1) undergraduate directors of otolaryngology, (2) postgraduate program directors of family and emergency medicine at English-language medical schools, and (3) community otolaryngologists in Canada.
Results: There is large variation in the quantity of otolaryngology teaching in UME. Knowledge of otolaryngology is formally evaluated at half of the responding programs, although skills in otolaryngology are rarely tested. Clerkship rotations are not uniformly offered, and the length of these placements is limited. Opportunities in postgraduate training for formal education in otolaryngology are rare. Several important topics that are not uniformly taught include sudden sensorineural hearing loss, sleep apnea, and nasal trauma. Important skills that are not uniformly taught include treatment of benign paroxysmal positional vertigo and epistaxis.
Conclusions: There is limited time for teaching otolaryngology in UME; therefore, it is essential to emphasize teaching key topics and skills, which are highlighted in the results of this study.
Answer: Yes, a multisensory teaching approach can impart the necessary knowledge, skills, and confidence in final year medical students to manage epistaxis. A study evaluating the efficacy of a multisensory teaching approach, which included an audio presentation (PODcast), a video presentation on technical skills (VODcast), an Individual Readiness Assessment Test (IRAT), an expert demonstration on a human cadaver, and hands-on practice on cadaver simulators with expert guidance, showed that 82% of the participants achieved a score of at least 80% on the IRAT. There was a statistically significant improvement in both Confidence Level Questionnaires (CLQs) scores and Objective Structured Assessment of Technical Skills (OSATS) scores after the laboratory session. Qualitative feedback indicated a student preference for this teaching approach, suggesting that the multisensory intervention was effective in teaching fourth year medical students to manage epistaxis (PUBMED:24479815). |
Instruction: Do mammographic technologists affect radiologists' diagnostic mammography interpretative performance?
Abstracts:
abstract_id: PUBMED:25794085
Do mammographic technologists affect radiologists' diagnostic mammography interpretative performance? Objective: The purpose of this study was to determine whether the technologist has an effect on the radiologists' interpretative performance of diagnostic mammography.
Materials And Methods: Using data from a community-based mammography registry from 1994 to 2009, we identified 162,755 diagnostic mammograms interpreted by 286 radiologists and performed by 303 mammographic technologists. We calculated sensitivity, false-positive rate, and positive predictive value (PPV) of the recommendation for biopsy from mammography for examinations performed (i.e., images acquired) by each mammographic technologist, separately for conventional (film-screen) and digital modalities. We assessed the variability of these performance measures among mammographic technologists, using mixed effects logistic regression and taking into account the clustering of examinations within women, radiologists, and radiology practices.
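The sketch below shows, with hypothetical counts, how the three interpretive-performance measures named here are typically tallied per technologist once each examination is cross-classified against the cancer outcome; the exact positivity definitions follow the registry's conventions and are not spelled out in the abstract.

```python
def diagnostic_performance(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Tally-based measures for one technologist's examinations, where a
    'positive' exam is one with a recommendation for biopsy and tp/fp/fn/tn
    cross-classify that assessment against the cancer outcome."""
    return {
        "sensitivity": tp / (tp + fn),           # positive exams among women with cancer
        "false_positive_rate": fp / (fp + tn),   # positive exams among women without cancer
        "ppv_biopsy": tp / (tp + fp),            # cancers among exams recommended for biopsy
    }

# Hypothetical counts for one technologist's diagnostic examinations.
print(diagnostic_performance(tp=83, fp=450, fn=17, tn=4450))
```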
Results: Among the 291 technologists performing conventional examinations, mean sensitivity of the examinations performed was 83.0% (95% CI, 80.8-85.2%), mean false-positive rate was 8.5% (95% CI, 8.0-9.0%), and mean PPV of the recommendation for biopsy from mammography was 27.1% (95% CI, 24.8-29.4%). For the 45 technologists performing digital examinations, mean sensitivity of the examinations they performed was 79.6% (95% CI, 73.1-86.2%), mean false-positive rate was 8.8% (95% CI, 7.5-10.0%), and mean PPV of the recommendation for biopsy from mammography was 23.6% (95% CI, 18.8-28.4%). We found significant variation by technologist in the sensitivity, false-positive rate, and PPV of the recommendation for biopsy from mammography for conventional but not digital mammography (p < 0.0001 for all three interpretive performance measures).
Conclusion: Our results suggest that the technologist has an influence on radiologists' interpretive performance for diagnostic conventional but not digital mammography. Future studies should examine why this difference between modalities exists and determine if similar patterns are observed for screening mammography.
abstract_id: PUBMED:25435185
The influence of mammographic technologists on radiologists' ability to interpret screening mammograms in community practice. Rationale And Objectives: To determine whether the mammographic technologist has an effect on the radiologists' interpretative performance of screening mammography in community practice.
Materials And Methods: In this institutional review board-approved retrospective cohort study, we included Carolina Mammography Registry data from 372 radiologists and 356 mammographic technologists from 1994 to 2009 who performed 1,003,276 screening mammograms. Measures of interpretative performance (recall rate, sensitivity, specificity, positive predictive value [PPV1], and cancer detection rate [CDR]) were ascertained prospectively with cancer outcomes collected from the state cancer registry and pathology reports. To determine if the mammographic technologist influenced the radiologists' performance, we used mixed effects logistic regression models, including a radiologist-specific random effect and taking into account the clustering of examinations across women, separately for screen-film mammography (SFM) and full-field digital mammography (FFDM).
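As a rough illustration of modelling a binary performance outcome while accounting for clustering by radiologist, the sketch below fits a clustered logistic model on simulated data. Note that it uses a GEE with an exchangeable working correlation as a stand-in for the radiologist-specific random-effects formulation the authors describe, and all column names and values are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical exam-level data: recall outcome, performing technologist,
# interpreting radiologist, and patient age.
rng = np.random.default_rng(2)
n = 2000
exams = pd.DataFrame({
    "recalled": rng.binomial(1, 0.1, n),
    "technologist_id": rng.integers(0, 5, n),
    "radiologist_id": rng.integers(0, 10, n),
    "patient_age": rng.integers(40, 75, n),
})

model = smf.gee(
    "recalled ~ C(technologist_id) + patient_age",   # technologist as a fixed effect
    groups="radiologist_id",                         # exams cluster within radiologists
    data=exams,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```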
Results: Of the 356 mammographic technologists included, 343 performed 889,347 SFM examinations, 51 performed 113,929 FFDM examinations, and 38 performed both SFM and FFDM examinations. A total of 4328 cancers were reported for SFM and 564 cancers for FFDM. The technologists had a statistically significant effect on the radiologists' recall rate, sensitivity, specificity, and CDR for both SFM and FFDM (P values <.01). For PPV1, variability by technologist was observed for SFM (P value <.0001) but not for FFDM (P value = .088).
Conclusions: The interpretative performance of radiologists in screening mammography varies substantially by the technologist performing the examination. Additional studies should aim to identify technologist characteristics that may explain this variation.
abstract_id: PUBMED:27438069
Radiologists' interpretive skills in screening vs. diagnostic mammography: are they related? Purpose: This study aims to determine whether radiologists who perform well in screening also perform well in interpreting diagnostic mammography.
Materials And Methods: We evaluated the accuracy of 468 radiologists interpreting 2,234,947 screening and 196,164 diagnostic mammograms. Adjusting for site, radiologist, and patient characteristics, we identified radiologists with performance in the highest tertile and compared to those with lower performance.
Results: A moderate correlation was noted between radiologists' accuracy when interpreting screening examinations and their accuracy on diagnostic examinations: sensitivity (Spearman r = 0.51, 95% CI: 0.22, 0.80; P = .0006) and specificity (Spearman r = 0.40, 95% CI: 0.30, 0.49; P < .0001).
Conclusion: Different educational approaches to screening and diagnostic imaging should be considered.
abstract_id: PUBMED:27062490
Impact of Breast Reader Assessment Strategy on mammographic radiologists' test reading performance. Introduction: The detection of breast cancer is somewhat limited by human factors, and thus there is a need to improve reader performance. This study assesses whether radiologists who regularly undertake the education in the form of the Breast Reader Assessment Strategy (BREAST) demonstrate any changes in mammography interpretation performance over time.
Methods: In 2011, 2012 and 2013, 14 radiologists independently assessed a year-specific BREAST mammographic test-set. Radiologists read a different single test-set once each year, each comprising 60 digital mammogram cases. Radiologists marked the location of suspected lesions without computer-aided diagnosis (CAD) and assigned a confidence rating of 2 for benign and 3-5 for malignant lesions. The mean sensitivity, specificity, location sensitivity, JAFROC FOM and ROC AUC were calculated. A Kruskal-Wallis test was used to compare the readings of the 14 radiologists across the 3 years, and a Wilcoxon signed rank test was used to compare pairs of years. Relationships between changes in performance and radiologist characteristics were examined using Spearman's test.
Results: Significant increases were noted in mean sensitivity (P = 0.01), specificity (P = 0.01), location sensitivity (P = 0.001) and JAFROC FOM (P = 0.001) between 2011 and 2012. Between 2012 and 2013, significant improvements were noted in mean sensitivity (P = 0.003), specificity (P = 0.002), location sensitivity (P = 0.02), JAFROC FOM (P = 0.005) and ROC AUC (P = 0.008). No statistically significant correlations were shown between the levels of improvement and radiologists' characteristics.
Conclusion: Radiologists who undertake the BREAST programme demonstrate significant improvements in test-set performance over a 3-year period, highlighting the value of ongoing education through the use of test-sets.
abstract_id: PUBMED:12202726
Performance parameters for screening and diagnostic mammography: specialist and general radiologists. Purpose: To evaluate performance parameters for radiologists in a practice of breast imaging specialists and general diagnostic radiologists who interpret a large series of consecutive screening and diagnostic mammographic studies.
Materials And Methods: Data (ie, patient age; family history of breast cancer; availability of previous mammograms for comparison; and abnormal interpretation, cancer detection, and stage 0-I cancer detection rates) were derived from review of mammographic studies obtained from January 1997 through August 2001. The breast imaging specialists have substantially more initial training in mammography and at least six times more continuing education in mammography, and they interpret 10 times more mammographic studies per year than the general radiologists. Differences between specialist and general radiologist performances at both screening and diagnostic examinations were assessed for significance by using Student t and chi(2) tests.
Results: The study involved 47,798 screening and 13,286 diagnostic mammographic examinations. Abnormal interpretation rates for screening mammography (ie, recall rate) were 4.9% for specialists and 7.1% for generalists (P <.001); and for diagnostic mammography (ie, recommended biopsy rate), 15.8% and 9.9%, respectively (P <.001). Cancer detection rates at screening mammography were 6.0 cancer cases per 1,000 examinations for specialists and 3.4 per 1,000 for generalists (P =.007); and at diagnostic mammography, 59.0 per 1,000 and 36.6 per 1,000, respectively (P <.001). Stage 0-I cancer detection rates at screening mammography were 5.3 cancer cases per 1,000 examinations for specialists and 3.0 per 1,000 for generalists (P =.012); and at diagnostic mammography, 43.9 per 1,000 and 27.0 per 1,000, respectively (P <.001).
Conclusion: Specialist radiologists detect more cancers and more early-stage cancers, recommend more biopsies, and have lower recall rates than general radiologists.
abstract_id: PUBMED:27130056
The inter-observer variability of breast density scoring between mammography technologists and breast radiologists and its effect on the rate of adjuvant ultrasound. Purpose: This study assesses the inter-observer variability of mammographic breast density scoring (BDS) between technologists and radiologists and evaluates the effect of technologist patient referral on the load of adjuvant ultrasounds.
Materials And Methods: In this IRB-approved study, a retrospective analysis of 503 prospectively acquired, random mammograms was performed between January and March 2014. Each mammogram was evaluated for BDS independently and blindly by both the performing technologist and the interpreting radiologist. The Spearman correlation coefficient and weighted kappa were calculated to evaluate the inter-observer variability between technologists and radiologists and to examine whether it was related to the technologist's seniority or the women's age. The effect on the load of adjuvant ultrasounds was evaluated.
Results: 10 mammography technologists and 7 breast radiologists participated in this study. BDS agreement levels between technologists and radiologists were in the fair to moderate range (kappa values: 0.3-0.45, Spearman coefficient values: 0.59-0.65). The technologists markedly over-graded the density compared to the radiologists in all the subsets evaluated. Comparison between low and high-density groups demonstrated a similar trend of over-grading by technologists, who graded 51% of the women as having dense breasts (scores 3-4) compared to 27% of the women graded as such by the radiologists. This trend of over grading breast density by technologists was unrelated to the women's age or to the technologists' seniority.
Conclusion: Mammography technologists over-grade breast density. Technologists' referral to an adjuvant ultrasound leads to redundant ultrasound studies, unnecessary breast biopsies, costs and increased patient anxiety.
abstract_id: PUBMED:32623914
Does mammographic density remain a radiological challenge in the digital era? Background: The low subject contrast between cancerous and fibroglandular tissue could obscure breast abnormalities.
Purpose: To investigate radiologists' performance for detection of breast cancer in low and high mammographic density (MD) when cases are digitally acquired.
Material And Methods: A test set of 60 digital mammography cases, of which 20 were cancerous, was examined by 17 radiologists. Mammograms were categorized as low (≤50%) or high (>50%) MD and rated for suspicion of malignancy using the Royal Australian and New Zealand College of Radiologists (RANZCR) classification system. Radiologist demographics, including cases read per year, age, subspecialty, and years of reporting, were recorded. Radiologist performance was analyzed using the following metrics: sensitivity, specificity, area under the receiver operating characteristic (ROC) curve (AUC), location sensitivity, and jackknife free-response ROC (JAFROC) figure of merit (FOM).
Results: Comparing high to low MD cases, radiologists showed a significantly higher sensitivity (P = 0.015), AUC (P = 0.003), location sensitivity (P = 0.002), and JAFROC FOM (P = 0.001). In high compared to low MD cases, radiologists with <1000 annual reads and radiologists with no mammographic subspecialty had significantly higher AUC, location sensitivity, and JAFROC FOM. Radiologists with ≥1000 annual reads and radiologists with mammography subspecialty demonstrated a significant increase in location sensitivity in high compared to low MD cases.
Conclusion: In this experimental situation, radiologists' performance was higher when reading cases with high compared to low MD. Experienced radiologists were able to precisely localize lesions in breasts with higher MD. Further studies in unselected screening materials are needed to verify the results.
abstract_id: PUBMED:30803217
Radiologists' Performance at Reduced Recall Rates in Mammography: A Laboratory Study. Rationale And Objectives: Target recall rates are often used as a performance indicator in mammography screening programs with the intention of reducing false-positive decisions, overdiagnosis and anxiety for participants. However, the relationship between target recall rates and cancer detection is unclear, especially when readers are directed to adhere to a predetermined rate. The purpose of this study was to explore the effect of setting different recall rates on radiologists' performance.
Materials And Methods: Institutional ethics approval was granted and informed consent was obtained from each participating radiologist. Five experienced breast imaging radiologists read a single test set of 200 mammographic cases (20 abnormal and 180 normal). The radiologists were asked to identify the cases they would recall under three different recall conditions (free recall, 15% and 10%) and to mark the location of any suspicious lesions.
Results: Wide variability in recall rates was observed when reading at free recall, ranging from 18.5% to 34.0%. Readers demonstrated significantly reduced performance when reading at prescribed recall rates, with lower sensitivity (H = 12.891, P = 0.002), case location sensitivity (H = 12.512, P = 0.002) and ROC AUC (H = 11.601, P = 0.003), albeit with increased specificity (H = 12.704, P = 0.002). However, no significant changes were evident in lesion location sensitivity (H = 1.982, P = 0.371) or JAFROC FOM (H = 1.820, P = 0.403).
Conclusion: In this laboratory study, reducing the number of recalled cases to 10% significantly reduced radiologists' performance, with lower detection sensitivity, although a significant improvement in specificity was observed.
abstract_id: PUBMED:36611409
Comparison of Diagnostic Performance in Mammography Assessment: Radiologist with Reference to Clinical Information Versus Standalone Artificial Intelligence Detection. We compared the diagnostic performance of radiologists reading with reference to clinical information against standalone artificial intelligence (AI) detection of breast cancer on digital mammography. This study included 392 women (average age: 57.3 ± 12.1 years, range: 30-94 years) diagnosed with malignancy between January 2010 and June 2021 who underwent digital mammography prior to biopsy. Two radiologists assessed mammographic findings based on clinical symptoms and prior mammography. All mammograms were analyzed by AI. Breast cancer detection performance was compared between radiologists and AI according to whether the lesion location identified by each method (radiologists or AI) was concordant with the pathological results. The kappa coefficient was used to measure the concordance between the radiologists' or AI analysis and the pathology results. Binomial logistic regression analysis was performed to identify factors influencing the concordance between the radiologists' analysis and the pathology results. Overall, concordance was higher for the radiologists' diagnosis than for the AI analysis (kappa coefficient: 0.819 vs. 0.698). Prior mammography (odds ratio (OR): 8.55, p < 0.001), clinical symptoms (OR: 5.49, p < 0.001), and fatty breast density (OR: 5.18, p = 0.008) were important factors contributing to the concordance of lesion location between the radiologists' diagnosis and the pathology results.
abstract_id: PUBMED:29322305
Construction of mammographic examination process ontology using bottom-up hierarchical task analysis. Describing complex mammography examination processes is important for improving the quality of mammograms. It is often difficult for experienced radiologic technologists to explain the process because their techniques depend on their experience and intuition. In our previous study, we analyzed the process using a new bottom-up hierarchical task analysis and identified key components of the process. Leveraging the results of the previous study, the purpose of this study was to construct a mammographic examination process ontology to formally describe the relationships between the process and image evaluation criteria to improve the quality of mammograms. First, we identified and created root classes: task, plan, and clinical image evaluation (CIE). Second, we described an "is-a" relation referring to the result of the previous study and the structure of the CIE. Third, the procedural steps in the ontology were described using the new properties: "isPerformedBefore," "isPerformedAfter," and "isPerformedAfterIfNecessary." Finally, the relationships between tasks and CIEs were described using the "isAffectedBy" property to represent the influence of the process on image quality. In total, there were 219 classes in the ontology. By introducing new properties related to the process flow, a sophisticated mammography examination process could be visualized. In relationships between tasks and CIEs, it became clear that the tasks affecting the evaluation criteria related to positioning were greater in number than those for image quality. We developed a mammographic examination process ontology that makes knowledge explicit for a comprehensive mammography process. Our research will support education and help promote knowledge sharing about mammography examination expertise.
Answer: Yes, mammographic technologists do affect radiologists' diagnostic mammography interpretative performance. A study using data from a community-based mammography registry found significant variation by technologist in the sensitivity, false-positive rate, and positive predictive value (PPV) of the recommendation for biopsy from mammography for conventional but not digital mammography (PUBMED:25794085). Another study in community practice also showed that the interpretative performance of radiologists in screening mammography varies substantially by the technologist performing the examination, with a statistically significant effect on the radiologists' recall rate, sensitivity, specificity, and cancer detection rate (CDR) for both screen-film mammography (SFM) and full-field digital mammography (FFDM) (PUBMED:25435185). These findings suggest that the technologist's role in performing the mammogram can influence the outcomes of the radiologist's interpretation. |
Instruction: Health services changes: is a run-in period necessary before evaluation in randomised clinical trials?
Abstracts:
abstract_id: PUBMED:24479729
Health services changes: is a run-in period necessary before evaluation in randomised clinical trials? Background: Most randomised clinical trials (RCTs) testing a new health service do not allow a run-in period of consolidation before evaluating the new approach. Consequently, health professionals involved may feel insufficiently familiar or confident, or that new processes or systems that are integral to the service are insufficiently embedded in routine care prior to definitive evaluation in a RCT. This study aimed to determine the optimal run-in period for a new physiotherapy-led telephone assessment and treatment service known as PhysioDirect and whether a run-in was needed prior to evaluating outcomes in an RCT.
Methods: The PhysioDirect trial assessed whether PhysioDirect was as effective as usual care. Prior to the main trial, a run-in of up to 12 weeks was permitted so that physiotherapists could become confident in delivering the new service. Outcomes collected in both the run-in and the main trial were the length of telephone calls within the PhysioDirect service, and patients' physical function (SF-36v2 questionnaire) and Measure Yourself Medical Outcome Profile v2, collected at baseline and at six months. Joinpoint regression determined how long it had taken call times to stabilise. Analysis of covariance determined whether patients' physical function at six months changed from the run-in to the main trial.
Results: Mean PhysioDirect call times (minutes) were higher in the run-in (31 (SD: 12.6)) than in the main trial (25 (SD: 11.6)). Each physiotherapist needed to answer 42 (95% CI: 20, 56) calls for their mean call time to stabilise at 25 minutes per call; this took a minimum of seven weeks. For patients' physical function, PhysioDirect was as clinically effective as usual care during both the run-in (0.17 (95% CI: -0.91, 1.24)) and the main trial (-0.01 (95% CI: -0.80, 0.79)).
Conclusions: A run-in was not needed in a large trial testing PhysioDirect services in terms of patient outcomes. A learning curve was evident in the process measure of telephone call length. This decreased during the run-in and stabilised prior to commencement of the main trial. Future trials should build in a run-in if it is anticipated that learning would have an effect on patient outcome.
Trial Registration: Current Controlled Trials, ISRCTN55666618.
abstract_id: PUBMED:7596208
Methods of randomized controlled clinical trials in health services research. The randomized controlled clinical trial is an increasingly used method in health services research. Analysis of methodology is needed to accelerate practical implementation of trial results, select trials for meta-analysis, and improve trial quality in health services research. The objectives of this study are to explore the methodology of health services research trials, create and validate a streamlined quality evaluation tool, and identify frequent quality defects and confounding effects on quality. The authors developed a quality questionnaire that contained 20 evaluation criteria for health services research trials. One hundred one trials from the Columbia Registry of Controlled Clinical Trials were evaluated using the new quality tool. The overall agreement between independent reviewers, Cohen's kappa, was 0.94 (+/- 0.01). Of a possible score of 100, the trials received an average score of 54.8 (+/- 12.5). Five evaluation criteria indicated significant quality deficiencies (sample size, description of case selection, data on possible adverse effects, analysis of secondary effect variables, and retrospective analysis). The quality of study characteristics was significantly weaker than the quality of reporting characteristics (P < 0.001). The total average scores of Medline-indexed journals were better than the non-Medline-indexed journals (P < 0.001). There was a positive correlation between the overall quality and year of publication (R = 0.21, P < 0.05). The authors conclude that the new quality evaluation tool leads to replicable results and there is an urgent need to improve several study characteristics of clinical trials. In comparison to drug trials, site selection, randomization, and blinding often require different approaches in health services research.
abstract_id: PUBMED:33168067
Process evaluation within pragmatic randomised controlled trials: what is it, why is it done, and can we find it?-a systematic review. Background: Process evaluations are increasingly conducted within pragmatic randomised controlled trials (RCTs) of health services interventions and provide vital information to enhance understanding of RCT findings. However, issues pertaining to process evaluation in this specific context have been little discussed. We aimed to describe the frequency, characteristics, labelling, value, practical conduct issues, and accessibility of published process evaluations within pragmatic RCTs in health services research.
Methods: We used a 2-phase systematic search process to (1) identify an index sample of journal articles reporting primary outcome results of pragmatic RCTs published in 2015 and then (2) identify all associated publications. We used an operational definition of process evaluation based on the Medical Research Council's process evaluation framework to identify both process evaluations reported separately and process data reported in the trial results papers. We extracted and analysed quantitative and qualitative data to answer review objectives.
Results: From an index sample of 31 pragmatic RCTs, we identified 17 separate process evaluation studies. These had varied characteristics and only three were labelled 'process evaluation'. Each of the 31 trial results papers also reported process data, with a median of five different process evaluation components per trial. Reported barriers and facilitators related to real-world collection of process data, recruitment of participants to process evaluations, and health services research regulations. We synthesised a wide range of reported benefits of process evaluations to interventions, trials, and wider knowledge. Visibility was often poor, with 13/17 process evaluations not mentioned in the trial results paper and 12/16 process evaluation journal articles not appearing in the trial registry.
Conclusions: In our sample of reviewed pragmatic RCTs, the meaning of the label 'process evaluation' appears uncertain, and the scope and significance of the term warrant further research and clarification. Although there were many ways in which the process evaluations added value, they often had poor visibility. Our findings suggest approaches that could enhance the planning and utility of process evaluations in the context of pragmatic RCTs.
Trial Registration: Not applicable for PROSPERO registration.
abstract_id: PUBMED:30027875
A realist approach to the evaluation of complex mental health interventions. Conventional approaches to evidence that prioritise randomised controlled trials appear increasingly inadequate for the evaluation of complex mental health interventions. By focusing on causal mechanisms and understanding the complex interactions between interventions, patients and contexts, realist approaches offer a productive alternative. Although the approaches might be combined, substantial barriers remain. Declaration of interest: All authors had financial support from the National Institute for Health Research Health Services and Delivery Research Programme while completing this work. The views and opinions expressed therein are those of the authors and do not necessarily reflect those of the National Health Service, the National Institute for Health Research, the Medical Research Council, Central Commissioning Facility, National Institute for Health Research Evaluation, Trials and Studies Coordinating Centre, the Health Services and Delivery Research Programme or the Department of Health. S.P.S. is part funded by Collaboration for Leadership in Applied Health Research and Care West Midlands. K.B. is editor of the British Journal of Psychiatry.
abstract_id: PUBMED:10576327
Clinical trials versus mental health services research: contributions and connections. The growing emphasis on using empirical data to guide mental health policy decision making has contributed, in part, to a developing dichotomy along the continuum of research on mental health interventions. At one end of the continuum is research on the efficacy of mental health interventions, traditionally referred to as clinical trials research. The goal of clinical trials research is to determine whether or not a specific intervention can be shown to be efficacious for a specific problem. At the other end of the continuum is research on the implementation and evaluation of mental health interventions, traditionally referred to as mental health services research. The goals of mental health services research are to understand the access to, organization and financing of, and outcomes of mental health interventions. The conceptual, methodological, and measurement features of both types of research are presented and suggestions are offered to bridge the gap between the two paradigms. The purpose of this article is to highlight each discipline's unique contributions to mental health research and, in so doing, facilitate a discussion that fosters scientific integration and collaboration between clinical trials and mental health services investigators.
abstract_id: PUBMED:27514851
A systematic review of randomised control trials of sexual health interventions delivered by mobile technologies. Background: Sexually transmitted infections (STIs) pose a serious public health problem globally. The rapid spread of mobile technology creates an opportunity to use innovative methods to reduce the burden of STIs. This systematic review identified recent randomised controlled trials that employed mobile technology to improve sexual health outcomes.
Methods: The following databases were searched for randomised controlled trials of mobile technology-based sexual health interventions with any outcome measures and all patient populations: MEDLINE, EMBASE, PsycINFO, Global Health, The Cochrane Library (Cochrane Database of Systematic Reviews, Cochrane Central Register of Controlled Trials, Cochrane Methodology Register, NHS Health Technology Assessment Database), and Web of Science (science and social science citation index) (Jan 1999-July 2014). Interventions designed to increase adherence to HIV medication were not included. Two authors independently extracted data on the following elements: interventions, allocation concealment, allocation sequence, blinding, completeness of follow-up, and measures of effect. Trials were assessed for methodological quality using the Cochrane risk of bias tool. We calculated effect estimates using intention-to-treat analysis.
Results: A total of ten randomised trials were identified with nine separate study groups. No trials had a low risk of bias. The trials targeted: 1) promotion of uptake of sexual health services, 2) reduction of risky sexual behaviours and 3) reduction of recall bias in reporting sexual activity. Interventions employed up to five behaviour change techniques. Meta-analysis was not possible due to heterogeneity in trial assessment and reporting. Two trials reported statistically significant improvements in the uptake of sexual health services using SMS reminders compared to controls. One trial increased knowledge. One trial reported promising results in increasing condom use but no trial reported statistically significant increases in condom use. Finally, one trial showed that collection of sexual health information using mobile technology was acceptable.
Conclusions: The findings suggest interventions delivered by SMS interventions can increase uptake of sexual health services and STI testing. High quality trials of interventions using standardised objective measures and employing a wider range of behavioural change techniques are needed to assess if interventions delivered by mobile phone can alter safer sex behaviours carried out between couples and reduce STIs.
abstract_id: PUBMED:9551277
Health services clinical trials: design, conduct, and cost methodology. Randomized clinical trials and their developing methodology have had substantial impact on the advancement of medical practice. With the emergence of managed care and increased emphasis on the reduction of medical care expenditures, cost evaluation is now becoming a part of clinical trial research. The papers by Henderson et al. and Manheim that follow address the evolution of health services research, its application to multicenter clinical trials in a major U.S. health-care system, and methods of assessing costs in health services clinical trials.
abstract_id: PUBMED:12459946
Randomised controlled trials in mental health services research: practical problems of implementation. This article outlines problems of implementation and clinical practice of randomised controlled trials in mental health services. Furthermore, it offers practical solutions, taking into account the experiences with a randomisation process in a multi-site EC-funded (EDEN) study on the evaluation of acute treatment in psychiatric day hospitals. Identification of the problems follows the time-course of a research project: 1. Problems to be solved prior to the study's commencement: Definition of the eligibility criteria, informing clinically working colleagues. 2. Problems referring to the process of randomisation: Influence of clinical experience of the research fellows, precise time-point of implementing the randomisation into the process of admission, assessment of the patient's ability to give informed consent, patient's refusal of randomisation but agreement to study participation, availability of treatment places. 3. Problems which might occur after randomisation: Early break-off of treatment, transfer from one treatment setting to another. General conclusion: Detailed definitions of the randomisation procedure do not guarantee high performance quality and randomisation rates. Continuous precise assessment of the implementation into the clinical routines of every study centre, adaptation according to specific conditions and personal discussions with all participants are obligatory to establish and maintain a high quality of this important research procedure.
abstract_id: PUBMED:12458241
Involving users in the delivery and evaluation of mental health services: systematic review. Objectives: To identify evidence from comparative studies on the effects of involving users in the delivery and evaluation of mental health services.
Data Sources: English language articles published between January 1966 and October 2001 found by searching electronic databases.
Study Selection: Systematic review of randomised controlled trials and other comparative studies of involving users in the delivery or evaluation of mental health services.
Data Extraction: Patterns of delivery of services by employees who use or who used to use the service and professional employees and the effects on trainees, research, or clients of mental health services.
Results: Five randomised controlled trials and seven other comparative studies were identified. Half of the studies considered involving users in managing cases. Involving users as employees of mental health services led to clients having greater satisfaction with personal circumstances and less hospitalisation. Providers of services who had been trained by users had more positive attitudes toward users. Clients reported being less satisfied with services when interviewed by users.
Conclusions: Users can be involved as employees, trainers, or researchers without detrimental effect. Involving users with severe mental disorders in the delivery and evaluation of services is feasible.
abstract_id: PUBMED:29673378
Are randomised controlled trials positivist? Reviewing the social science and philosophy literature to assess positivist tendencies of trials of social interventions in public health and health services. Background: We have previously proposed that trials of social interventions can be done within a "realist" research paradigm. Critics have countered that such trials are irredeemably positivist and asked us to explain our philosophical position.
Methods: We set out to explore what is meant by positivism and whether trials adhere to its tenets (of necessity or in practice) via a narrative literature review of social science and philosophical discussions of positivism, and of the trials literature and three case studies of trials.
Results: The philosophical literature described positivism as asserting: (1) the epistemic primacy of sensory information; (2) the requirement that theoretical terms equate with empirical terms; (3) the aim of developing universal laws; and (4) the unity of method between natural and social sciences. Regarding (1), it seems that rather than embodying the epistemic primacy of sensory data, randomised controlled trials (RCTs) of social interventions in health embrace an anti-positivist approach aiming to test hypotheses derived deductively from prior theory. Considering (2), while some RCTs of social interventions appear to limit theorisation to concepts with empirical analogues, others examine interventions underpinned by theories engaging with mechanisms and contextual contingencies not all of which can be measured. Regarding (3), while some trialists and reviewers in the health field do limit their role to estimating statistical trends as a mechanistic form of generalisation, this is not an inevitable feature of RCT-based research. Trials of social interventions can instead aim to generalise at the level of theory which specifies how mechanisms are contingent on context. In terms of (4), while RCTs are used to examine biomedical as well as social interventions in health, RCTs of social interventions are often distinctive in using qualitative analyses of data on participant accounts to examine questions of meaning and agency not pursued in the natural sciences.
Conclusion: We conclude that the most appropriate paradigm for RCTs of social interventions is realism not positivism.
Answer: The necessity of a run-in period before evaluating health services changes in randomized clinical trials (RCTs) is a topic of debate. A study aimed to determine the optimal run-in period for a new physiotherapy-led telephone assessment and treatment service known as PhysioDirect. The PhysioDirect trial allowed a run-in of up to 12 weeks so that physiotherapists could become confident in delivering the new service. The outcomes measured were the length of telephone calls within the PhysioDirect service and patients' physical function. The results showed that mean PhysioDirect call times were higher in the run-in period than in the main trial, with each physiotherapist needing to answer 42 calls for their mean call time to stabilize at 25 minutes per call, which took a minimum of seven weeks. However, in terms of patient outcomes, PhysioDirect was as clinically effective as usual care during both the run-in and the main trial. The study concluded that a run-in was not needed in terms of patient outcomes, but a learning curve was evident in the process measure of telephone call length, which decreased during the run-in and stabilized prior to the commencement of the main trial. Future trials should build in a run-in if it is anticipated that learning would have an effect on patient outcomes (PUBMED:24479729).
This finding is consistent with the broader literature on RCTs in health services research, which often grapples with methodological challenges such as the need for quality evaluation tools, addressing quality defects, and adapting to the specific requirements of health services research compared to drug trials (PUBMED:7596208). Process evaluations within pragmatic RCTs are also conducted to enhance understanding of RCT findings, although their visibility is often poor and the scope and significance of the term 'process evaluation' warrant further research and clarification (PUBMED:33168067).
In the context of complex mental health interventions, conventional approaches to evidence that prioritize RCTs are seen as increasingly inadequate, with realist approaches offering a productive alternative by focusing on causal mechanisms and understanding complex interactions (PUBMED:30027875). The dichotomy between clinical trials research and mental health services research is also highlighted, with suggestions to foster scientific integration and collaboration between the two paradigms (PUBMED:10576327). |
Instruction: Can topical application of tranexamic acid reduce blood loss in thoracic surgery?
Abstracts:
abstract_id: PUBMED:35193803
Topical Application of Tranexamic Acid Can Reduce Postoperative Blood Loss in Calcaneal Fractures: A Randomized Controlled Trial. The traditional lateral "L" approach is common for managing calcaneal fractures with a drawback of significant blood loss. Yet there are no prospective studies on the hemostatic effect of the topical use of tranexamic acid (TXA) in calcaneal fracture surgeries. The purpose of this study was to evaluate the role of topical administration of TXA in reducing postoperative blood loss in calcaneal fractures. Forty participants were randomly distributed into the TXA group (n = 20) and the control group (n = 20). All participants underwent the same surgery via the lateral "L" approach. At the end of the operation, the surgical wound was irrigated with 80 mL 0.5 g/L TXA in the TXA group and 80 mL 0.9% sodium chloride in the control group, followed by the routine use of a drainage tube when closing the incision. Then, 20 mL 0.5 g/L TXA (TXA group) or 20 mL 0.9% sodium chloride solution (control group) was injected retrogradely into the wound through the drainage tube, which was clipped for 30 minutes thereafter. There were no significant differences in the baseline data between the 2 groups (p > .05). There was significantly less blood loss in the first 24 hours and total blood loss postoperation in the TXA group (p < .01). The surgical wounds healed well after surgery in both groups with no complication. We concluded that topical application of TXA in calcaneal fracture surgeries is a safe and useful method that can reduce postoperative blood loss.
abstract_id: PUBMED:24843337
Evaluations of topical application of tranexamic acid on post-operative blood loss in off-pump coronary artery bypass surgery. Objective: One of the major complications of cardiac surgery is the presence of post-operative bleeding. The aim of the present study was to investigate the topical application of tranexamic acid in the pericardial cavity on post-operative bleeding in off-pump coronary artery bypass graft (CABG) surgery.
Materials And Methods: This study was on 71 patients who underwent off-pump CABG. The anesthesia and surgery methods were the same for all patients. Patients were assigned to two equal groups. In the first group, 1 g of tranexamic acid in 100 mL of normal saline solution (NSS) was applied to pericardium and mediastinal cavity at the end of surgery. In the second group, only 100 mL of NSS was applied. Chest drainage of the patients after 24 h and the amounts of blood and blood products transfusion were also recorded during this time.
Results: Patients were the same regarding demographic information and surgery. The average volume of blood loss after 24 h was 366 mL for the first group and 788 mL for the control group. There was a statistically significant difference between the two groups (P < 0.001). The amount of packed red blood cell transfusion in the first group was less than that in the control group, although the difference was not statistically significant. There was no statistically significant difference between the two groups in post-operative hemoglobin, hematocrit, platelets, prothrombin time or partial thromboplastin time.
Conclusion: The topical application of tranexamic acid in off-pump CABG patients leads to decreased post-operative blood loss.
abstract_id: PUBMED:22842057
Can topical application of tranexamic acid reduce blood loss in thoracic surgery? A prospective randomised double blind investigation. Objective: The systemic or topical use of antifibrinolytic agents is effective in reducing postoperative bleeding and blood product transfusion in cardiac surgery. We sought to study the effect of the topical application of tranexamic acid into the pleural space to reduce postoperative bleeding after lung surgery.
Methods: This was a prospective randomised double-blind placebo-controlled investigation. From May 2010 to February 2012, 89 patients scheduled for pulmonary resection were randomly allocated to one of the two study groups. Group A received 5 g of tranexamic acid in 100 ml of saline solution. Group B received 100 ml of saline solution as placebo.
Results: Blood loss in the first 12 h was significantly less in Group A. The same trend was observed in the first 24 h but without reaching statistical significance. The mean volume of blood transfusion was statistically lower in Group A. Differences between the two groups in post-operative haemoglobin concentration, haematocrit, platelet count, international normalised ratio, fibrinogen and partial thromboplastin time were not statistically significant.
Conclusion: In our experience, the topical use of tranexamic acid after lung surgery reduces postoperative bleeding and blood transfusion volume. The topical administration of tranexamic acid is safe and does not increase the risk of post-operative complications related to pharmacological side-effects.
abstract_id: PUBMED:23881695
Topical application of tranexamic acid for the reduction of bleeding. Background: Intravenous tranexamic acid reduces bleeding in surgery; however, its effect on the risk of thromboembolic events is uncertain and an increased risk remains a theoretical concern. Because there is less systemic absorption following topical administration, the direct application of tranexamic acid to the bleeding surface has the potential to reduce bleeding with minimal systemic effects.
Objectives: To assess the effects of the topical administration of tranexamic acid in the control of bleeding.
Search Methods: We searched the Cochrane Injuries Group Specialised Register; Cochrane Central Register of Controlled Trials (CENTRAL) in The Cochrane Library; Ovid MEDLINE®, Ovid MEDLINE® In-Process & Other Non-Indexed Citations, Ovid MEDLINE® Daily and Ovid OLDMEDLINE®; Embase Classic + Embase (OvidSP); PubMed and ISI Web of Science (including Science Citation Index Expanded and Social Science Citation Index (SCI-EXPANDED & CPCI-S)). We also searched online trials registers to identify ongoing or unpublished trials. The search was run on the 31st May 2013.
Selection Criteria: Randomised controlled trials comparing topical tranexamic acid with no topical tranexamic acid or placebo in bleeding patients.
Data Collection And Analysis: Two authors examined the titles and abstracts of citations from the electronic databases for eligibility. Two authors extracted the data and assessed the risk of bias for each trial. Outcome measures of interest were blood loss, mortality, thromboembolic events (myocardial infarction, stroke, deep vein thrombosis and pulmonary embolism) and receipt of a blood transfusion.
Main Results: We included 29 trials involving 2612 participants. Twenty-eight trials involved patients undergoing surgery and one trial involved patients with epistaxis (nosebleed). Tranexamic acid (TXA) reduced blood loss by 29% (pooled ratio 0.71, 95% confidence interval (CI) 0.69 to 0.72; P < 0.0001). There was uncertainty regarding the effect on death (risk ratio (RR) 0.28, 95% CI 0.06 to 1.34; P = 0.11), myocardial infarction (RR 0.33, 95% CI 0.04 to 3.08; P = 0.33), stroke (RR 0.33, 95% CI 0.01 to 7.96; P = 0.49), deep vein thrombosis (RR 0.69, 95% CI 0.31 to 1.57; P = 0.38) and pulmonary embolism (RR 0.52, 95% CI 0.09 to 3.15; P = 0.48). TXA reduced the risk of receiving a blood transfusion by a relative 45% (RR 0.55, 95% CI 0.55 to 0.46; P < 0.0001). There was substantial statistical heterogeneity between trials for the blood loss and blood transfusion outcomes.
Authors' Conclusions: There is reliable evidence that topical application of tranexamic acid reduces bleeding and blood transfusion in surgical patients, however the effect on the risk of thromboembolic events is uncertain. The effects of topical tranexamic acid in patients with bleeding from non-surgical causes has yet to be reliably assessed. Further high-quality trials are warranted to resolve these uncertainties before topical tranexamic acid can be recommended for routine use.
abstract_id: PUBMED:29456563
Does topical tranexamic acid reduce postcoronary artery bypass graft bleeding? Background: Postoperative bleeding is a common problem in cardiac surgery. We tried to evaluate the effect of topical tranexamic acid (TA) on reducing postoperative bleeding of patients undergoing on-pump coronary artery bypass graft (CABG) surgery.
Materials And Methods: One hundred and twenty-six isolated primary CABG patients were included in this clinical trial. They were divided blindly into two groups: Group 1, patients receiving 1 g TA diluted in 100 ml normal saline poured into the mediastinal cavity before closing the chest, and Group 2, patients receiving 100 ml normal saline at the end of the operation. Chest tube drainage in the first 24 and 48 h, hemoglobin decrease, and packed RBC transfusion needs were compared.
Results: Both groups were the same in baseline characteristics including gender, age, body mass index, ejection fraction, clamp time, bypass time, and operation length. During the first 24 h postoperatively, mean chest tube drainage in intervention group was 567 ml compared to 564 ml in control group (P = 0.89). Mean total chest tube drainage was 780 ml in intervention group and 715 ml in control group (P = 0.27). There was no significant difference in both mean hemoglobin decrease (P = 0.26) and packed RBC transfusion (P = 0.7). Topical application of 1 g TA diluted in 100 ml normal saline does not reduce postoperative bleeding of isolated on-pump CABG surgery.
Conclusion: We do not recommend topical usage of 1 g TA diluted in 100 ml normal saline for decreasing blood loss in on-pump CABG patients.
abstract_id: PUBMED:19247741
Topical application of antifibrinolytic drugs for on-pump cardiac surgery: a systematic review and meta-analysis. Purpose: This systematic review aimed to evaluate the efficacy and safety of topical application of antifibrinolytic drugs to reduce postoperative bleeding and transfusion requirements in patients undergoing on-pump cardiac surgery.
Methods: We searched The Cochrane Library, MEDLINE, EMBASE and SCI-EXPANDED for all randomized controlled trials on the topic. Trial inclusion, quality assessment, and data extraction were performed independently by two authors. Standard meta-analytic techniques were applied.
Results: Eight trials (n = 622 patients) met our inclusion criteria. The medication/placebo was applied into the pericardial cavity and/or mediastinum at the end of cardiac surgery. Seven trials compared antifibrinolytic agents (aprotinin or tranexamic acid) versus placebo. They showed that, on average, topical use of antifibrinolytic agents reduced the amount of 24-h postoperative chest tube (blood) loss by 220 ml (95% confidence interval: -318 to -126, P < 0.00001, I² = 93%) and resulted in a saving of 1 unit of allogeneic red blood cells per patient (95% confidence interval: -1.54 to -0.53, P < 0.0001, I² = 55%). The incidence of blood transfusion was not significantly changed following topical application of the medications. One study comparing topical versus intravenous administration of aprotinin found comparable results between the two methods of administration for the above-mentioned outcomes. No adverse effects were reported following topical use of the medications.
Conclusion: This review suggests that topical application of antifibrinolytics can reduce postoperative bleeding and transfusion requirements in patients undergoing on-pump cardiac surgery. These promising findings need to be confirmed by more trials with large sample size using patient-related outcomes and more assessments regarding the systemic absorption of the medications.
abstract_id: PUBMED:31004853
Different Effects of Intravenous, Topical, and Combined Application of Tranexamic Acid on Patients with Thoracolumbar Fracture. Objective: To observe the efficacy of intravenous, topical, and combined application of tranexamic acid (TXA) in patients with thoracolumbar fracture fixed with percutaneous pedicle screw, and to identify the optimal application method of TXA.
Methods: A total of 181 patients with thoracolumbar fracture treated with percutaneous pedicle screw fixation were enrolled in this randomized controlled trial and were randomly classified into 3 groups, including group A (intravenous group), group B (topical group), and group C (combined group). The total blood loss (TBL), hidden blood loss (HBL), intraoperative blood loss (IBL), preoperative D-dimer, postoperative D-dimer, incidence of deep vein thrombosis (DVT), and incidence of other complications were compared and analyzed among the 3 groups.
Results: TBL, HBL, and IBL in the topical group 24 hours after operation were higher (P < 0.05) than those in the intravenous group and combined group, whereas the difference between the intravenous group and combined group was not statistically significant. Meanwhile, there was no statistically significant difference in operation time, preoperative D-dimer, and postoperative D-dimer among the 3 groups (P > 0.05), but D-dimer in all groups at 72 hours after surgery was higher than that before surgery. No DVT or other complication was observed in the patients.
Conclusions: Preoperative intravenous drip of TXA can remarkably reduce intraoperative HBL and IBL in patients with thoracolumbar fracture fixed with percutaneous pedicle screw. Nonetheless, intraoperative topical application of TXA before wound closure is not recommended.
abstract_id: PUBMED:37270493
Topical use of tranexamic acid can reduce opioid consumption compared with intravenous use for patients undergoing primary total hip arthroplasty: a prospective randomized controlled trial. Background: The problem of opioid addiction after total hip arthroplasty (THA) has been a widespread concern. Tranexamic acid (TXA) has been shown to be effective in reducing blood loss for patients undergoing THA, but few studies focus on its alleviation of postoperative local pain symptoms. The purpose of this study was to investigate whether topical TXA could reduce early postoperative hip pain for primary THA patients, thereby reducing the use of opioids, and whether local pain is related to the inflammatory response.
Methods: In this prospective randomized controlled study, we randomly divided 161 patients into a topical group (n = 79) and an intravenous group (n = 82). Hip pain was assessed using the visual analogue scale (VAS) score within three days after surgery, and tramadol was used for pain relief when necessary. Inflammatory markers such as high-sensitivity C-reactive protein (CRP), erythrocyte sedimentation rate (ESR) and interleukin-6 (IL-6), as well as total blood loss and hemoglobin drop, were assessed by hematologic tests. The primary outcomes included the VAS score and dose of tramadol from the first to the third day after surgery. The secondary outcomes included the inflammatory marker levels, total blood loss and complications.
Results: The pain score and inflammation marker levels on the first day in the topical TXA group were significantly lower than those in the intravenous TXA group (P < 0.05). The correlation analysis showed that the VAS score on the first day after surgery was positively correlated with the inflammation marker levels (P < 0.05). The tramadol dose in the topical group was lower than that in the intravenous group on the first and second days after surgery. There was no difference in total blood loss between the two groups (640.60 ± 188.12 ml vs. 634.20 ± 187.85 ml, P = 0.06). There was no difference in the incidence of complications.
Conclusion: Topical use of TXA could relieve the local pain symptoms and reduce opioid consumption compared with intravenous use for patients undergoing primary THA by reduce the early postoperative inflammatory response.
Trial Registration: The trial was registered at the China Clinical Trial Registry (ChiCTR2100052396) on 10/24/2021.
abstract_id: PUBMED:26349843
Randomized clinical trial of topical tranexamic acid after reduction mammoplasty. Background: The antifibrinolytic drug tranexamic acid is currently being rediscovered for both trauma and major surgery. Intravenous administration reduces the need for blood transfusion and blood loss by about one-third, but routine administration in surgery is not yet advocated owing to concerns regarding thromboembolic events. The aim of this study was to investigate whether topical application of tranexamic acid to a wound surface reduces postoperative bleeding.
Methods: This was a randomized double-blind placebo-controlled trial on 30 consecutive women undergoing bilateral reduction mammoplasty. On one side the wound surfaces were moistened with 25 mg/ml tranexamic acid before closure, and placebo (saline) was used on the other side. Drain fluid production was measured for 24 h after surgery, and pain was measured after 3 and 24 h. Postoperative complications including infection, seroma, rebleeding and suture reactions were recorded.
Results: Topical application of tranexamic acid to the wound surface after reduction mammoplasty reduced drain fluid production by 39 per cent (median 12·5 (range 0-44) versus 20·5 (0-100) ml; P = 0·038). Adverse effects were not observed. There were no significant differences in postoperative pain scores or complications.
Conclusion: Topical application of dilute tranexamic acid reduced bleeding in this model. The study adds to the evidence that this simple procedure may reduce wound bleeding after surgery.
Registration Number: NCT01964781 ( http://www.clinicaltrials.gov).
abstract_id: PUBMED:31653606
Does topical application of tranexamic acid reduce intraoperative bleeding in sinus surgery during general anesthesia? Introduction: Tranexamic acid is a hemostatic agent that inhibits fibrin degradation, which may be beneficial in controlling bleeding during surgery.
Objectives: The purpose of this study was to provide a meta-analysis and review of the effects of tranexamic acid on hemorrhage, the surgical field, and patient side effects during endoscopic sinus surgery.
Methods: Two authors independently searched six databases (Medline, Scopus, Embase, Web of Science, Google Scholar and the Cochrane Library) from the start of article collection until July 2018. Intraoperative bleeding, operative time, and postoperative outcomes such as hypotension, nausea, vomiting, and coagulation profile were included in the analysis comparing tranexamic acid (Treatment Group) and placebo (Control Group).
Results: The amount of blood loss during surgery was statistically lower in the treatment group compared to the placebo group, and the surgical field quality was statistically higher in the treatment group than in the placebo group. On the other hand, there was no significant difference in operation time, hemodynamics, or coagulation profile between groups. In addition, tranexamic acid had no significant effect on vomiting and thrombosis compared to the Control Group.
Conclusion: This meta-analysis has shown that topical administration of tranexamic acid can reduce the amount of bleeding during surgery and improve the overall quality of the surgery. Hemodynamic instability during surgery, vomiting after surgery, and abnormal clotting profiles were not reported. Additional studies are needed to confirm these results because the number of available studies is small.
Answer: Yes, the topical application of tranexamic acid can reduce blood loss in thoracic surgery. A prospective randomized double-blind investigation found that the topical use of tranexamic acid into the pleural space significantly reduced postoperative bleeding after lung surgery. The study reported that blood loss in the first 12 hours was significantly less in the group that received tranexamic acid topically. Additionally, the mean volume of blood transfusion was statistically lower in the group that received the topical application of tranexamic acid. The analysis between post-operative hemoglobin concentration, hematocrit, platelet count, international normalized ratio, fibrinogen, and partial thromboplastin time of both groups was not statistically significant, indicating that the topical application was safe without increasing the risk of post-operative complications related to pharmacological side effects (PUBMED:22842057). |
Instruction: Do behavioural self-blame and stigma predict positive health changes in survivors of lung or head and neck cancers?
Abstracts:
abstract_id: PUBMED:23544675
Do behavioural self-blame and stigma predict positive health changes in survivors of lung or head and neck cancers? Unlabelled: Survivors of lung or head and neck cancers often change tobacco and alcohol consumption after diagnosis, but few studies have examined other positive health changes (PHCs) or their determinants in these groups. The present study aims to: (a) document PHCs in survivors of lung (n = 107) or head and neck cancers (n = 99) and (b) examine behavioural self-blame and stigma as determinants of PHCs. We hypothesised that: (a) survivors would make a variety of PHCs; (b) behavioural self-blame for the disease would positively predict making PHCs; and (c) stigma would negatively predict making PHCs.
Methods: Respondents self-administered measures of PHC, behavioural self-blame, and stigma. Hierarchical multiple regression analysis tested the hypotheses.
Results: More than 65% of respondents reported making PHCs, the most common being changes in diet (25%), exercise (23%) and tobacco consumption (16.5%). Behavioural self-blame significantly predicted PHCs but stigma did not. However, both behavioural self-blame and stigma significantly predicted changes in tobacco consumption.
Conclusions: Many survivors of lung or head and neck cancers engage in PHCs, but those who do not attribute the disease to their behaviour are less likely to do so. Attention to this problem and additional counselling may help people to adopt PHCs.
abstract_id: PUBMED:21932417
The psychosocial impact of stigma in people with head and neck or lung cancer. Background: Lung and head and neck cancers are widely believed to produce psychologically destructive stigma because they are linked to avoidable risk-producing behaviors and are highly visible, but little research has tested these ideas. We examined cancer-related stigma, its determinants, and its psychosocial impact in lung (n = 107) and head and neck cancer survivors (n = 99) ≤ 3 years post-diagnosis. We investigated cancer site, self-blame, disfigurement, and sex as determinants, benefit finding as a moderator, and illness intrusiveness as a mediator of the relation between stigma and its psychosocial impact.
Methods: Prospective participants received questionnaire packages 2 weeks before scheduled follow-up appointments. They self-administered widely used measures of subjective well-being, distress, stigma, self-blame, disfigurement, illness intrusiveness, and post-traumatic growth.
Results: As hypothesized, stigma correlated significantly and uniquely with negative psychosocial impact, but contrary to common beliefs, reported stigma was comparatively low. Reported stigma was higher in (i) men than women, (ii) lung as compared with head and neck cancer, and (iii) people who were highly disfigured by cancer and/or its treatment. Benefit finding buffered stigma's deleterious effects, and illness intrusiveness was a partial mediator of its psychosocial impact.
Conclusions: Stigma exerts a powerful, deleterious psychosocial impact in lung and head and neck cancers, but is less common than believed. Patients should be encouraged to remain involved in valued activities and roles and to use benefit finding to limit its negative effects.
abstract_id: PUBMED:33420906
Health literacy impacts self-management, quality of life and fear of recurrence in head and neck cancer survivors. Purpose: Little is known about whether health literacy affects certain key outcomes in head and neck cancer (HNC) survivors. We investigated (i) the socio-demographic and clinical profile of health literacy and (ii) associations between health literacy and self-management behaviours, health-related quality of life (HRQL) and fear of recurrence (FoR) in HNC survivors.
Methods: A population-based survey was conducted in Ireland. Health literacy was assessed using a validated single-item question. Socio-demographic, clinical and psychosocial outcome variables (FoR, self-management behaviours, HRQL) were collected. Multivariable linear regression was performed to estimate associations between health literacy and each psychosocial outcome.
Results: Three hundred ninety-five (50%) individuals responded to the survey. Inadequate health literacy was evident in 47% of the sample. In adjusted models, HNC survivors with inadequate health literacy had significantly lower levels of self-management behaviours in the domains of health-directed behaviour, positive and active engagement in life, self-monitoring and insight, constructive attitudes and approaches, and skills and technique acquisition. Inadequate health literacy was independently associated with lower functional well-being and HNC disease-specific HRQL. FoR was also significantly higher among those with inadequate health literacy.
Conclusions: HNC survivors with inadequate health literacy have lower levels of self-management behaviours, lower functional HRQL and increased FoR compared to those with adequate health literacy.
Implications For Cancer Survivors: Clinicians, healthcare providers and those developing interventions should consider how inadequate health literacy among HNC survivors might affect post-treatment outcomes when developing services and providing support for this group.
abstract_id: PUBMED:33345659
The eHealth self-management application 'Oncokompas' that supports cancer survivors to improve health-related quality of life and reduce symptoms: which groups benefit most? Background: Oncokompas is a web-based self-management application that supports cancer survivors to monitor their health-related quality of life (HRQOL) and symptoms, and to obtain personalised feedback and tailored options for supportive care. In a large randomised controlled trial among survivors of head and neck cancer, colorectal cancer, and breast cancer and (non-)Hodgkin lymphoma, Oncokompas proved to improve HRQOL, and to reduce several tumour-specific symptoms. Effect sizes were however small, and no effect was observed on the primary outcome patient activation. Therefore, this study aims to explore which subgroups of cancer survivors may especially benefit from Oncokompas.
Materials And Methods: Cancer survivors (n = 625) were randomly assigned to the intervention group (access to Oncokompas, n = 320) or control group (6 months waiting list, n = 305). Outcome measures were HRQOL, tumour-specific symptoms, and patient activation. Potential moderators included socio-demographic (sex, age, marital status, education, employment), clinical (tumour type, stage, time since diagnosis, treatment modality, comorbidities), and personal factors (self-efficacy, personal control, health literacy, Internet use), and patient activation, mental adjustment to cancer, HRQOL, symptoms, and need for supportive care, measured at baseline. Linear mixed models were performed to investigate potential moderators.
Results: The intervention effect on HRQOL was largest among cancer survivors with low to moderate self-efficacy, and among those with high personal control and those with high health literacy scores. Cancer survivors with higher baseline symptom scores benefitted more with respect to head and neck cancer-specific symptoms (pain in the mouth, social eating, swallowing, coughing, trismus) and the colorectal cancer-specific symptom of weight.
Discussion: Oncokompas seems most effective in reducing symptoms in head and neck cancer and colorectal cancer survivors who report a higher burden of tumour-specific symptoms. Oncokompas seems most effective in improving HRQOL in cancer survivors with lower self-efficacy, and in cancer survivors with higher personal control, and higher health literacy.
abstract_id: PUBMED:38460391
What factors contribute to cancer survivors' self-management skills? A cross-sectional observational study. Purpose: Many cancer survivors, facing the consequences of their disease and its treatment, have medical and supportive aftercare needs. However, limited knowledge exists regarding the relationship between support needs and survivors' self-management skills. The study aim is to explore factors contributing to cancer survivors' self-management skills.
Methods: A cross-sectional study was conducted among cancer survivors (n = 277) of two outpatient oncology clinics at a university hospital in the Netherlands. Patients with head and neck cancer (n = 55) who had received radiotherapy and cisplatin or cetuximab were included, as well as patients who had undergone hematopoietic stem cell transplantation (n = 222). The primary outcome was self-management skills, assessed using the Partners in Health Scale (PIH), which comprises two subscales: knowledge and coping (PIH-KC), and recognition and management of symptoms, and adherence to treatment (PIH-MSA). Secondary outcomes were quality of life (EORTC QLQ-C30), self-efficacy (SECD6), patient-centered care (CAPHS), and social support (HEIQ). Machine learning-based Random Forest models were employed to construct associative models. Feature Importance (FI) was used to express the contribution to the model.
Results: High emotional quality of life (FI = 33.1%), increased self-efficacy (FI = 22.2%), and greater social support (FI = 18.2%) were identified as key factors contributing to cancer survivors' self-management knowledge (PIH-KC). Furthermore, greater support from professionals (FI = 36.1%) and higher self-efficacy (FI = 18.2%) were found to benefit participants' recognition and management, and therapy adherence (PIH-MSA).
Conclusions: A patient-centered relationship between nurses and cancer survivors is essential for therapy adherence and the management of aftercare needs. Training to provide this holistic self-management support is required.
abstract_id: PUBMED:31838009
Role of eHealth application Oncokompas in supporting self-management of symptoms and health-related quality of life in cancer survivors: a randomised, controlled trial. Background: Knowledge about the efficacy of behavioural intervention technologies that can be used by cancer survivors independently from a health-care provider is scarce. We aimed to assess the efficacy, reach, and usage of Oncokompas, a web-based eHealth application that supports survivors in self-management by monitoring health-related quality of life (HRQOL) and cancer-generic and tumour-specific symptoms and obtaining tailored feedback with a personalised overview of supportive care options.
Methods: In this non-blinded, randomised, controlled trial, we recruited patients treated at 14 hospitals in the Netherlands for head and neck cancer, colorectal cancer, breast cancer, Hodgkin lymphoma, or non-Hodgkin lymphoma. Adult survivors (aged ≥18 years) were recruited through the Netherlands Cancer Registry (NCR) and invited by their treating physician through the Patient Reported Outcomes Following Initial Treatment and Long term Evaluation of Survivorship (PROFILES) registry. Participants were randomly assigned (1:1) by an independent researcher to the intervention group (access to Oncokompas) or control group (access to Oncokompas after 6 months), by use of block randomisation (block length of 68), stratified by tumour type. The primary outcome was patient activation (knowledge, skills, and confidence for self-management), assessed at baseline, post-intervention, and 3-month and 6-month follow-up. Linear mixed models (intention-to-treat) were used to assess group differences over time from baseline to 6-month follow-up. The trial is registered in the Netherlands Trial Register, NTR5774 and is completed.
Findings: Between Oct 12, 2016, and May 24, 2018, 625 (21%) of 2953 survivors assessed for eligibility were recruited and randomly assigned to the intervention (320) or control group (305). Median follow-up was 6 months (IQR 6-6). Patient activation was not significantly different between intervention and control group over time (difference at 6-month follow-up 1·7 [95% CI -0·8 to 4·1], p=0·41).
Interpretation: Oncokompas did not improve the amount of knowledge, skills, and confidence for self-management in cancer survivors. This study contributes to the evidence for the development of tailored strategies for development and implementation of behavioural intervention technologies among cancer survivors.
Funding: Dutch Cancer Society (KWF Kankerbestrijding).
abstract_id: PUBMED:34797953
Internalized stigma among cancer patients enrolled in a smoking cessation trial: The role of cancer type and associations with psychological distress. Purpose: Cancer patients who smoke may experience significant stigma due both to their disease and to negative attitudes and beliefs regarding smoking. We investigated whether internalized stigma differed between currently smoking cancer patients diagnosed with lung or head and neck cancers, other smoking-related cancers, and non-smoking-related cancers, and whether internalized stigma was associated with psychological distress.
Methods: This cross-sectional analysis used baseline data on 293 participants enrolled in a multi-site randomized smoking cessation intervention trial of patients with recently diagnosed cancer. Internalized stigma was assessed using five Internalized Shame items from the Social Impact of Disease Scale. Smoking-related cancers included lung, head and neck, esophageal, bladder, kidney, liver, pancreatic, colorectal, anal, small intestinal, gastric, and cervical. We used multivariable linear regression to examine whether mean internalized stigma levels differed between individuals with lung and head and neck cancers, other smoking-related cancers, and non-smoking-related cancers, adjusting for potential confounders. We further examined the association of internalized stigma with depression, anxiety, and perceived stress, overall and among cancer type groups.
Results: Thirty-nine percent of participants were diagnosed with lung or head and neck cancer, 21% with another smoking-related cancer, and 40% with a non-smoking-related cancer. In multivariable-adjusted models, participants with lung or head and neck cancers (11.6, 95% confidence interval (CI) = 10.8-12.2; p < 0.0001) or other smoking-related cancers (10.7, 95% CI = 9.8-11.7; p = 0.03) had higher mean internalized stigma scores compared to those with non-smoking-related cancers (9.3, 95% CI = 8.6-10.0). We observed similar positive associations between internalized stigma and depressive symptoms, anxiety, and perceived stress among participants with smoking-related and non-smoking-related cancers.
Conclusions: Among smokers, those with smoking-related cancers experienced the highest levels of internalized stigma, and greater internalized stigma was associated with greater psychological distress across cancer types. Providers should assess patients for internalized and other forms of stigma, refer patients for appropriate psychosocial support services, and address stigma in smoking cessation programs.
abstract_id: PUBMED:29959792
Barriers to active self-management following treatment for head and neck cancer: Survivors' perspectives. Objective: Active self-management practices may help head and neck cancer (HNC) survivors to deal with challenges to their physical, functional, social, and psychological well-being presented by HNC and its treatment. This study investigates the factors perceived by HNC survivors to act as barriers to their active self-management following primary treatment.
Methods: In this qualitative study, 27 HNC survivors identified through 4 designated cancer centres in Ireland participated in face-to-face semistructured interviews. Interviews were audio-recorded, transcribed, and analysed using thematic analysis.
Results: Four themes (and associated subthemes) describing barriers to survivors' active self-management were identified: emotional barriers (eg, fear of recurrence), symptom-related barriers (eg, loss of taste), structural barriers (eg, access to appropriate health services), and self-evaluative barriers (eg, interpersonal self-evaluative concerns).
Conclusions: This is the first study to describe HNC survivors' views about barriers to their active self-management after treatment. The findings have important implications for self-management research and intervention development concerning HNC survivorship.
abstract_id: PUBMED:37248541
Health behaviors among head and neck cancer survivors. Purpose: To determine to what extent head and neck cancer (HNC) survivors participate in health behaviors (HBs) recommended by the National Cancer Center Network (NCCN®).
Methods: Participants were identified through the tumor registries at the Abramson Cancer Center (ACC), University of Pennsylvania, and affiliated sites. Eligibility: (a) diagnosis and treatment of HNC; (b) aged 18 to 70 years; (c) ≥ 1-year post-diagnosis; (d) human papillomavirus (HPV) status confirmed; (e) ability to understand written English. Potential participants received an explanation of the study, an informed consent form, a self-reported questionnaire, and a self-addressed stamped envelope.
Results: 451 individuals eligible, 102 (23%) agreed to participate, HPV positive (74%). Current smoking rare (7%), historical use common (48%). Current alcohol use common (65%), average 2.1 drinks/day, 12 days/month. 22% binge drank with an average of 3.5 binge-drinking sessions per month. Nutritional behavior mean 7.1 (range 0-16), lower scores indicating better nutrition. Body mass index (BMI) 59% overweight/obese. Adequate aerobic exercise 59%, adequate strength and flexibility 64%. Leisure time activity, 18% sedentary, 19% moderately active, 64% active. All participants reported having a primary care physician, 92% seen in the previous 12 months.
Conclusions: Most HNC survivors participated in some HBs. Current smoking rarely reported, binge drinking and high BMI most common negative HBs. Opportunities remain to improve dietary and exercise behaviors.
Implications For Cancer Survivors: The NCCN® has outlined HBs that decrease likelihood of cancer survivors developing comorbidities that could impact overall survival. It is incumbent on healthcare providers to educate and encourage cancer survivors to participate in these HBs.
abstract_id: PUBMED:10586379
Smoking and drinking behavior in patients with head and neck cancer: effects of behavioral self-blame and perceived control. Patients who continue to use tobacco or alcohol following treatment for head and neck cancers are at greater risk for cancer recurrence and mortality. The present study examined the effects of behavioral self-blame and perceived control over health on smoking and alcohol use in a sample of 55 patients with cancers of the head and neck. Measures of self-blame, perceived control, and depression were administered and an assessment of past and current smoking and drinking behavior was obtained. As anticipated, continued smoking after completion of oncologic treatment was predicted by the interaction of behavior specific self-blame and perceived control. Patients who attributed the cause of their cancer to their past substance use exhibited a lower likelihood of smoking only if they also held the expectancy that their future cancer-related health was contingent on their own behavior. Among patients not holding the belief that cancer recurrence was contingent on their own actions, self-blame was associated with a higher probability of continued smoking. Self-blame and perceived control had no effect on continued alcohol use.
Answer: Partly. Behavioral self-blame significantly predicted positive health changes (PHCs), such as changes in diet, exercise, and tobacco consumption, whereas stigma did not predict PHCs overall; however, both behavioral self-blame and stigma significantly predicted changes in tobacco consumption (PUBMED:23544675). This suggests that survivors who attribute their disease to their own behavior are more likely to engage in PHCs, while those who do not make such attributions are less likely to do so.
Additionally, stigma has been shown to correlate significantly and uniquely with negative psychosocial impact, but it is less common than generally believed. Stigma was found to be higher in men than women, in lung cancer compared to head and neck cancer, and in people who were highly disfigured by cancer and/or its treatment. Benefit finding buffered stigma's deleterious effects, and illness intrusiveness was a partial mediator of its psychosocial impact (PUBMED:21932417).
Therefore, while behavioral self-blame can be a motivator for positive health behavior changes, stigma can have a negative psychosocial impact. Interventions that encourage survivors to remain involved in valued activities and roles and to use benefit finding can help limit the negative effects of stigma.
Instruction: Does a regional comprehensive palliative care program improve pain in outpatient cancer patients?
Abstracts:
abstract_id: PUBMED:24705857
Does a regional comprehensive palliative care program improve pain in outpatient cancer patients? Context: Pain is still a major problem for cancer patients, and the effect of a population-based approach on patients' experience of pain is not fully understood.
Aims: The primary aim of this study was to clarify the changes in pain intensity in outpatients before and after a regional palliative care program. The secondary aim was to clarify the prevalence of patients who had unmet needs for pain treatment and to clarify the reasons for not wanting pain treatment.
Subjects And Methods: A regional palliative care program was implemented in four regions of Japan. A region-representative sample of metastatic/locally advanced cancer patients in outpatient settings took part in questionnaire surveys before and after the regional intervention. Responses were obtained from 859 of 1,880 patients in the preintervention survey and 857 of 2,123 in the postintervention survey.
Results: After the regional palliative care program, neither worst, average, nor least pain levels in outpatients changed significantly. A total of 134 patients (16%) reported that they needed more pain treatment. There were various reasons for not wanting pain treatment, namely, minimal interference with daily life, a general nonpreference for medicines, longstanding symptoms before the diagnosis of cancer, concerns about tolerance and addiction, and neuropsychiatric symptoms experienced under current medications.
Conclusion: The regional palliative care program failed to demonstrate improvement in the pain intensity of cancer outpatients. One possible interpretation is that outpatients are less likely to be regarded as a target population for such programs and that the study population's pain was already generally well controlled. Future studies including patients with more severe pain are needed, but to improve pain levels of cancer outpatients, intensive, patient-directed intervention seems more promising than region-based intervention.
abstract_id: PUBMED:16802127
Bringing palliative care to a Canadian cancer center: the palliative care program at Princess Margaret Hospital. It is increasingly recognized that complete care of the patient with cancer includes palliative care, which is applicable early in the course of illness, in conjunction with life-prolonging treatment. Princess Margaret Hospital (PMH) is Canada's largest center for cancer care and research, and it is an international referral center for patients with cancer. The Palliative Care Program at PMH has developed into a comprehensive clinical, educational, and research program, with an acute palliative care unit, daily palliative care clinics, a cancer pain clinic, and a consultation service that sees urgent consultations on a same-day basis in inpatient and outpatient areas. We will describe the components, successes, and challenges of our program, which may be useful for others, who are developing palliative care programs in an academic setting.
abstract_id: PUBMED:29410087
Characteristics of Unscheduled and Scheduled Outpatient Palliative Care Clinic Patients at a Comprehensive Cancer Center. Context: There is limited literature regarding outpatient palliative care and factors associated with unscheduled clinic visits.
Objectives: To compare characteristics of patients with unscheduled vs. scheduled outpatient palliative care clinic visits.
Methods: Medical records of 183 new unscheduled cancer outpatients and 104 unscheduled follow-up (FU) patients were compared with random samples of 361 scheduled new patients and 314 scheduled FU patients, respectively. We gathered data on demographics, symptoms, daily opioid usage, and performance status.
Results: Compared with scheduled new patients, unscheduled new patients had worse Edmonton Symptom Assessment Scale subscores for pain (P < 0.001), fatigue (P = 0.002), nausea (P = 0.016), depression (P = 0.003), anxiety (P = 0.038), drowsiness (P = 0.002), sleep (P < 0.001), and overall feeling of well-being (P = 0.001); had a higher morphine equivalent daily dose of opioids (median of 45 mg for unscheduled vs. 30 mg for scheduled; P < 0.001); and were more likely to be from outside the greater Houston area (P < 0.001). Most unscheduled and scheduled new and FU visits were for uncontrolled physical symptoms. Unscheduled FU patients, compared with scheduled FU patients, had worse Edmonton Symptom Assessment Scale subscores for pain (P < 0.001), fatigue (P < 0.001), depression (P = 0.002), anxiety (P = 0.004), drowsiness (P = 0.010), appetite (P = 0.023), sleep (P = 0.022), overall feeling of well-being (P < 0.001), and higher morphine equivalent daily dose of opioid (median of 58 mg for unscheduled FU visits vs. 40 mg for scheduled FU visits; P = 0.054).
Conclusion: Unscheduled new and FU patients have higher levels of physical and psychosocial distress and higher opioid intake. Outpatient palliative care centers should consider providing opportunities for walk-in visits for timely management and close monitoring of such patients.
abstract_id: PUBMED:37344868
Impact of an outpatient palliative care consultation and symptom clusters in terminal patients at a tertiary care center in Pakistan. Background: Patients with terminal diseases may benefit physically and psychosocially from an outpatient palliative care visit. Palliative care services are limited in Pakistan. An improved understanding of the symptom clusters present in our population is needed. The first outpatient palliative care center in Karachi, Pakistan, was established at our tertiary care institution. The primary aim of this study was to evaluate the impact of a palliative care outpatient consultation on symptom burden in patients with a terminal diagnosis. The secondary aim was to analyze the symptom clusters present in our population.
Methods: Patients with a terminal diagnosis referred to our outpatient palliative department between August 2020-August 2022 were enrolled. The Edmonton Symptom Assessment Scale (ESAS) questionnaire was administered at the initial visit and the first follow-up visit at one month. Change in symptom burden was assessed using a Wilcoxon signed ranks test. A principal component analysis with varimax rotation was performed on the symptoms reported at the initial visit to evaluate symptom clusters. The palliative performance scale (PPS) was used to measure the performance status of palliative care patients.
Results: Among the 78 patients included in this study, the average age was 59 ± 16.6 years, 52.6% were male, 99% of patients had an oncological diagnosis, and the median duration between the two visits was 14 days (Q1-Q3: 7.0-21.0). The median PPS level was 60% (Q1-Q3: 50-70). Overall, ESAS scores decreased between the two visits (6.0 (2.8, 11.0), p < 0.001) with statistically significant improvement in pain (5.0 vs. 2.5, p < 0.001), loss of appetite (5.0 vs. 4.0, p = 0.004), depression (2.0 vs. 0.0, p < 0.001), and anxiety (1.5 vs. 0.0, p = 0.032). Based on symptoms at the initial visit, 3 clusters were present in our population. Cluster 1 included anxiety, depression, and wellbeing; cluster 2 included nausea, loss of appetite, tiredness, and shortness of breath; and cluster 3 included drowsiness.
Conclusion: An outpatient palliative care visit significantly improved symptom burden in patients with a terminal diagnosis. Patients may benefit from further development of outpatient palliative care facilities to improve the quality of life in terminally ill patients.
abstract_id: PUBMED:14718327
The comprehensive care team: a controlled trial of outpatient palliative medicine consultation. Background: Little is known about the use of palliative care for outpatients who continue to pursue treatment of their underlying disease or whether outpatient palliative medicine consultation teams improve clinical outcomes.
Methods: We conducted a year-long controlled trial involving 50 intervention patients and 40 control patients in a general medicine outpatient clinic. Primary care physicians referred patients with advanced congestive heart failure, chronic obstructive pulmonary disease, or cancer who had a prognosis ranging from 1 to 5 years. In the intervention group, the primary care physicians received multiple palliative care team consultations, and patients received advance care planning, psychosocial support, and family caregiver training. Clinical and health care utilization outcomes were assessed at 6 and 12 months.
Results: Groups were similar at baseline. Similar numbers of patients died during the study year (P =.63). After the intervention, intervention group patients had less dyspnea (P =.01) and anxiety (P =.05) and improved sleep quality (P =.05) and spiritual well-being (P =.007), but no change in pain (P =.41), depression (P =.28), quality of life (P =.43), or satisfaction with care (P =.26). Few patients received recommended analgesic or antidepressant medications. Intervention patients had decreased primary care (P =.03) and urgent care visits (P =.04) without an increase in emergency department visits, specialty clinic visits, hospitalizations, or number of days in the hospital. There were no differences in charges (P =.80).
Conclusions: Consultation by a palliative medicine team led to improved patient outcomes in dyspnea, anxiety, and spiritual well-being, but failed to improve pain or depression. Palliative care for seriously ill outpatients can be effective, but barriers to implementation must be explored.
abstract_id: PUBMED:25131891
Avoidable and unavoidable visits to the emergency department among patients with advanced cancer receiving outpatient palliative care. Context: Admissions to the emergency department (ED) can be distressing to patients with advanced cancer receiving palliative care. There is limited research about the clinical characteristics of these patients and whether these ED visits can be categorized as avoidable or unavoidable.
Objectives: To determine the frequency of potentially avoidable ED visits (AvEDs) for patients with advanced cancer receiving outpatient palliative care in a large tertiary cancer center, identify the clinical characteristics of the patients receiving palliative care who visited the ED, and analyze the factors associated with AvEDs and unavoidable ED visits (UnAvEDs).
Methods: We randomly selected 200 advanced cancer patients receiving treatment in the outpatient palliative care clinic of a tertiary cancer center who visited the ED between January 2010 and December 2011. Visits were classified as AvED (if the problem could have been managed in the outpatient clinic or by telephone) or UnAvED.
Results: Forty-six (23%) of 200 ED visits were classified as AvED, and 154 (77%) of 200 ED visits were classified as UnAvED. Pain (71/200, 36%) was the most common chief complaint in both groups. Altered mental status, dyspnea, fever, and bleeding were present in the UnAvED group only. Infection, neurologic events, and cancer-related dyspnea were significantly more frequent in the UnAvED group, whereas constipation and running out of pain medications were significantly more frequent in the AvED group (P < 0.001). In a multivariate analysis, AvED was associated with nonwhite ethnicity (odds ratio [OR] 2.66; 95% CI 1.26, 5.59) and constipation (OR 17.08; 95% CI 3.76, 77.67), whereas UnAvED was associated with ED referral from the outpatient oncology or palliative care clinic (OR 0.24; 95% CI 0.06, 0.88) and the presence of baseline dyspnea (OR 0.46; 95% CI 0.21, 0.99).
Conclusion: Nearly one-fourth of ED visits by patients with advanced cancer receiving palliative care were potentially avoidable. Proactive efforts to improve communication and support between scheduled appointments are needed.
abstract_id: PUBMED:21217998
Palliative cancer care ethics: principles and challenges in the Indian setting. Palliative cancer treatment is a system of care that seeks to relieve suffering in patients with progressive cancer. Given the intractable symptoms with which certain malignancies manifest, palliative care offers a practical approach towards improving the patient's quality of life. However, there is an array of ethical issues associated with this treatment strategy, such as particular methods of pain relief, reliable assessment of suffering, autonomy, and multi-specialist care. While these principles are important to increase and improve the network of palliative care, resource-poor Indian environments present numerous barriers to their practical application. As the infrastructure of comprehensive cancer centers develops, paralleled by an increase in the training of palliative care professionals, significant improvements need to be made in order to elevate the status of palliative cancer care in India.
abstract_id: PUBMED:33691072
Family Caregiver Problems in Outpatient Palliative Oncology. Background: Understanding challenges of family caregivers within specific palliative care contexts is needed. Objective: To describe the challenges of family caregivers of patients with cancer who receive outpatient palliative care. Methods: We summarized the most common and most challenging problems for 80 family caregivers of cancer patients receiving outpatient palliative care in the midwestern United States. Results: Caregiver worry and difficulty managing side effects or symptoms other than pain, constipation, and shortness of breath were most common. "Financial concerns" was cited most as a "top 3" problem. Almost half of caregivers reported "other" problems, including family members, patient physical function, care coordination, and patient emotional state. Conclusions: The most common and most challenging problems of family caregivers of cancer patients receiving outpatient palliative care may differ from those experienced in other serious illness care contexts. Comparative studies on caregiver problems across the cancer care continuum can help develop and refine interventions.
abstract_id: PUBMED:37552851
Effectiveness of Long-Term Opioid Therapy for Chronic Pain in an Outpatient Palliative Medicine Clinic. Background: Despite widespread use of opioid therapy in outpatient palliative medicine, there is limited evidence supporting its efficacy and safety in the long term. Objectives: We sought to improve overdose risk scores, maintain pain reduction, and preserve patient function in a cohort with severe chronic pain as we managed opioid therapy for a duration of four years in an outpatient palliative care clinic. Design: Over four years, we provided ongoing goal-concordant outpatient palliative care, including opioid therapy, using quarterly clinical encounters for a patient cohort with chronic pain. Setting/Subjects: The project took place in the outpatient palliative medicine clinic of a regional cancer center in Orlando, Florida (United States). The subjects were a cohort group who received palliative care during the time period between July 2018 and October 2022. Measurements: Key metrics included treatment-related reduction in pain intensity, performance scores, and overall overdose risk scores. Secondary metrics included cohort demographics, average daily opioid use in morphine milligram equivalents and categorization of type of pain. Results: In 97 patients, we observed a stable mean treatment-related reduction in pain intensity of 4.9 out of 10 points over four years. The cohort showed a 2-point (out of 100) improvement in performance scores and an 81-point (out of 999) reduction in mean overall overdose risk score. Conclusions: We present evidence that providing outpatient palliative care longitudinally over four years offered lasting treatment-related reductions in pain intensity, preservation of performance status, and reduction in overall overdose risk.
abstract_id: PUBMED:34757271
What is the impact of specialist palliative care outpatient consultations on pain in adult patients with cancer? A systematic review. Purpose: The aim of this systematic review was to determine the impact of specialist palliative care (SPC) consultations in outpatient settings on pain control in adults suffering from cancer.
Methods: Systematic Review. Databases CINAHL, Medline, PsychInfo, and Embase were searched in February 2021. Relevant studies were also hand-searched and gray literature was searched in February 2021. The PICO mnemonic (Population, Intervention, Comparison, and Outcome) was used to form the review question. Of 1053 potential studies identified, 10 met the inclusion criteria. Quality appraisal of included studies was conducted using the evidence-based librarian (EBL) critical appraisal checklist.
Results: Outcome data from 56% (n = 5/9) of studies indicated a non-statistically significant reduction in pain. Narrative analysis of the remaining studies indicated a statistically significant reduction in pain in 50% (n = 2/4) of the studies, one study showed mixed results, and one study found no statistically significant improvement in pain control. In relation to secondary outcomes, results from 33% (3/9) of studies indicated statistically significant improvement in symptom control. Data from 22% (n = 2/9) of studies indicated no statistically significant improvement in the symptoms measured. Narrative analysis of the remaining four studies indicated generally mixed results. EBL scores of included studies ranged between 50% and 95.23%.
Conclusion: Outpatient SPC consultations may have a positive impact on the control of pain and other distressing symptoms for cancer patients; however, results show mixed effects, and it remains unclear what it is about outpatient SPC that impacts positively or otherwise on pain and symptom control.
Answer: The impact of regional comprehensive palliative care programs on pain in outpatient cancer patients appears to be mixed and not fully conclusive. According to the study by PUBMED:24705857, a regional palliative care program in Japan did not demonstrate a significant improvement in pain intensity for cancer outpatients. The study suggested that more intensive, patient-directed interventions might be more effective than region-based interventions for this population.
In contrast, the study from a tertiary care center in Pakistan (PUBMED:37344868) reported that an outpatient palliative care visit significantly improved symptom burden, including pain, in patients with a terminal diagnosis. This suggests that outpatient palliative care can be beneficial in managing pain for terminally ill patients in certain settings.
The systematic review (PUBMED:34757271) also presents mixed results, with some studies indicating a non-statistically significant reduction in pain, while others showed a statistically significant reduction or mixed results. The review suggests that outpatient specialist palliative care consultations may have a positive impact on pain control for cancer patients, but the evidence is not definitive.
Overall, while some individual studies report improvements in pain management with outpatient palliative care interventions, the evidence does not uniformly support the effectiveness of regional comprehensive palliative care programs in improving pain for all outpatient cancer patients. Further research, including patient populations with more severe pain and different intervention strategies, may be needed to fully understand the impact of such programs on pain management in outpatient cancer patients.
Instruction: Cryptogenic localization related epilepsy in children from a tertiary outpatient clinic: is neurological and neuropsychological outcome predictable?
Abstracts:
abstract_id: PUBMED:17451873
Cryptogenic localization related epilepsy in children from a tertiary outpatient clinic: is neurological and neuropsychological outcome predictable? Objectives: Up to one-third of the children with epilepsy are diagnosed with cryptogenic localization related epilepsy (CLRE). As yet, there is a lack of studies that specify the short- and long-term prognosis for this group. In this study, we systematically established neurological outcome (represented by seizure frequency) as well as neuropsychological outcome in a cohort of 68 children with CLRE who had been referred to our tertiary outpatient clinic. Also, we analysed correlations with risk and prognostic factors.
Patients And Methods: A systematic cross-sectional open clinical and non-randomized design was used including 68 children admitted to our epilepsy centre in a child neurological programme between January 1999 and December 2004. A model was defined, distinguishing risk factors with a potential effect on epileptogenesis (history of febrile seizures, family history of epilepsy, history of early mild development delay and serious diagnostic delay) and prognostic factors, with a potential effect on the course of the epilepsy (neurological symptoms or soft signs, age at onset, duration of epilepsy, seizure type, percentage of time with epileptiform activity, localization of epileptiform activity, treatment history and treatment duration). Seizure frequency was used as the primary outcome variable, whereas three neuropsychological outcomes (IQ, psychomotor delay and educational delay) were used as secondary outcome variables.
Results: The children experienced a broad range of seizure types, with the 'absence-like' complex partial seizure being the most commonly occurring type. Almost half of the children in the study sample had a high seizure frequency, experiencing several seizures per month or week, or even daily seizures. A substantial impact on neuropsychological outcome was also observed: mean full-scale IQ was 87.7, mean academic delay was almost 1 school year, and 27 children showed psychomotor delay on the Movement ABC. Only 'having more than one seizure type' showed prognostic value for seizure frequency, and no factors were found to be correlated with the secondary outcome measures. None of the risk factors showed a differential impact on seizure outcome.
Conclusion: CLRE has a non-predictable course; clinical variability is high and the prognosis in many children with CLRE remains obscure. Having more than one seizure type was the only factor correlated with seizure frequency. Further longitudinal studies are needed.
abstract_id: PUBMED:18229655
Neuropsychological assessment in newly diagnosed cryptogenic partial epilepsy in children--a pilot study. Purpose: Cryptogenic epilepsy (CE) is defined as a partial or generalized epilepsy syndrome in which no underlying cause can be identified. Neuropsychological assessment of "non-lesional" epilepsies is crucial not only for better monitoring of different medical treatments but also for understanding the impact of epilepsy on cognitive functions. The aim of the study was to compare intellectual and cognitive functions between children with newly diagnosed cryptogenic partial epilepsy (CPE) and a healthy control group.
Material And Methods: 184 participants, 89 patients with cryptogenic partial epilepsy and 95 healthy children and adolescents, aged 6-16 years, were assessed on neuropsychological tests of general intellectual functioning and selected cognitive skills.
Results: Significant differences were found between groups for four examined functions. Children with CPE scored significantly lower in verbal and categorial fluency, visuoconstructional tasks, and learning and memory than the group of healthy children. There was no difference in general IQ level.
Conclusions: Studying the neuropsychological profile in newly diagnosed CPE can provide information on the influence of stable, illness-related factors and of paroxysmal activity on cognitive function. Neurological follow-up of children with CPE from the very beginning of diagnosis should include screening evaluation of cognitive functions to allow appropriate intervention.
abstract_id: PUBMED:18568779
Neuropsychological profile of children with cryptogenic localization related epilepsy. Up to one third of the epilepsy population consists of children with cryptogenic localization related epilepsy (CLRE). Unfortunately, the effect of CLRE on development is still unclear. Behavioral and academic problems have been reported, but no conclusive study concerning the impact of CLRE on neuropsychological functioning has yet been published. This study was a systematic cross-sectional open clinical and nonrandomized investigation, which included 68 children with CLRE. Several neuropsychological tests were analyzed, and age-related normative values were used as reference. Differences between CLRE and reference values were tested with paired-samples t-tests. Z scores were computed to compare the different neuropsychological tests and to inspect whether a characteristic neuropsychological profile exists for CLRE. The independent-samples t-test was used to explore which epilepsy factors (seizure type, seizure frequency, age at onset, duration of epilepsy, and drug load) influence the cognitive profile of CLRE. There seems to be a characteristic cognitive profile for children with CLRE; these children experience cognitive difficulties in a wide range of areas, in particular alertness, mental speed, and memory. Seizure type, seizure frequency, duration of epilepsy, and drug load do not influence this neuropsychological profile. Age at onset was an important risk factor: the earlier the age at onset, the worse the cognitive performance. In spite of the influence of age at onset, the revealed profile can be seen as a stable neuropsychological profile for children with CLRE, independent of temporary factors.
abstract_id: PUBMED:21941845
Neuropsychological profile of children with cryptogenic localization-related epilepsy and its association with age at onset. The aim of this study was to investigate the neuropsychological profile of children with cryptogenic localization-related epilepsy (CLRE). Neuropsychological evaluations were performed in 16 children with CLRE and, as controls, 14 children with idiopathic localization-related epilepsy (ILRE) within 8 months (average 2.1 months) of the initial seizure. The neuropsychological tests used in this study were the Wechsler Intelligence Scale for Children-Third Edition, the Wechsler Intelligence Scale for Children-Revised, and the Wechsler Preschool and Primary Scale of Intelligence. Age at onset and at testing differed significantly between CLRE and ILRE, while the duration between onset and testing and the number of seizures before testing did not. No marked difference was observed in the neuropsychological profile between the 2 groups; however, the discrepancy between VIQ and PIQ was significantly larger in CLRE than in ILRE. This discrepancy was negatively correlated with age at seizure onset (r = -0.615, p = 0.011). The laterality of the discrepancy between VIQ and PIQ was associated with the dominance of interictal discharge. In conclusion, children with a lower age at seizure onset were likely to have a larger discrepancy between VIQ and PIQ.
abstract_id: PUBMED:26773674
Neuropsychological profiles and outcomes in children with new onset frontal lobe epilepsy. Frontal lobe epilepsy (FLE) is the second most frequent type of localization-related epilepsy, and it may impact neurocognitive functioning with high variability. The prevalence of neurocognitive impairment in affected children remains poorly defined. This report outlines the neuropsychological profiles and outcomes in children with new onset FLE, and the impact of epilepsy-related factors, such as seizure frequency and antiepileptic drug (AED) load, on the neurocognitive development. Twenty-three consecutive children (15 males and 8 females) with newly diagnosed cryptogenic FLE were enrolled; median age at epilepsy onset was 7 years (6-9.6 years). They underwent clinical and laboratory evaluation and neuropsychological assessment before starting AED treatment (time 0) and after one year of treatment (time 1). Twenty age-matched patients affected by idiopathic generalized epilepsy (10 male and 10 females) and eighteen age-matched healthy subjects (9 males and 9 females) were enrolled as controls and underwent the same assessment. All patients with FLE showed a significant difference in almost all assessed cognitive domains compared with controls, mainly in frontal functions and memory. At time 1, patients were divided into two groups according to epilepsy-related factors: group 1 (9 patients) with persisting seizures despite AED polytherapy, and group 2 (14 patients) with good seizure control in monotherapy. A significant difference was highlighted in almost all subtests in group 1 compared with group 2, both at time 0 and at time 1. In children with FLE showing a broad range of neurocognitive impairments, the epilepsy-related factors mostly related to a worse neurocognitive outcome are poor seizure control and the use of AED polytherapy, suggesting that epileptic discharges may have a negative impact on the functioning of the involved cerebral regions.
abstract_id: PUBMED:30797670
A neuropsychological model for the pre-surgical evaluation of children with focal-onset epilepsy: An integrated approach. This review explores the complexities of pre-surgical neuropsychological assessment for children with focal-onset epilepsy. A model is proposed outlining a range of factors that potentially influence the neuropsychological formulation. These factors include a developmental, epilepsy, psychological and cognitive dimension, together with family and social context and intrinsic factors. This model is child-centered and recognizes that these factors will be weighted differently for each individual. In some instances the neuropsychological profile might suggest localized and lateralized function, but there are significant limitations to this approach in the context of the contemporary view of epilepsy as a network disorder. This review recognizes that a range of issues impact on neuropsychological function in children with focal-onset epilepsy, including the connectivity between neural systems and the dynamic nature of development. The aim of this review is to provide a neuropsychological framework to enhance and support clinical decision-making in the pre-surgical evaluation of children with focal-onset epilepsy.
abstract_id: PUBMED:14738420
Resective surgery for intractable focal epilepsy in patients with low IQ: predictors for seizure control and outcome with respect to seizures and neuropsychological and psychosocial functioning. Purpose: To investigate possible predictive factors for seizure control in a group of children and adults with low IQs (IQ ≤ 70) who underwent resective surgery for intractable focal epilepsy and to study outcome with respect to seizures and neuropsychological functioning. We also studied psychosocial outcome in the adult patients.
Methods: Thirty-one patients (eight children younger than 18 years) with a Wechsler Full Scale IQ of 70 or less underwent comprehensive neuropsychological assessments before and 2 years after surgery. Adults also completed the Washington Psychosocial Seizure Inventory (WPSI). Univariate analyses were used to identify variables differentiating between patients who became seizure free and those who did not. Pre- and postoperative test results were compared by t test for dependent samples.
Results: Forty-eight percent of the patients became seizure free: 52% of those with temporal lobe resection and 38% of those with extratemporal resection. Only one variable was predictive of seizure outcome: duration of epilepsy. In the third of patients with the shortest duration of epilepsy (<12 years), 80% became seizure free. Significant improvement was seen in vocational adjustment in adults (WPSI). Seizure-free adults improved their Full Scale IQ scores. No cognitive changes were found in seizure-free children or in patients who did not become seizure free.
Conclusions: A good seizure outcome was obtained after resective surgery in patients with intractable focal epilepsy and low IQ, provided that treatment was done relatively shortly after onset of epilepsy. No adverse effects were seen on cognitive and psychosocial functioning.
abstract_id: PUBMED:19168434
Behavioral status of children with cryptogenic localization-related epilepsy. Using the Child Behavior Checklist, the behavior of 51 children with cryptogenic localization-related epilepsy was studied. According to parent report, children with cryptogenic localization-related epilepsy scored in the clinical range on the subscales "internalizing behavior," "total behavior," and "attentional problems." No relation was found between the epilepsy factors (seizure frequency, age at onset, duration of epilepsy, or number of antiepileptic drugs) and the subscales of the Child Behavior Checklist. A relationship was found only for seizure type: although scores remained in the normal range, the more severe the seizure type, the more delinquent, aggressive, and externalizing behavioral problems were reported. Other studies have demonstrated that in children with epilepsy, internalizing problems are more common than externalizing problems, and that attentional, social, and thought problems are relatively specific. Therefore, we can conclude that the behavioral problems we found in our cohort are not very different from the behavioral problems described in other epilepsy types.
abstract_id: PUBMED:29223473
Cryptogenic West syndrome: Clinical profile, response to treatment and prognostic factors. Introduction: West syndrome (WS) is an age-dependent epileptic encephalopathy in which the prognosis varies according to the underlying origin, which is not always identified.
Objectives: To define the profile of cryptogenic WS, a less-studied isolated subgroup, in Spain; to study its outcome and response to different treatments; and to establish prognostic factors.
Patients And Methods: The study included a review of the medical records of 16 patients diagnosed with cryptogenic WS during the period 2000-2015. The mean follow-up time was 6.6 years, with a minimum of 2 years.
Results: The large majority (11/16) were male. The mean age at onset was 6 months, and 6/16 had a family history of idiopathic epilepsy. First-line treatment with vigabatrin produced an electroclinical response in 5/16 patients, with the remaining cases responding to adrenocorticotropic hormone (ACTH). Almost half (44%) of the patients progressed to other types of epilepsy, with no difference between those treated with vigabatrin or ACTH. A greater number of adverse effects occurred with ACTH, while no retinal involvement was observed with vigabatrin. An aetiological cause was found in 2/16. Female sex, late onset, and early control of the hypsarrhythmia were factors associated with a good prognosis.
Conclusions: The overall prognosis of cryptogenic WS was more serious than expected. Although the incidence of Lennox-Gastaut syndrome was low, progression to focal epilepsy was the most common outcome, appearing within the first 2 years after diagnosis. The initial response to vigabatrin was lower than expected, but the long-term result was comparable to that of ACTH.
abstract_id: PUBMED:22748634
Cognitive and behavioural findings in children with frontal lobe epilepsy. Background: Frontal Lobe Epilepsy (FLE) is the second most frequent type of partial epilepsy and its onset is generally in childhood. Though cognitive and behavioural impairments have been described as co-morbid disorders in epilepsy, their extent in FLE, particularly in children, remains unknown.
Aims: In this study, we assess cognitive skills and behaviour in a cohort of paediatric FLE patients.
Methods: We measured the performance of 71 children with cryptogenic FLE on intelligence tests, neuropsychological tests, and behavioural questionnaires. Age-dependent normative values were used for reference. Results were related to epilepsy-factors including age at epilepsy onset, duration of epilepsy, seizure frequency, localisation of the epileptic focus and drug load.
Results: Paediatric FLE patients performed worse on intellectual and neuropsychological tests compared to reference values, and had a delay in school achievement. The performance of patients was typically worse on tasks measuring visual-spatial functions, memory, psychomotor speed and alertness. High seizure frequency was associated with lower scores on the arithmetic subtest of the intelligence scale; the other epilepsy-factors had no statistically significant influence on intelligence test or neuropsychological test outcome. Behavioural problems included attention problems, anxiety and internalising behaviour. These were not significantly related to epilepsy-factors.
Conclusions: Children with cryptogenic FLE show a broad range of cognitive and behavioural impairments compared with reference values. While high seizure frequency may affect performance on selected cognitive measures, other epilepsy-factors do not seem to influence cognition and behaviour. Studies of the micro-structural or functional brain abnormalities that underlie these cognitive and behavioural impairments are warranted.
Answer: The neurological and neuropsychological outcomes for children with cryptogenic localization-related epilepsy (CLRE) from a tertiary outpatient clinic are not easily predictable. Studies have shown that children with CLRE experience a broad range of seizure types and frequencies, with almost half of the children experiencing high seizure frequencies, including several seizures per month, week, or even daily seizures (PUBMED:17451873). The neuropsychological outcomes also indicate a substantial impact, with a mean full-scale IQ of 87.7, a mean academic delay of almost 1 school year, and psychomotor delay observed in some children (PUBMED:17451873).
The only factor found to have prognostic value for seizure frequency was 'having more than one seizure type,' and no factors were found to be correlated with secondary outcome measures such as IQ, psychomotor delay, and educational delay (PUBMED:17451873). This suggests that the course of CLRE is highly variable and the prognosis for many children with CLRE remains unclear.
Additional studies have found that children with newly diagnosed cryptogenic partial epilepsy (CPE) scored significantly lower in verbal and categorical fluency, visuoconstructional tasks, learning, and memory compared to healthy children, although there were no differences in general IQ level (PUBMED:18229655). Another study revealed a characteristic cognitive profile for children with CLRE, with difficulties in alertness, mental speed, and memory, and found that age at onset was an important risk factor, with earlier onset associated with worse cognitive performance (PUBMED:18568779).
Furthermore, a study on children with cryptogenic frontal lobe epilepsy (FLE) showed significant differences in almost all assessed cognitive domains compared with controls, particularly in frontal functions and memory. Poor seizure control and the use of antiepileptic drug (AED) polytherapy were related to worse neurocognitive outcomes (PUBMED:26773674).
In summary, while certain factors such as having multiple seizure types and age at onset may provide some insight into the prognosis of children with CLRE, the overall neurological and neuropsychological outcomes are not easily predictable due to the high clinical variability and the influence of various factors on cognitive and behavioral functioning (PUBMED:17451873, PUBMED:18229655, PUBMED:18568779, PUBMED:26773674). |
Instruction: Do increased training requirements in gastrointestinal endoscopy and advanced laparoscopy necessitate a paradigm shift?
Abstracts:
abstract_id: PUBMED:19059172
Do increased training requirements in gastrointestinal endoscopy and advanced laparoscopy necessitate a paradigm shift? A survey of program directors in surgery. Background: Many modifications to the traditional residency model contribute to the ongoing paradigm shift in surgical education; yet, the frequency and manner by which such changes occur at various institutions is less clear. To address this issue, our study examined the variability in endoscopy and laparoscopy training, the potential impact of new requirements, and opinions of Program Directors in Surgery (PDs).
Methods: A 22-item online survey was sent to 251 PDs in the United States. Appropriate parametric tests determined significance.
Results: In all, 105 (42%) PDs responded. No difference existed in response rates among university (56.2%), university-affiliated/community (30.5%), or community (13.3%) program types (p = 0.970). Surgeons alone (46.7%) conducted most endoscopy training, with a trend toward multidisciplinary teams (43.8%). A combination of fellowship-trained minimally invasive surgeons and other surgeon types (66.7%) commonly provided laparoscopy training. For adequate endoscopy experience in the future, most PDs (74.3%) plan to require a formal flexible endoscopy rotation (p < 0.001). For laparoscopy, PDs intend to add more minimally invasive surgery (59%) as well as colon and rectal surgery (53.4%) rotations (both p < 0.001). Respondents feel residents will perform diagnostic endoscopy (86.7%) and basic laparoscopy (100%) safely on graduation. Fewer PDs confirm graduates will safely practice therapeutic endoscopy (12.4%) and advanced laparoscopy (52.4%). PDs believe increased requirements for endoscopy and laparoscopy will improve procedural competency (79% and 92.4%, respectively) and strengthen the fields of surgical endoscopy and minimally invasive surgery (55.2% and 68.6%, respectively). Fewer believe the new requirements necessitate a redesign of cognitive and technical skills curricula (33.3% endoscopy, 28.6% laparoscopy; p = 0.018). A national surgical education curriculum should be a required component of resident training, according to 79% of PDs.
Conclusions: PDs employ and may implement varied tools to meet the increased requirements in endoscopy and laparoscopy. With such variability in educational methodology, establishment of a national surgical education curriculum is very important to most PDs.
abstract_id: PUBMED:12695809
The utility of flexible endoscopy during advanced laparoscopy. Advanced laparoscopic techniques have continued to grow in prevalence for the treatment of gastrointestinal surgical conditions. The field of flexible endoscopy has also continued to push the boundaries of its capabilities with the advent of purely flexible endoscopic techniques, such as in the treatment of gastroesophageal reflux disease. This article illustrates how flexible endoscopy can be used in combination with laparoscopy in a diverse range of operations on the human foregut and hindgut, such as reflux operations, esophageal myotomies, gastric resections, peptic ulcer operations, colon resections, and pancreatic pseudocyst operations. These examples of the utility of flexible endoscopy during laparoscopy show the marriage of these two disciplines. To use flexible endoscopy adequately during laparoscopy, the surgeon needs to be skilled in flexible endoscopy, and the best way to maintain those skills is to use flexible endoscopy in daily practice.
abstract_id: PUBMED:23745073
Current status of advanced gastrointestinal endoscopy training fellowships in the United States. Rapid growth in the field of advanced gastrointestinal endoscopy has led to an increase in specialized therapeutic endoscopy fellowships. The cornerstones of these programs are training in endoscopic retrograde cholangiopancreatography (ERCP) and endoscopic ultrasound. These procedures are more complex and challenging to master than routine colonoscopy and upper endoscopy, and in the case of ERCP, higher risk. The concentration of the educational experience in the hands of relatively fewer trainees with specialized interest in advanced endoscopy has resulted in providing a focused cohort of graduating fellows with higher case volumes in training, which likely enhances diagnostic and therapeutic success and safer performance of these procedures. Endoscopic simulators, although not currently in widespread use, have the potential to improve advanced procedural training without jeopardizing patient safety.
abstract_id: PUBMED:27345646
Simulator training in gastrointestinal endoscopy - From basic training to advanced endoscopic procedures. Simulator-based gastrointestinal endoscopy training has gained acceptance over the last decades and has been extensively studied. Several types of simulators have been validated and it has been demonstrated that the use of simulators in the early training setting accelerates the learning curve in acquiring basic skills. Current GI endoscopy simulators lack the degree of realism that would be necessary to provide training to achieve full competency or to be applicable in certification. Virtual Reality and mechanical simulators are commonly used in basic flexible endoscopy training, whereas ex vivo and in vivo models are used in training the most advanced endoscopic procedures. Validated models for the training of more routine therapeutic interventions like polypectomy, EMR, stenting and haemostasis are lacking or scarce and developments in these areas should be encouraged.
abstract_id: PUBMED:1531106
Incorporation of laparoscopy into a surgical endoscopy training program. The impact of introducing laparoscopy as part of the overall gastrointestinal endoscopy case load performed by residents was reviewed. During 1990, there was a significant increase (56.9%) in the number of flexible diagnostic endoscopic procedures performed compared with 1989. When the total number of laparoscopic procedures was considered, the increase was 117%. Residents participated in the "surgeon's" position in 59% of the therapeutic laparoscopic procedures and as either surgeon or "first assistant" in 86% of all therapeutic laparoscopic procedures and 94% of all diagnostic laparoscopic procedures. Complication rates for diagnostic laparoscopic procedures were low in 1989 (0.03%) and 1990 (0.2%). Complication rates for therapeutic laparoscopic procedures were also low (4%). There was no difference in the complication rate for cases in which residents were in the surgeon's position (4%) versus cases in which they were not (4%). Introduction of laparoscopic procedures into a surgical residency program can be done safely, especially in cases in which an established program in endoscopy exists.
abstract_id: PUBMED:2147002
Complications of diagnostic gastrointestinal endoscopy. Undesired side effects and complications of gastrointestinal endoscopy and premedication are rare events. However, this is true only of endoscopic units with experienced investigators, modern equipment and monitoring. The complication rate of upper gastrointestinal endoscopy is about 0.1% with cardiopulmonary events predominating. The typical complication of colonoscopy is perforation, seen in 0.2%. The relevant ERCP specific complication is acute pancreatitis in about 1%, followed by acute cholangitis. The most serious complications of laparoscopy are hemorrhage from the liver biopsy site, bleeding from abdominal wall varices, and perforation of the colon. The cardiopulmonary mortality is low for upper gastrointestinal endoscopy as well as for colonoscopy (1 death/20,000 procedures). Premedication, chronic obstructive pulmonary disease, coronary heart disease, valvular heart disease and, last but not least, advanced age, must be considered risk factors for the development of complications of gastrointestinal endoscopy. Balanced indication, particularly in the elderly patient, should be the consequence. If possible, endoscopy should be performed without sedatives. If premedication is necessary, it should be used sparingly. Not only patients at high risk for the development of cardiopulmonary complications, but all patients undergoing endoscopy must be carefully monitored after premedication, during and after endoscopy. The non-invasive procedure of pulse-oximetry is appropriate for continuous monitoring of arterial oxygen saturation in patients with cardiopulmonary diseases, irrespective of their premedication status. Antibiotic prophylaxis is recommended in patients with valvular heart disease or prosthetic valves. Standardized cleaning and disinfection of the instruments is of great importance to avoid hepatitis B or HIV transfer.(ABSTRACT TRUNCATED AT 250 WORDS)
abstract_id: PUBMED:28783925
Education and Training Guidelines for the Board of the Korean Society of Gastrointestinal Endoscopy. The Korean Society of Gastrointestinal Endoscopy (KSGE) developed a gastrointestinal (GI) endoscopy board in 1995 and related regulations. Although the KSGE has acquired many specialists since then, the education and training aims and guidelines were insufficient. During GI fellowship training, obtaining sufficient exposure to some types of endoscopic procedures is difficult. Fellows should acquire endoscopic skills through supervised endoscopic procedures during GI fellowship training. Thus, the KSGE requires training guidelines for fellowships that allow fellows to perform independent endoscopic procedures without supervision. This document is intended to provide principles that the Committee of Education and Training of the KSGE can use to develop practical guidelines for granting privileges to perform accurate GI endoscopy safely. The KSGE will improve the quality of GI endoscopy by providing guidelines for fellowships and supervisors.
abstract_id: PUBMED:15580316
A comparative study of skills in virtual laparoscopy and endoscopy. Background: The present study was designed to investigate whether there is a correlation between manual skills in laparoscopic procedures and manual skills in flexible endoscopy.
Methods: In a prospective study using laparoscopy and endoscopy simulators (MIST-VR, and GI-Mentor II), 24 consecutive subjects (gastrointestinal surgeons, novice and experienced gastroenterologists, and untrained subjects) were asked to perform laparoscopic and endoscopic tasks. Their performance was assessed by the simulators' software and by observers blinded to the levels of subjects' experience. Performance in experienced vs inexperienced subjects was compared. Score pairs of three parameters--time, errors, and economy of movement--were also compared.
Results: Experienced subjects performed significantly better than inexperienced subjects on both tasks in terms of time, errors, and economy of movement (p < 0.05). All three performance parameters in laparoscopy and endoscopy correlated significantly (p < 0.02).
Conclusion: Both simulators can distinguish between experienced and inexperienced subjects. Observed skills in simulated laparoscopy correlate with skills in simulated flexible endoscopy. This finding may have an impact on the design of training programs involving both procedures.
abstract_id: PUBMED:15839826
Practical training in gastrointestinal endoscopy In this paper, we describe a practical approach to training in gastrointestinal endoscopy. We comment on basic clinical education, endoscopy training with static models, and basic courses using animal models, as well as audiovisual media such as books, journals, videotapes, and CDs. We also address computer simulation. We describe strategies for training in interventional endoscopy, as well as education in future developments. Finally, we introduce a structured training program in gastrointestinal endoscopy.
abstract_id: PUBMED:26527039
Laparoscopy shows superiority over endoscopy for early detection of malignant atrophic papulosis gastrointestinal complications: a case report and review of literature. Background: The malignant form of atrophic papulosis (Köhlmeier-Degos disease) is a rare thrombo-occlusive vasculopathy that can affect multiple organ systems. Patients typically present with distinctive skin lesions reflective of vascular drop out. The small bowel is the most common internal organ involved, resulting in considerable morbidity and mortality attributable to ischemic microperforations. Determination of the presence of gastrointestinal lesions is critical in distinguishing systemic from the benign, cutaneous only disease and in identifying candidates for treatment.
Case Presentation: We describe an 18-year-old male who first presented with cutaneous atrophic papulosis but became critically ill from small bowel microperforations. He had an almost immediate and dramatic response to treatment. Prior to his presentation with an acute abdomen, he had upper and lower endoscopy showing areas of nonspecific patchy erythema. At laparotomy, innumerable characteristic lesions with a central pearly hue and an erythematous border were seen. PubMed was used for a literature search using the keywords malignant atrophic papulosis, Degos disease, endoscopy, laparoscopy and laparotomy. This search yielded 200 articles, which were further analyzed for diagnostic procedures and findings. Among the 200 articles we identified only 11 cases in which endoscopy was performed. Results of endoscopy and laparotomy in our patient with malignant atrophic papulosis were compared to those in the literature. Endoscopy of the gastrointestinal tract has shown gastritis and non-specific inflammation, whereas laparoscopy shows white plaques with red borders on the serosal surface of the small bowel and the peritoneum. From personal communications with other physicians worldwide, we identified three additional unpublished cases in which endoscopy revealed only minimal changes while laparoscopy showed dramatic lesions. In our experience, the endoscopic findings are often subtle and nonspecific, whereas laparoscopy or laparotomy will reveal pathognomonic lesions on the serosal surface of the intestine.
Conclusion: Our report contrasts the endoscopic and laparoscopic findings in malignant atrophic papulosis, which suggests that laparoscopy is the more powerful means of detecting gastrointestinal involvement. Imaging studies may serve as a key indicator of systemic progression. Based on our experience, laparoscopy should be performed when there is a high index of suspicion for gastrointestinal malignant atrophic papulosis, even if endoscopic examination is non-diagnostic or normal.
Answer: Yes, increased training requirements in gastrointestinal endoscopy and advanced laparoscopy do necessitate a paradigm shift. A survey of Program Directors in Surgery (PDs) revealed that most PDs plan to require a formal flexible endoscopy rotation and more minimally invasive surgery rotations to meet the increased requirements for adequate endoscopy and laparoscopy experience in the future. This indicates a shift towards more structured and specialized training in these areas (PUBMED:19059172). The utility of flexible endoscopy during advanced laparoscopy is also expanding, which suggests that surgeons need to be skilled in both disciplines, further supporting the need for a paradigm shift in training (PUBMED:12695809).
The current status of advanced gastrointestinal endoscopy training fellowships in the United States shows a focused cohort of graduating fellows with higher case volumes in training, which likely enhances the success and safety of complex procedures like ERCP and endoscopic ultrasound. This specialized training is a departure from traditional training methods and reflects a paradigm shift towards concentrated expertise in advanced endoscopy (PUBMED:23745073).
Simulator training in gastrointestinal endoscopy has gained acceptance and has been shown to accelerate the learning curve in acquiring basic skills. However, current simulators lack the realism needed for full competency in advanced procedures, suggesting that training methods must continue to evolve to include more realistic and advanced simulation technologies (PUBMED:27345646).
The incorporation of laparoscopy into surgical endoscopy training programs has been shown to be safe and effective, further supporting the integration of these two fields within surgical education (PUBMED:1531106). Additionally, a comparative study of skills in virtual laparoscopy and endoscopy found a significant correlation between manual skills in both procedures, which may influence the design of future training programs that encompass both laparoscopic and endoscopic skills (PUBMED:15580316).
In conclusion, the increased training requirements in gastrointestinal endoscopy and advanced laparoscopy are driving a paradigm shift towards more specialized, structured, and potentially simulation-based training programs to ensure that surgeons are competent and safe in performing these complex procedures. |
Instruction: Esophageal dysfunction as a cause of angina pectoris ("linked angina"): does it exist?
Abstracts:
abstract_id: PUBMED:8166156
Esophageal dysfunction as a cause of angina pectoris ("linked angina"): does it exist? Purpose: The differentiation between cardiac and esophageal causes of retrosternal chest pain is notoriously difficult. Theoretically, cardiac and esophageal causes may coexist. It has also been reported that gastroesophageal reflux and esophageal motor abnormalities may elicit myocardial ischemia and chest pain, a phenomenon called linked angina pectoris. The aim of this study was to assess the incidence of esophageal abnormalities as a cause of retrosternal chest pain in patients with previously documented coronary artery disease.
Patients And Methods: Thirty consecutive patients were studied, all of whom had undergone coronary arteriography. The patients were studied after they were admitted to the coronary care unit with an attack of typical chest pain. On electrocardiograms (ECGs) taken during pain, 15 patients (group I) had new signs of ischemia; the other 15 patients (group II) did not. In none of the patients were cardiac enzymes elevated. As soon as possible, but within 2 hours after admission, combined 24-hour recording of esophageal pressure and pH was performed. During chest pain, 12-lead ECG recording was carried out.
Results: In group I, all 15 patients experienced one or more pain episodes during admission, 25 of which were associated with ischemic electrocardiographic changes. The other two episodes were reflux-related. Only one of the 25 ischemia-associated pain episodes was also reflux-related, ie, it was preceded by a reflux episode. In group II, 19 chest pain episodes occurred in 11 patients. None of these was associated with electrocardiographic changes, but 8 were associated with reflux (42%) and 8 with abnormal esophageal motility (42%).
Conclusion: Linked angina is a rare phenomenon.
abstract_id: PUBMED:7131679
'Esophageal angina' as the cause of chest pain. One hundred consecutive medical emergency patients with anterior chest pain were followed to their final diagnosis to discover the prevalence of esophageal disease as the cause of anginal pain. Seventy-seven of the patients had pain that was anginal in character, and one fifth of these (16 patients) had abnormalities demonstrated by the following esophageal investigations: endoscopy with biopsy, manometry, radiology, and acid perfusion. The 16 patients whose anginal pain was thought to be due to esophageal disease all performed normally on an exercise tolerance test, and in eight of them the association between the esophagus and their symptoms was demonstrated by a positive provocation test result: esophageal acid perfusion was the most useful investigation in this group.
abstract_id: PUBMED:24695751
Recurrent transient apical cardiomyopathy (tako-tsubo-like left ventricular dysfunction) in a postmenopausal female with diffuse esophageal spasms. Transient apical cardiomyopathy, also known as Tako-tsubo-like left ventricular dysfunction, is a clinical syndrome characterized by reversible left ventricular dysfunction at the apex with preserved basal contractility, in the setting of new ST- and T-wave changes suggestive of ischemia but no evidence of obstructive coronary artery disease on angiography. The main mechanism appears to be intense neuroadrenergic myocardial stimulation with endothelial dysfunction of the coronary vasculature. It has been noted that patients with esophageal spasms also have a tendency for coronary spasms. We present the case of a postmenopausal female with documented severe esophageal spasms who presented with atypical angina and recurrent Tako-tsubo cardiomyopathy.
abstract_id: PUBMED:2910093
Esophageal dysfunction and chest pain in patients with mitral valve prolapse: a prospective study utilizing provocative testing during esophageal manometry. Purpose: The cause of chest discomfort in patients with mitral valve prolapse (MVP) remains unknown. Our aim was to determine prospectively the incidence of esophageal disorders and abnormal responses to edrophonium chloride and esophageal acid infusions in patients with MVP and troublesome non-ischemic chest pain.
Patients And Methods: After coronary artery disease was excluded, 20 patients with MVP and chest pain underwent esophageal manometry and provocative testing with edrophonium chloride and acid infusion. Seven patients with MVP but without chest pain served as control subjects; they also underwent esophageal manometry with provocative testing.
Results: Esophageal manometry revealed esophageal disorders in 16 patients: diffuse esophageal spasm in 14 patients, nutcracker esophagus in one, and hypotensive lower esophageal sphincter in one. Esophageal motility was normal in four patients. Injection of edrophonium and acid infusion tests evoked typical chest discomfort in three of 18 and five of 19 patients, respectively. In six of seven control subjects with MVP but with no chest discomfort, esophageal motility was normal and provocative testing did not produce chest discomfort (p < 0.05 versus results in patients).
Conclusion: Esophageal disorders were common and may account for chest discomfort in certain patients with MVP and persistent chest pain syndromes.
abstract_id: PUBMED:7172956
Ergonovine-induced esophageal spasm in patients with chest pain resembling angina pectoris. We studied the effect of ergonovine maleate (EM) on esophageal motor activity in 18 consecutive patients with angina-like chest pain. Significant coronary artery disease was excluded in each patient by cardiac catheterization studies. Baseline esophageal motility was abnormal in 12 patients (66%). After injection of EM, ten patients developed their typical chest pain at the onset of repetitive contractions. Thus, chest pain and esophageal dysfunction were clearly linked. Compared with saline injection, only the repetitive contractions were significantly increased after EM in these patients (p < 0.01). Amplitude and duration of contractions were increased after EM, but not significantly. Due to potentially serious adverse effects, however, EM cannot be recommended for routine use as a provocative agent.
abstract_id: PUBMED:2339827
Follow-up study of morbidity in patients with angina pectoris and normal coronary angiograms and the value of investigation for esophageal dysfunction. A postal questionnaire was used to assess the symptoms, use of medical facilities, and employment status of patients with angina pectoris and normal coronary angiograms following cardiac catheterization. In a retrospective study of 187 patients, 66 had left ventricular dysfunction demonstrated by abnormal regional wall motion and 121 had normal left ventricular function. At follow-up twelve to forty-six months following catheterization, 89% with left ventricular dysfunction and 82% with normal ventricular function had continued to experience chest pain. There was no significant change in the admission rate to hospital because of chest pain or the proportion of patients who were working, after catheterization as compared with before, in either group. Some patients with left ventricular regional wall motion abnormalities appeared to have progressive left ventricular dysfunction. In a prospective study of 63 patients, detailed investigation of esophageal function was performed. Twenty-two patients had left ventricular wall motion abnormalities. The majority of the 41 patients with normal left ventricular wall motion had esophageal abnormalities that were treated appropriately. At follow-up six to twenty-four months following catheterization significantly fewer patients with normal left ventricular function continued to experience chest pain compared with those with left ventricular dysfunction. Following catheterization the hospital admission rate fell significantly and the proportion of patients working increased significantly in the group with normal left ventricular function. The hospital admission rate and employment status of patients with left ventricular dysfunction did not change significantly following angiography. These findings suggest, therefore, that investigation for and treatment of esophageal dysfunction should be performed in patients with angina pectoris, normal coronary angiograms, and normal left ventricular function.
abstract_id: PUBMED:2067670
Esophageal angina Angina-like chest pain caused by alterations of esophageal function is an increasingly common problem confronting cardiologists: advances in pathogenetic knowledge and in diagnostic possibilities in this field have shed light on the prevalence of esophageal angina, which is present in approximately 60% of patients with angiographically intact coronaries (11% of anginal patients overall). Classically, esophageal chest pain is attributed to alterations of motility or to mucosal disease (pathologic gastro-esophageal reflux of the acid, mixed or alkaline type); the latter cause predominates quantitatively. Little is known of the nociceptive mechanisms triggered by these alterations: for mucosal disease, activation of chemosensitive receptors has been postulated, while esophageal mechanoreceptors may be activated, in the course of a motor disorder, by distension of the wall. A recently proposed additional mechanism is the induction of esophageal wall ischemia by chemical or mechanical injury: a fascinating and potentially resolvable mechanism, which nevertheless requires further investigation. Moreover, elements of a psychological nature are also involved in the genesis of esophageal pain. A diagnosis of esophageal angina, heavily conditioned by obvious prognostic considerations, must necessarily aim for "certainty". Prolonged monitoring of endoluminal pH and the use of provocative tests during pH monitoring and manometry play an important role in achieving this aim (the ergometric test, balloon distension, edrophonium, and electrostimulation seem most effective). A promising outlook is supported by the recent introduction of prolonged manometry. Finally, the diagnostic approach must move beyond its narrowly specialist horizon to consider the patient's profile in its entirety.
abstract_id: PUBMED:8482176
Ambulatory esophageal manometry, pH-metry, and Holter ECG monitoring in patients with atypical chest pain. Standard Holter electrocardiographic (ECG) monitoring was combined with ambulatory esophageal manometry and pH-metry in 25 patients with atypical chest pain in order to determine whether an association could be found between spontaneous pain episodes and ischemic ECG changes or esophageal dysfunction. Results of ambulatory testing were compared to those obtained with standard esophageal manometry and provocative testing. Twenty-two of the 25 patients experienced a total of 88 pain episodes during ambulatory testing. Although 15 of the 22 patients (68%) experiencing pain during testing had at least one pain episode that correlated temporally with gastroesophageal reflux, esophageal dysmotility or ischemic ECG changes, 65% of all pain episodes were unrelated to abnormal esophageal events or ECG changes. Seventeen percent of pain episodes were associated with gastroesophageal reflux, 15% with esophageal dysmotility, and 2% with a combined acid reflux and esophageal dysmotility event. Only one pain episode was associated with ischemic ECG changes. Twelve of the 15 patients with chest pain episodes associated with reflux or esophageal dysmotility had other identical pain episodes in which there was no correlation. Reproduction of a patient's pain during standard manometry with provocative testing did not predict a strong correlation between the patient's spontaneous pain episodes and esophageal dysfunction during ambulatory recordings. In summary, patients with atypical chest pain have relatively few spontaneous pain episodes that correlate with gastroesophageal reflux, esophageal dysmotility, or ischemic ECG changes. It appears that different stimuli can trigger identical episodes of chest pain, which suggests that many of these patients may have dysfunction of their visceral pain sensory mechanisms.
abstract_id: PUBMED:21178904
Does esophageal dysfunction affect the course of treadmill stress test in patients with recurrent angina-like chest pain? Introduction: Cardioesophageal reflex may increase the severity of chest pain and signs of myocardial ischemia on electrocardiogram (ECG), both in patients with and without significant coronary artery stenosis.
Objectives: The aim of the study was to evaluate the relationships between esophageal pH and pressure and clinical and electrocardiographic signs of myocardial ischemia.
Patients And Methods: In 129 consecutive patients with recurrent chest pain, 77 without significant coronary artery lesions on coronary angiography and 52 with myocardial ischemia, panendoscopy, pH-metry, manometry, and a treadmill stress test were performed.
Results: The prevalence of esophageal disorders was similar in patients with and without significant coronary artery narrowing. Subjects with significant ST interval depression in the stress test had a higher rate of simultaneous esophageal contractions. There were no differences in the results of the treadmill test between patients with and without esophageal disorders. Forty percent of patients with significant coronary artery lesions, who had to stop the test because of chest pain, did not present significant ST interval depression on ECG; however, such depression was observed in 60% of patients with normal coronary angiography. Patients with exercise-provoked chest pain had more pronounced abnormalities in esophageal pH, together with the amplitude and coordination of esophageal contractions. Demographic and clinical factors associated with chest pain and changes in exercise ECG were not evaluated.
Conclusions: Esophageal disorders are an important cause of chest pain, potentially affecting the results of the treadmill stress test. However, further research is needed to determine the predictors of the cardioesophageal loop activity.
abstract_id: PUBMED:2642283
Esophageal chest pain: current controversies in pathogenesis, diagnosis, and therapy. Purpose: To evaluate the importance of esophageal abnormalities as a potential cause of recurrent noncardiac chest pain.
Data Identification: We discuss the rapidly evolving new knowledge in this field after analyzing the literature in English published since 1979.
Study Selection: We reviewed 117 articles on recurring chest pain and paid specific attention to the following nine controversial issues: the potential mechanisms of esophageal pain, the differentiation of cardiac and esophageal causes, the evaluation of new esophageal motility disorders, the use of esophageal tests in evaluating noncardiac chest pain, the usefulness of techniques for prolonged monitoring of intraesophageal pressure and pH, the relation of psychologic abnormalities to esophageal motility disorders, the possible mechanisms for decreased visceral pain thresholds in these patients, the relation of esophageal chest pain to the irritable bowel syndrome, and the appropriate therapies for these patients.
Results Of Data Synthesis: Through our review of the literature, we identified areas of concordance and disagreement. These areas are discussed and an overall perspective is provided.
Conclusions: Continuing attempts to develop rational diagnostic and therapeutic approaches to patients with noncardiac chest pain should include a multidisciplinary approach involving basic scientists, gastroenterologists, psychologists, and other clinical experts in the field of pain research.
Answer: Yes, esophageal dysfunction as a cause of angina pectoris, often referred to as "linked angina," does exist. Studies have shown that esophageal abnormalities can cause retrosternal chest pain that mimics angina pectoris. For instance, a study aimed at assessing the incidence of esophageal abnormalities in patients with documented coronary artery disease found that linked angina is a rare phenomenon, but it does occur. In the study, some chest pain episodes were associated with ischemic electrocardiographic changes, while others were related to gastroesophageal reflux or abnormal esophageal motility (PUBMED:8166156).
Another study followed 100 emergency patients with anterior chest pain and found that one-fifth of those with anginal pain had esophageal abnormalities. These patients had normal exercise tolerance tests, and some had a positive response to esophageal acid perfusion, suggesting a link between the esophagus and their symptoms (PUBMED:7131679).
Additionally, there have been cases of recurrent transient apical cardiomyopathy (tako-tsubo-like left ventricular dysfunction) in patients with diffuse esophageal spasms, indicating a connection between esophageal spasms and cardiac symptoms (PUBMED:24695751).
A prospective study utilizing provocative testing during esophageal manometry in patients with mitral valve prolapse (MVP) and chest pain found that esophageal disorders were common and could account for chest discomfort in certain patients with MVP (PUBMED:2910093).
Furthermore, ergonovine-induced esophageal spasm has been observed in patients with chest pain resembling angina pectoris, linking chest pain and esophageal dysfunction (PUBMED:7172956).
Investigations for esophageal dysfunction in patients with angina pectoris, normal coronary angiograms, and normal left ventricular function have been suggested, as treatment of esophageal abnormalities can lead to a decrease in chest pain and hospital admission rates (PUBMED:2339827).
In summary, esophageal dysfunction can indeed be a cause of chest pain that resembles angina pectoris, and it is important to consider this diagnosis, particularly in patients with normal coronary angiograms and non-ischemic chest pain. |
Instruction: Academic radiology and the emergency department: does it need changing?
Abstracts:
abstract_id: PUBMED:30194569
Gender disparity in academic emergency radiology. Objective: This study is intended to better understand how academic productivity and career advancement differ between men and women emergency radiologists in academic practices.
Materials And Methods: Parameters of academic achievement were measured, including number of citations, number of publications, and h-index, while also collecting information on academic and leadership ranking among emergency radiologists in North America.
Results: In emergency radiology, there are significantly fewer women than men (22.2% vs 77.8%). Of these women, the greatest proportion held the lower academic rank of assistant professor (95.4%). Female assistant professors had a higher h-index than men at the same rank (4 vs 2), but the difference was not statistically significant. There was no significant difference between gender and academic (p = 0.089) or leadership (p = 0.586) rankings.
Conclusion: This study provides further evidence that gender disparity persists in emergency radiology, with women achieving less upward academic career mobility than men, despite better academic productivity in the earlier stages of their careers. The academic productivity of emergency radiologists at the rank of assistant professor is significantly higher for women than men.
abstract_id: PUBMED:24713502
Survey of after-hours coverage of emergency department imaging studies by US academic radiology departments. Purpose: The aim of this study was to document how academic radiology departments cover emergency department radiologic services after hours.
Methods: Program directors of neuroradiology fellowship programs were invited to participate in a web-based survey addressing how their radiology departments covered after-hours emergency department studies.
Results: A total of 67 separate institutional responses were obtained from 96 institutions, for a 70% response rate. Seventy-three percent of programs (49 of 67) reported providing exclusively preliminary interpretations of emergency department studies for some overnight hours. Only 27% of respondents (18 of 67) said that they provided 24-hour real-time staff coverage. Among those who provided around-the-clock staff coverage, 72% (13 of 18) did so with dedicated emergency department sections. Only 2 respondents offered 24-hour subspecialty staff coverage. Emergency departments and hospital administrators were noted as the most frequent drivers of these changes.
Conclusions: Academic radiology departments vary widely in how they cover after-hours emergency department examinations. A number have recently expanded their hours of coverage under institutional pressures.
abstract_id: PUBMED:37400045
Are Academic Emergency Radiologists Systematically Disadvantaged Compared With Diagnostic Radiology Subspecialty Counterparts When It Comes to Promotion? Purpose: The aim of this study was to assess academic rank differences between academic emergency and other subspecialty diagnostic radiologists.
Methods: Academic radiology departments likely containing emergency radiology divisions were identified by inclusively merging three lists: Doximity's top 20 radiology programs, the top 20 National Institutes of Health-ranked radiology departments, and all departments offering emergency radiology fellowships. Within departments, emergency radiologists (ERs) were identified via website review. Each was then matched on career length and gender to a same-institutional nonemergency diagnostic radiologist.
Results: Eleven of 36 institutions had no ERs or insufficient information for analysis. Among 283 emergency radiology faculty members from 25 institutions, 112 career length- and gender-matched pairs were included. Average career length was 16 years, and 23% were women. The mean h indices for ERs and non-ERs were 3.96 ± 5.60 and 12.81 ± 13.55, respectively (P < .0001). Non-ERs were twice as likely as ERs (0.21 versus 0.1) to be associate professors at h index < 5. Men had nearly 3 times the odds of advanced rank compared with women (odds ratio, 2.91; 95% confidence interval, 1.02-8.26; P = .045). Radiologists with at least one additional degree had nearly 3 times the odds of advancing rank (odds ratio, 2.75; 95% confidence interval, 1.02-7.40; P = .045). Each additional year of practice increased the odds of advancing rank by 14% (odds ratio, 1.14; 95% confidence interval, 1.08-1.21; P < .001).
Conclusions: Academic ERs are less likely to achieve advanced rank compared with career length- and gender-matched non-ERs, and this persists even after adjusting for h index, suggesting that academic ERs are disadvantaged in current promotions systems. Longer term implications for staffing and pipeline development merit further attention as do parallels to other nonstandard subspecialties such as community radiology.
abstract_id: PUBMED:33989533
Pediatric Emergency Imaging Studies in Academic Radiology Departments: A Nationwide Survey of Staffing Practices. Objective: Particularly for pediatric patients presenting with acute conditions or challenging diagnoses, identifying variation in emergency radiology staffing models is essential in establishing a standard of care. We conducted a cross-sectional survey among radiology departments at academic pediatric hospitals to evaluate staffing models for providing imaging interpretation for emergency department imaging requests.
Methods: We conducted an anonymous telephone survey of academic pediatric hospitals affiliated with an accredited radiology residency program across the United States. We queried the timing, location, and experience of reporting radiologists for initial and final interpretations of emergency department imaging studies, during weekday, overnight, and weekend hours. We compared weekday with overnight, and weekday with weekend, using Fisher's exact test and an α of 0.05.
Results: Surveying 42 of 47 freestanding academic pediatric hospitals (89%), we found statistically significant differences for initial reporting radiologist, final reporting radiologist, and final report timing between weekday and overnight periods. We found statistically significant differences for initial reporting radiologist and final report timing between weekday and weekend periods. Attending radiologist involvement in initial reports was 100% during daytime hours, but only 33.3% and 69.0% during overnight and weekend periods, respectively. For initial interpretation during overnight and weekend periods, 38.1% and 28.6% of hospitals, respectively, use resident radiologists without attending radiologists, and 28.6% and 2.4% use teleradiology. All hospitals finalized reports as soon as possible during weekdays, but only 52.4% and 78.6% did so during overnight and weekend periods, respectively.
Discussion: A minority of hospitals use 24-hour in-house radiology attending radiologist coverage. During overnight periods, the majority of academic pediatric emergency departments rely on resident radiologists without attending radiologist supervision or outside teleradiology services to provide initial reports. During weekend periods, over a quarter rely on resident radiologists without attending radiologist supervision for initial reporting. This demonstrates significant variation in staffing practices at academic pediatric hospitals. Future studies should look to determine whether this variation has any impact on standard of care.
abstract_id: PUBMED:10730811
Emergency department coverage by academic departments of radiology. Rationale And Objectives: The purpose of this study was to survey academic radiology departments to determine how emergency radiology coverage is handled and whether there are any prerequisites for those individuals providing this coverage.
Materials And Methods: The authors developed a simple two-page survey and sent it to a total of 608 program directors, chiefs of diagnostic radiology, chairpersons, and chief residents at academic departments of radiology.
Results: Of the 608 surveys sent, 278 (46%) were returned. More than half of the departments have an emergency radiology section that provides "wet read" coverage during the day, and most academic departments cover the emergency department during the night and on weekends. Nighttime and weekend coverage is handled mostly by residents. Most departments give time off for lunch, with few other prerequisites for faculty who provide emergency coverage. Sixty percent of the departments have teleradiology capability, and many use it for emergency department coverage.
Conclusion: These results can serve as the basis for discussion and comparison with other institutions regarding a variety of aspects of emergency department coverage.
abstract_id: PUBMED:17434076
Academic radiology and the emergency department: does it need changing? Rationale And Objectives: The increasing importance of imaging for both diagnosis and management in patient care has resulted in a demand for radiology services 7 days a week, 24 hours a day, especially in the emergency department (ED). We hypothesized that resident preliminary reports were better than generalist radiology interpretations, although inferior to subspecialty interpretations.
Materials And Methods: Total radiology volume through our Level I pediatric and adult academic trauma ED was obtained from the radiology information system. We conducted a literature search for error and discordant rates between radiologists of varying experience. For a 2-week prospective period, all preliminary reports generated by the residents and final interpretations were collected. Significant changes in the report were tabulated.
Results: The ED requested 72,886 imaging studies in 2004 (16% of the total radiology department volume). In a 2-week period, 12 of 1929 (0.6%) preliminary reports by residents were discordant to the final subspecialty dictation. In the 15 peer-reviewed publications documenting error rates in radiology, the error rate between American Board of Radiology (ABR)-certified radiologists is greater than that between residents and subspecialists in the literature and in our study. However, the perceived error rate by clinicians outside radiology is significantly higher.
Conclusion: Sixteen percent of the volume of imaging studies comes through the ED. The residents handle off-hours cases with a radiology-detected error rate below the error rate between ABR-certified radiologists. To decrease the perceived clinician-identified error rate, we need to change how academic radiology handles ED cases.
abstract_id: PUBMED:15561578
Quantification of clinical consultations in academic emergency radiology. Rationale And Objectives: The purpose of this study is to quantify the impact of clinical consultation on the workload of an academic emergency radiology section.
Materials And Methods: Data from a 7-day audit (24 h/d) of the number and length of clinical consultations were expressed as the mean number of consultations per 24 hours and consultation minutes per 24 hours. Consultations performed on images acquired from outside institutions were noted. The attending radiologist consultation fraction was defined as the attending consultation minutes per 24 hours divided by the number of minutes of attending coverage per 24 hours. Using annualized work relative value units per full-time employee (wRVU/FTE) over the 7 days, the consultation value unit per full-time employee (CVU/FTE) was defined and calculated as the consultation fraction multiplied by the annual wRVU/FTE.
Results: For the attending radiologists, the consultation fraction was 0.13 and the CVU/FTE was 1216. Twenty-two percent of the total consultation minutes were spent on studies performed outside our institution.
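As a rough illustration only (not stated in the abstract), the definitions above can be written out and checked against the reported figures:
consultation fraction = attending consultation minutes per 24 h / minutes of attending coverage per 24 h
CVU/FTE = consultation fraction x annual wRVU/FTE
With the reported consultation fraction of 0.13 and CVU/FTE of 1216, the implied annual wRVU/FTE is about 1216 / 0.13 ≈ 9,350, treating the rounded values as exact.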
Conclusions: Clinical consultation represents a significant portion of the workload in academic emergency radiology. The consultation fraction describes the fraction of the radiologist's time spent in consultation, and the CVU/FTE expresses the workload of clinical consultations in terms of wRVU/FTE, the factor used most commonly to determine the academic radiologist's productivity and staffing.
abstract_id: PUBMED:22915403
Overnight subspecialty radiology coverage: review of a practice model and analysis of its impact on CT utilization rates in academic and community emergency departments. Objective: The purpose of this study is to describe a new practice model (overnight subspecialty radiology coverage) and to determine its impact on CT utilization rates in academic and community emergency departments.
Materials And Methods: Overnight subspecialty (neuroradiology and abdominal imaging) attending coverage was instituted at the University of Pittsburgh Medical Center in 2008. Previously, preliminary interpretations of CT studies performed at four academic emergency departments were provided by radiology residents. Interpretations were provided to five community emergency departments by either a senior resident or a contracted teleradiology service. Rotating shifts of neuroradiologists and abdominal imagers have since provided contemporaneous final reports for emergency department CT studies from 5:00 pm to 7:00 am. We compared total CT volume, emergency department visits, and CT "intensity" (CT volume / emergency department visits) within academic and community hospitals 12 months before and after institution of overnight coverage. We also compared on-call (5:00 pm to 7:00 am) and daytime CT intensity in academic and community emergency departments during these time periods.
Results: Academic emergency department visits increased 7% and community emergency department visits decreased 3% during the study period. Total academic emergency department CT volume increased 8%, and community emergency department CT volume increased 9%. Daytime community emergency department CT volume remained constant, but on-call CT volume increased 16%. Academic emergency department CT intensity remained constant at 0.57, whereas community emergency department CT intensity increased from 0.40 to 0.45 (12.5%).
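As a quick arithmetic check of the figures above (an illustration, not part of the abstract), CT intensity is defined as CT volume divided by emergency department visits, so the community emergency department change from 0.40 to 0.45 corresponds to a relative increase of (0.45 - 0.40) / 0.40 = 0.125, i.e., the reported 12.5%.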
Conclusion: Institution of overnight subspecialty emergency department coverage resulted in a disproportionate increase in CT utilization in community emergency departments. We hypothesize that community emergency departments lacking in-house clinical subspecialists may be more apt to use subspecialist radiology interpretations for patient management. Overnight subspecialty coverage increases CT utilization in the community emergency department, but the appropriateness and clinical impact are uncertain and in need of exploration.
abstract_id: PUBMED:27681086
Emergency radiology fellowship training in the USA: a web-based survey of academic programs. Interest in emergency radiology as a distinct subspecialty within radiology continues to rise in the USA and globally. While acute care imaging has been performed since the earliest days of the specialty, fellowship training in emergency radiology is a relatively new phenomenon. The purpose of this study was to examine the current status of emergency radiology training in the USA, using data derived from the official websites of US residency training programs. The most current list of radiology residency programs participating in the 2017 match was obtained from the official Electronic Residency Application Service (ERAS) website. The total number of emergency radiology fellowships was recorded after visiting available websites of each academic radiology program. The total number of subspecialty fellowships offered by each academic radiology program was also recorded. There were 12 confirmed emergency radiology fellowships offered in the USA for a combined total of 22 fellowship positions. Eleven programs were 1 year in duration, with one program offering a one- or two-year option. One hundred eight of the 174 (approximately 62 %) surveyed academic radiology programs offered at least one subspecialty fellowship. Emergency radiology fellowships are on the rise, paralleling the growth of emergency radiology as a distinct subspecialty within radiology.
abstract_id: PUBMED:36094681
American Society of Emergency Radiology (ASER) social media committee workgroup: best practices for the use of social media in emergency radiology. Social media has become integrated within the profession of medicine, and emergency radiology has inevitably felt the impact of its presence. Emergency radiologists are encouraged to consider the advantages of embracing the digital era and the benefits it may bring to their careers. We aim to present best practice guidelines for emergency radiologists and radiology departments. This paper is a product of the American Society of Emergency Radiology (ASER) Social Media Committee workgroup and represents the best practices of the society.
Answer: The question "Academic radiology and the emergency department: does it need changing?" is addressed by several studies that highlight various aspects of emergency radiology within academic settings. The studies collectively suggest that there may indeed be a need for change in how academic radiology departments handle emergency department coverage and the career advancement of emergency radiologists.
One study found that there is a significant gender disparity in academic emergency radiology, with women achieving less upward academic career mobility than men, despite having better academic productivity at the assistant professor level (PUBMED:30194569). Another study indicated that academic emergency radiologists are less likely to achieve advanced rank compared with their non-emergency counterparts, even after adjusting for h-index, suggesting a systematic disadvantage in promotion systems (PUBMED:37400045).
Coverage of emergency department imaging studies after hours by academic radiology departments varies widely, with some institutions expanding their hours of coverage due to institutional pressures (PUBMED:24713502). Pediatric emergency imaging studies also show significant variation in staffing practices, with many academic pediatric hospitals relying on resident radiologists or teleradiology services for initial reports during overnight and weekend periods (PUBMED:33989533).
The survey of academic radiology departments revealed that emergency radiology coverage is mostly handled by residents during nighttime and weekends, and that teleradiology capability is used by many departments for emergency department coverage (PUBMED:10730811). Another study suggested that academic radiology needs to change how it handles emergency department cases to decrease the perceived clinician-identified error rate (PUBMED:17434076).
Clinical consultation represents a significant portion of the workload in academic emergency radiology, and the study suggests a need to quantify this workload in terms of productivity and staffing (PUBMED:15561578). The introduction of overnight subspecialty radiology coverage was found to increase CT utilization rates in community emergency departments, raising questions about the appropriateness and clinical impact of such a practice model (PUBMED:22915403).
Lastly, the rise of emergency radiology fellowships in the USA indicates the growing recognition of emergency radiology as a distinct subspecialty, which may necessitate changes in training and practice models (PUBMED:27681086). The American Society of Emergency Radiology also recognizes the importance of adapting to the digital era and has provided best practice guidelines for the use of social media in emergency radiology (PUBMED:36094681). |
Instruction: Assessment of vascular dysfunction after transradial coronary angiography. Is a single catheter better?
Abstracts:
abstract_id: PUBMED:26108386
Assessment of vascular dysfunction after transradial coronary angiography. Is a single catheter better? Background: The aim of this study was to investigate the midterm effects of transradial coronary angiography (TRCAG) on the radial and brachial artery diameter, the vasodilator characteristics, as well as to assess the factors determining functional recovery.
Methods: This study included 136 consecutive patients who underwent TRCAG. The radial artery was evaluated with ultrasonography before and 1 month after the procedure.
Results: The basal right radial artery diameter (2.97 ± 0.46 vs. 2.82 ± 0.51, p < 0.001), after flow-mediated dilatation (FMD; 3.18 ± 0.45 vs. 2.99 ± 0.54, p < 0.001) and after nitroglycerin-mediated dilatation (NMD; 3.32 ± 0.45 vs. 3.11 ± 0.54, p < 0.001), and the percentage change in diameter after FMD (7.50 ± 3.62 vs. 5.89 ± 3.04, p < 0.001) and NMD (12.42 ± 4.96 vs. 10.54 ± 4.47, p < 0.001) were significantly decreased 1 month after TRCAG. The mean basal diameter of the right brachial artery (4.41 ± 0.58 vs. 4.40 ± 0.58, p = 0.012), after FMD (4.61 ± 0.60 vs. 4.59 ± 0.59, p < 0.001), and the percentage change in diameter after FMD (4.53 ± 2.29 vs. 4.33 ± 2.56, p = 0.038) were significantly decreased 1 month after TRCAG. The number of catheters used (B = 0.372, p < 0.001, 95% CI = 0.006 to 0.013), basal radial artery diameter (B = -0.217, p = 0.001, 95% CI = -0.021 to -0.006), presence of hypertension (B = -0.151, p = 0.011, 95% CI = -0.015 to -0.002), and pain score (B = 0.493, p < 0.001, 95% CI = 0.007 to 0.012) were independent predictors of radial artery FMD change in multivariate regression analysis. The number of catheters used (B = 0.378, p < 0.001, 95% CI = 0.009 to 0.020), basal radial artery diameter (B = -0.210, p = 0.010, 95% CI = -0.034 to -0.005), and pain score (B = 0.221, p < 0.001, 95% CI = 0.002 to 0.011) were independent predictors of radial artery NMD change in multivariate regression analysis.
Conclusion: Basal radial artery diameter, the number of catheters used during TRCAG, and the pain perceived during the procedure seem to be important predictors of vascular functional changes after TRCAG.
abstract_id: PUBMED:9043044
Transradial artery coronary angiography and intervention in patients with severe peripheral vascular disease. Background: Traditionally, cardiac catheterization in patients with severe aorto-iliac disease has been performed using a brachial arteriotomy. This approach is associated with significant vascular and neuronal complications and requires considerable training to achieve an adequate level of expertise. Improvement and miniaturization of catheter equipment now allows the radial artery to be used for coronary investigation and intervention. The lack of important structures close to the radial artery, a good collateral ulnar artery circulation and its superficial position suggests that these procedures should have a low complication rate. The purpose of this study was to assess the efficacy and safety of percutaneous transradial diagnostic and interventional coronary catheterization in patients with severe peripheral vascular disease.
Patients And Methods: We undertook a non-randomized prospective analysis of 75 patients who had transradial artery diagnostic and interventional coronary catheterization in whom femoral angiography was impossible or relatively contraindicated (22 patients with severe claudication and absent femoral pulses, 24 patients with previous aorto-iliac surgery or intervention, 20 patients with a failed femoral approach, 9 patients with an aortic aneurysm). Three patients had an absent ulnar artery and were excluded.
Results: Radial artery cannulation was successful in 73/75 (97%) cases. Seventy-one (95%) patients had a successful diagnostic study. There was a high incidence of 3 vessel disease (73%), and the majority of patients (64%) were referred for coronary bypass surgery. Twelve patients underwent successful follow-on intervention including the insertion of 9 intracoronary stents. Adequate haemostasis was achieved within 20 min after diagnostic angiography and 60 min after interventional procedures. One patient had a forearm haematoma with paraesthesia of the hand which settled with conservative treatment. At 4-6 weeks, all patients had normal hand sensation and function (100%) with a palpable pulse present in 59/62 (96%). All patients undergoing diagnostic angiography were discharged on the same day, and patients undergoing intervention were discharged the following day.
Conclusions: Transradial coronary investigation and intervention can be performed with a high degree of success and a low complication rate with early mobilization and discharge in patients with severe peripheral vascular disease. We suggest that the percutaneous transradial technique should be considered as an alternative to the Sones' technique in these patients.
abstract_id: PUBMED:14696161
Transradial coronary angiography in patients with contraindications to the femoral approach: an analysis of 500 cases. The transradial approach to coronary angiography is considered by some to be a route of choice, by others to be a route that should be used only where there are relative contraindications to the femoral approach. We present the largest series to date of patients in whom transradial coronary angiography was undertaken specifically because of contraindications to the femoral approach. Since 1995, patients at this cardiothoracic center have been considered for a transradial approach to coronary angiography if there were relative contraindications to the femoral route. Data from 500 patients was prospectively collected. Patients were aged 66 +/- 9 years; 72% were male. Indications for the radial approach included peripheral vascular disease (305), therapeutic anticoagulation (77), musculoskeletal (59), and morbid obesity (32). Sixty-eight patients (14%) required a radial procedure following a failed femoral approach. Access was right radial 291 (58%), left radial 209 (42%). Eighteen operators were involved, but two operators undertook 355 (71%) of the cases. Catheter gauge was 6 Fr (n = 243; 49%), 5 Fr (219; 43%), and 4 Fr (29; 6%). The procedure was successful in 463 cases [92.6%; 88.2% for nonmajority vs. 94.4% (P < 0.05) for the two majority operators]. Success in males (93.6%) significantly exceeded that in females (90.1%; P < 0.05). In-catheter-laboratory duration was 45 +/- 17 min; fluoroscopy time, 7.5 +/- 6 min; radiation dose, 40 +/- 23 CGy. The procedure was without incident in 408 cases (82%). There were procedural difficulties in 18% of cases, including radial artery spasm (12%) and vasovagal response (5%). The incidence was higher with 6 Fr catheters (23%) than with 5/4 Fr (15%; P < 0.05). Major procedural complications occurred in three cases: brachial artery dissection in one and cardiac arrest in two. Postprocedure major vascular complications numbered three: claudicant pain on handgrip in one, ischemic index finger (with subsequent terminal phalanx amputation due to osteomyelitis) in one, and ischemic hand for 4 hr in one. Patients with contraindications to the femoral approach form a high-risk group. In these patients, transradial cardiac catheterization can be performed successfully and with a low risk of major complications. Minor adverse features remain frequent, occurring in one in five cases, though difficulties are minimized both with increasing operator experience and smaller sheath diameter.
abstract_id: PUBMED:15557713
Vasospasms of the radial artery after the transradial approach for coronary angiography and angioplasty. We examined vasospasms of the radial artery after a transradial approach was used for coronary angiography or angioplasty. In forty-eight patients (39 males and 9 females), arteriography of the radial artery was initially performed just after the transradial approach was used for coronary angiography and/or angioplasty. Then, five months later, a second arteriography of the radial artery was obtained after a transbrachial approach was used for coronary angiography. First and second arteriographies were compared to evaluate vaso-spasms of the radial artery. In the present study, more than 75% stenosis in the radial artery, 25-75% stenosis, and less than 25% stenosis were tentatively defined as severe spasms, moderate spasms, and mild spasms, respectively. In arteriographic studies on the radial artery, twenty-four patients (50%) had severe radial artery spasms, eleven patients (23%) had moderate spasms, and thirteen patients (27%) had mild spasms. The diameters of both the proximal and distal radial arteries in the severe spasm group were significantly smaller than those in the mild and moderate spasm groups (proximal site: severe group 2.39 +/- 0.70 mm versus mild group 2.98 +/- 0.46 mm, P < 0.05, and moderate group 2.96 +/- 0.77 mm, P < 0.05, distal site: severe group 2.26 +/- 0.60 mm versus mild group 2.73 +/- 0.47 mm, P < 0.05, and moderate group 2.86 +/- 0.71 mm, P < 0.05). We concluded that vasospasms of the radial artery occurred in most patients after the transradial approach. Furthermore, severe radial spasms were strongly correlated with the size of the diameter of the artery.
abstract_id: PUBMED:22781476
Safety and efficacy of transradial access in coronary angiography: 8-year experience. Aims: The transradial approach (TRA) in coronary angiography is used less frequently than the transfemoral approach; the learning curve and transradial failure (TRF) have slowed its widespread use. We evaluate the incidence, causes, and predictors of TRF in TRA coronary angiographies in an unselected population.
Methods And Results: All elective coronary angiographies using TRA from January 2002 to December 2009 were analyzed in this single-center, prospective, observational study. TRF occurred in 465/8463 procedures (5.5%). The main causes of TRF were puncture failure in 48.3% and tortuous brachiocephalic arteries in 22.8% of cases. The annual TRF percentage decreased from 9.1% in 2002 to 4.1% in 2009 (P<.001). In a multivariable regression model, the independent factors associated with TRF included use of >3 catheters (odds ratio [OR], 3.973; confidence interval [CI], 3.198-4.937), abnormal Allen test (OR, 3.231; CI, 1.839-5.676), radial spasm (OR, 3.896; CI, 2.903-5.229), peripheral vascular disease (OR, 1.900; CI, 1.426-2.532), female sex (OR, 1.451; CI, 1.094-1.925), and age >80 years (OR, 1.441; CI, 1.020-2.036). Intra-arterial administration of verapamil (OR, 0.137; CI, 0.098-0.190) and nitroglycerin (OR, 0.455; CI, 0.317-0.653), and height (OR, 0.974; CI, 0.959-0.990) reduced the risk of TRF.
Conclusions: Experience with TRA was associated with a low incidence of TRF. Independent factors associated with TRF were identified.
abstract_id: PUBMED:28582153
Skin to Skin: Transradial Carotid Angiography and Stenting. Carotid artery stenting (CAS) is a proven alternative to carotid endarterectomy in patients with significant carotid disease. The femoral artery is the conventional access site for CAS procedures. However, this approach may be problematic because of peripheral vascular disease and anatomic variations. Access site complications are the most common adverse event after CAS from the transfemoral approach (TFA) and most technical failures are related to a complex aortic arch. The transradial approach has been evaluated to address the shortcomings of TFA. In cases involving a complex arch, transradial access may be a viable alternative strategy.
abstract_id: PUBMED:26120053
Time-course of vascular dysfunction of brachial artery after transradial access for coronary angiography. Background: Prior studies have demonstrated endothelial and smooth muscle brachial artery dysfunction after transradial cardiac catheterization for diagnostic coronary angiography. The duration of this vascular dysfunction is unknown.
Objective: To determine the time-course of endothelial and smooth muscle cell dysfunction in the upstream brachial artery after transradial cardiac catheterization.
Methods: We studied 22 consecutive patients with suspected coronary artery disease (age 64.4 ± 7.7 years) undergoing diagnostic transradial cardiac catheterization. Using high-resolution vascular ultrasound, we measured ipsilateral brachial artery diameter changes during reactive hyperemia (endothelium-dependent dilatation) and administration of sublingual nitroglycerin (endothelium-independent dilatation). The measurements were taken at baseline (before cardiac catheterization), 6 h, 24 h, 1 week, and 1 month postprocedure. The contralateral brachial artery served as a control.
Results: Ipsilateral brachial artery diameter during endothelium-dependent dilatation decreased significantly compared with the contralateral diameters at 6 h and 24 h after transradial cardiac catheterization (3.22 vs. 4.11 and 3.29 vs. 4.11, respectively, P < 0.001). The administration of nitroglycerin did not affect this difference. At 1 week and 1 month postprocedure there was no significant difference in diameter of the ipsilateral versus the contralateral brachial artery. As expected the contralateral brachial artery showed no significant changes in diameter.
Conclusion: Our results showed that transradial cardiac catheterization causes transient vascular endothelial and smooth muscle dysfunction of the ipsilateral brachial artery, which resolves within 1 week postprocedure. These findings strongly suggest the absence of systemic vascular dysfunction after transradial catheterization both immediately postprocedure as well as 1 week postprocedure.
abstract_id: PUBMED:19926046
Vascular dysfunction of brachial artery after transradial access for coronary catheterization: impact of smoking and catheter changes. Objectives: The aim of this study was to investigate the effect of diagnostic transradial catheterization on vascular function of upstream brachial artery (BA).
Background: The transradial access has recently become an alternative to transfemoral cardiac catheterization. A potential caveat of this approach lies in possible sustained physical radial artery (RA) damage.
Methods: We studied 30 patients (age 61 +/- 11 years) undergoing diagnostic coronary angiography with the transradial access (5-F). Endothelium-dependent, flow-mediated vasodilation (FMD) was measured before and at 6 and 24 h after catheterization of the right-sided RA and BA with high-resolution ultrasound. The left-sided RA served as a control.
Results: Transradial catheterization significantly decreased FMD in the RA (overall mean 8.5 +/- 1.7% to 4.3 +/- 1.6%) and the upstream BA (overall mean 4.4 +/- 1.6% to 2.9 +/- 1.6%) at 6 h. Subgroup analysis showed that FMD of both arteries at 6 h was significantly lower in active smokers and that it only remained impaired at 24 h in this group, whereas nonsmoker FMD fully recovered. The degree of BA but not RA FMD dysfunction was related to the number of catheters used, with no change after 2 catheters, 1.9 +/- 1.2% decrease (6 h) and recovery (24 h) after 3 catheters, and 3.9 +/- 1.2% decrease (6 h) without recovery (24 h) after 4 to 5 catheters. The RA dysfunction correlated with the baseline diameter. The contralateral control RA exhibited no change ruling out systemic effects.
Conclusions: Transradial catheterization not only leads to dysfunction of the RA but also the upstream BA, which is more severe and sustained in smokers and with increasing numbers of catheters.
abstract_id: PUBMED:24293786
Effect of Heparin Administration during Coronary Angiography on Vascular or Peripheral Complications: A Single-Blind Randomized Controlled Clinical Trial. Background: Coronary angiography consists of the selective injection of contrast agents in coronary arteries. Optimal strategy for heparin administration during coronary angiography has yet to be determined. We assessed the effect of heparin administration during coronary angiography on vascular, hemorrhagic, and ischemic complications.
Methods: Five hundred candidates for diagnostic coronary angiography (femoral approach) were randomly divided into case (intravenous heparin [2000-3000 units]) and control (placebo) groups. Assessment included vascular complications such as groin hematoma, retroperitoneal hematoma, pseudoaneurysm, active hemorrhage, cerebral ischemia, and clot formation in the catheter or the sheath during angiography. Information was obtained about the patients' age, sex, and hypertension and diabetes mellitus history. Patients with severe peripheral vascular disease, aortic stenosis, history of coagulopathy, and angiography over 30 minutes were excluded.
Results: Nine patients from each group were excluded. The remaining 482 patients included 285 (59.1%) men and 197 (40.9%) women. In the case group (n=241), 7 (2.9%) patients experienced active hemorrhage at the site of angiographic puncture, 2 (0.83%) developed groin hematoma, and 8 (3.32%) experienced clot formation during angiography, while the corresponding figures for the control group (n=241) were 3 (1.24%), 2 (0.83%), and 13 (5.39%), respectively. No significant differences were found in hemorrhagic, ischemic, and vascular complications between the two groups.
Conclusion: Heparin administration during coronary angiography had no effect on clot formation as well as hemorrhagic, ischemic, and vascular complications in our patients.
Trial Registration Number: IRCT201202199080N1.
abstract_id: PUBMED:26124493
Vascular Complications Following Transradial and Transulnar Coronary Angiography in 1600 Consecutive Patients. Background: Major, noncoronary complications are rarely encountered following transradial coronary procedures.
Methods And Results: Among 1600 prospectively studied patients with complete follow-up, 7 patients experienced major complications following coronary forearm procedures corresponding to an incidence of 0.44%. We found inadvertent symptomatic intramyocardial contrast medium injection, 2 cases with compartment syndrome of which 1 was managed surgically, exertional hand ischemia due to radial artery occlusion, a large ulnar artery pseudoaneurysm, an ulnar arteriovenous fistula, and 1 critical hand ischemia due to late occlusion of the distal brachial artery.
Conclusions: Although infrequent, surveillance for major complications should be encouraged after forearm coronary procedures.
Answer: The assessment of vascular dysfunction after transradial coronary angiography (TRCAG) indicates that the use of a single catheter may be associated with better vascular outcomes. A study that included 136 consecutive patients who underwent TRCAG found that the basal right radial artery diameter, as well as the vasodilator characteristics, were significantly decreased one month after the procedure. The number of catheters used during TRCAG was identified as an independent predictor of radial artery flow-mediated dilatation (FMD) change and nitroglycerin-mediated dilatation (NMD) change in multivariate regression analysis. This suggests that using more catheters could be associated with greater vascular functional changes after TRCAG (PUBMED:26108386).
In contrast, another study that examined the safety and efficacy of transradial access in coronary angiography over an 8-year experience found that the incidence of transradial failure (TRF) was low, at 5.5%. The study identified the use of more than three catheters as one of the independent factors associated with TRF, along with other factors such as abnormal Allen test, radial spasm, peripheral vascular disease, female sex, and age over 80 years. The administration of intra-arterial verapamil and nitroglycerin reduced the risk of TRF, suggesting that pharmacological intervention could mitigate some of the risks associated with multiple catheter use (PUBMED:22781476).
Furthermore, another study investigating the effect of diagnostic transradial catheterization on the vascular function of the upstream brachial artery found that the degree of brachial artery FMD dysfunction was related to the number of catheters used. There was no change after using two catheters, but there was a decrease in FMD after using three catheters, and a more significant decrease without recovery after using four to five catheters (PUBMED:19926046).
In summary, the evidence suggests that using a single catheter during transradial coronary angiography may be associated with less vascular dysfunction compared to using multiple catheters. However, the number of catheters used is just one of several factors that can influence vascular outcomes, and pharmacological interventions may help reduce the risk of vascular complications associated with multiple catheter use. |
Instruction: Army family practice: does our training meet our needs?
Abstracts:
abstract_id: PUBMED:37902519
A systematic review of measures of ability to meet basic needs in older persons. Background: The ability of older persons to meet their basic needs (i.e. personal, financial and housing security), as well as to perform Activities of Daily Living (ADL), is crucial. It is unclear, however, whether such measures exist. This systematic review aimed to review English-language measures of the ability of older persons to meet their basic needs, and to critically review the comprehensiveness of these measures and their psychometric properties.
Methods: Fifteen electronic databases including PubMed, EBSCOhost and CINAHL were systematically searched for studies of measures that assessed the ability of older persons to meet their basic needs, as defined by the World Health Organization. Two review authors independently assessed the studies for inclusion in the review and evaluated their comprehensiveness and psychometrics.
Results: We found seven instruments from 62 studies that assessed multi-domain function including ADL and some elements of basic needs. The instruments varied in breadth and in reporting of key psychometric criteria. Further, no single instrument provided a comprehensive assessment of the ability of older persons to meet their basic needs.
Conclusion: No single instrument that measures the ability to meet basic needs was identified by this review. Further research is needed to develop an instrument that assesses the ability of older persons to meet their basic needs. This measure should include an evaluation of ADL.
abstract_id: PUBMED:9290294
Army family practice: does our training meet our needs? Objectives: (1) To determine the perceived adequacy of residency training for current practice by Army family physicians; (2) to ascertain if differences exist by residency setting: medical center, medical activity, or civilian.
Methods: Surveys were mailed to the 334 family physicians in the Army in 1993. Training in various subject areas was rated as inadequate, adequate, or overly prepared.
Results: More than 75% of respondents felt prepared in 76% of general medical subjects (GM) but in only 39% of family medicine subjects (FM). There were no practice management subjects in which more than 75% felt adequately prepared. There were no differences in perceptions of GM or FM training between military- and civilian-trained respondents.
Conclusions: Army and civilian residencies prepare family physicians for the medical aspects of practice. Early training in management subjects could be enhanced. Civilian and Army programs could improve training in family medicine subjects.
abstract_id: PUBMED:37615551
Impact of the COVID-19 pandemic on army families: Household finances, familial experiences, and soldiers' behavioral health. The Coronavirus Disease 2019 (COVID-19) pandemic has significantly impacted employment and finances, childcare, and behavioral health across the United States. The Behavioral Health Advisory Team assessed the pandemic's impact on the behavioral health of U.S. Army soldiers and their families. Over 20,000 soldiers at three large installation groups headquartered in the northwestern continental U.S., Republic of Korea, and Germany participated in the cross-sectional survey. Multivariable logistic regression models indicated that key demographics (gender, rank), severity of household financial impact, changes in work situation due to childcare issues, and family members' difficulty coping (both self and spouse/partner and/or child) were independently and consistently associated with greater odds of screening positive for probable clinical depression and generalized anxiety, respectively. These findings highlight how Army families were impacted similarly by the pandemic as their civilian counterparts. Army leadership may action these findings with targeted support for soldiers and their families to ensure they are utilizing supportive services available to them, and that military services continually evolve to meet soldier and family needs during times of crisis and beyond.
abstract_id: PUBMED:23534510
Family needs and involvement in the intensive care unit: a literature review. Aims And Objectives: To understand the needs of critically ill patient families', seeking to meet those needs and explore the process and patterns of involving family members during routine care and resuscitation and other invasive procedures.
Methods: A structured literature review using Cumulative Index to Nursing and Allied Health Literature, Pubmed, Proquest, Google scholar, Meditext database and a hand search of critical care journals via identified search terms for relevant articles published between 2000 and 2010.
Results: Thirty studies were included in the review either undertaken in the Intensive Care Unit or conducted with critical care staff using different methods of inquiry. The studies were related to family needs; family involvement in routine care; and family involvement during resuscitation and other invasive procedures. The studies revealed that family members ranked both the need for assurance and the need for information as the most important. They also perceived their important needs as being unmet, and identified the nurses as the best staff to meet these needs, followed by the doctors. The studies demonstrate that both family members and healthcare providers have positive attitudes towards family involvement in routine care. However, family members and healthcare providers had significantly different views of family involvement during resuscitation and other invasive procedures.
Conclusion: Meeting Intensive Care Unit family needs can be achieved by supporting and involving families in the care of the critically ill family member. More emphasis should be placed on identifying the family needs in relation to the influence of cultural values and religion held by the family members and the organisational climate and culture of the working area in the Intensive Care Unit.
abstract_id: PUBMED:7501198
Army family physician satisfaction. Introduction: As numbers of family physicians decrease in the Army, the Army needs to know how satisfied they are with being family physicians and military officers. What variables are associated with these satisfactions?
Methods: This was a cross-sectional mailed survey of Army family physicians (N = 334). The response rate was 82% (N = 274). The survey included questions with a Likert scale and was analyzed using Kruskal-Wallis, one-way analysis of variance, and logistic regression.
Results: Ninety-two percent were satisfied with being family physicians and 74% were satisfied with being military officers. The variables associated with satisfaction were rank (positively associated) and percent time in patient care (negatively associated).
Conclusions: Army family physicians are more satisfied with being family physicians than they are with being military officers. They are, however, satisfied with both professions. The Army, as an organization, may want to explore how its system of rewards interplay with rank and amount of time in patient care to make them predictive of satisfaction.
abstract_id: PUBMED:30091252
Developing a model of factors that influence meeting the needs of family with a relative in ICU. Aim: To develop a model of factors influencing meeting family needs when a relative was admitted to the intensive care unit (ICU).
Background: Studies identify individual factors impact on the needs of family members with a relative in ICU. No studies have reported on relationships between these factors and/or the extent of influence of multiple factors on family needs.
Design: Observational, correlational, and predictive study design.
Methods: Data were collected from August 2013 to June 2014 using validated scales and a demographic tool. The setting was a large tertiary referral hospital in Brisbane, Australia. Structural equation modelling was undertaken.
Results: One hundred and seventy ICU family members participated. Factors included in the developed model were consistent with the literature. Family member anxiety had direct and significant influence on ICU family needs (β = 0.21). Gender was also found to have direct influence (β = 0.19), suggesting female family members were more likely to report needs being met. Family member coping self-efficacy (β = -0.40) and family member depression (β = -0.33) were mediating variables.
Discussion: Interventions to meet family needs within the ICU should take into account family member levels of anxiety, depression, and coping self-efficacy with consideration of gender. Further model validation is required to confirm findings.
abstract_id: PUBMED:9468163
Measuring the ability to meet family needs in an intensive care unit. Objective: To measure the ability to meet family needs in an intensive care unit (ICU).
Design: Descriptive survey.
Setting: University hospital ICU.
Subjects: Ninety-nine next of kin respondents and 16 secondary family respondents were recruited.
Interventions: A modified Society of Critical Care Medicine Family Needs Assessment instrument was used.
Measurements And Main Results: Demographic variables included patient age, gender, diagnosis, Acute Physiology and Chronic Health Evaluation (APACHE) II score on admission, Therapeutic Intervention Scoring System (TISS) score on the date of interview, cumulative TISS of the ICU on the day of interview, number of patients in the ICU at time of interview, nurse/patient ratio for the patient, average nurse/patient ratio of the entire unit, day of the week of the interview, timing of the interview, number of ICU attending physicians who cared for this patient (scheduled for a period of seven consecutive days), number of nurses who cared for the patient, if a nurse was assigned the same patient on two consecutive days worked, length of stay in the ICU, and length of hospital stay. Demographic information concerning the family member included gender, age, commuting time to the hospital, visiting time in the hospital per day, number in family group, relationship to the patient, ethnic background, and education level. The additive score of all questions in the needs assessment instrument was calculated and used as the dependent variable. The independent variables were demographic information concerning patients, ICU, and respondents. The model coefficient of determination (R2adj) was 0.20 with a p = .0079. Greater family dissatisfaction (i.e., higher score) was present if there were more than two ICU attendings per patient (p = .048), or if the same nurse was not assigned on two consecutive days (p = .044). Family satisfaction increased if the respondent was female (p = .006), if the patient had a higher APACHE II score (p = .007), and if the patient's relationship with the most significant family member was brother/sister (p = .012). The family needs instrument was reliable and demonstrated a high degree of concordance with a second respondent in the same family surveyed.
Conclusions: Communication by the same provider was important when measuring the ability of an ICU to meet family needs. Instrument scores and the ability to meet family needs differed depending on the gender and the relationship to the patient of the most significant family member. We speculate that this instrument may be a useful adjunct in assessing quality of critical care services provided.
abstract_id: PUBMED:6867262
Dental care needs of Army recruits. To determine the prevalence among current U.S. Army recruits of dental conditions requiring treatment, an assessment was done of the dental care needs of a 3 percent sample (N = 5,613) of incoming recruits at all seven U.S. Army reception stations that operate under a dental treatment planning concept. Both the treatment needs of the total sample and of each Army component--that is, Regular, Reserve, and National Guard forces--were quantified. The results indicated that the requirement for dental care among Army recruits currently being processed for training is approximately the same as it was for such recruits at the time that the Selective Service System draft was in effect, although the types of care needed have changed. Like the draft-based recruits, current Army recruits enter active-duty status with a substantial backlog of unmet dental care needs.
abstract_id: PUBMED:36007142
Paid Leave to Meet the Health Needs of Aging Family Members in 193 Countries. Women and workers over 50 disproportionately provide care for aging family members worldwide, including the 101 million who are care-dependent. Paid leave for adult health needs, which temporarily replaces employment income for workers providing care, can critically support both caregivers' economic outcomes and care recipients' wellbeing. We created quantitatively comparable data on paid leave policies that can be used to meet adult family members' health needs in all United Nations member states. Globally, 112 countries fail to provide any paid leave that can be used to meet the serious health needs of an aging parent, spouse, or adult child. These gaps have profound consequences for older workers providing care as well as care access by aging, ill, and disabled adults.
abstract_id: PUBMED:25230040
Most important needs of family members of critical patients in light of the critical care family needs inventory. Objective: This work sought to identify the most important needs for family members of adult critical patients as described in the literature pursuant to the dimensions established in the "Critical Care Family Needs Inventory" (CCFNI) by Molter and Leske.
Methodology: A literature review was carried out by using the CCFNI instrument. The databases used were: Pubmed, CINAHL, Proquest Nursing & Allied Health Source, Proquest Psychology Journals, LILACS, Science Direct, Ovid SP, PsyicINFO, and SciELO. The following limitations for the search were identified: adult patients, articles in English and Spanish, with abstract and complete text available and which had been published from 2003 to June 2013; 15 articles were included.
Results: The family's hope for desired outcomes and sincere communication with the healthcare staff turned out to be the most relevant needs, while the least important were related to comfort and having support structures or systems. Most of the studies were conducted in Asia and North America, revealing differences in the order of importance assigned to each need. Certain sociodemographic and cultural characteristics influence how family members rank their needs; the same applies to the nature of the family's most important needs and the factors determining their prioritization.
Conclusion: Citing nurses' frequent interaction with the family and their holistic view of the person beyond the illness, the articles included in this review identify nurses as the most appropriate professionals to recognize and meet the needs of critical patients' families.
Answer: The question "Army family practice: does our training meet our needs?" seems to be inquiring about the adequacy of training for Army family physicians in relation to their practice needs. Based on the abstracts provided, there are several relevant findings:
1. A survey of Army family physicians indicated that while more than 75% of respondents felt prepared in general medical subjects, only 39% felt prepared in family medicine subjects. Additionally, there were no practice management subjects in which more than 75% felt adequately prepared. This suggests that there may be gaps in training for Army family physicians, particularly in family medicine and practice management areas (PUBMED:9290294).
2. Army family physicians reported high levels of satisfaction with being family physicians (92%) and military officers (74%). The variables associated with satisfaction included rank (positively associated) and percent time in patient care (negatively associated). This indicates that while there is satisfaction among Army family physicians, there may be aspects of their role or workload that could be addressed to improve satisfaction further (PUBMED:7501198).
3. The impact of the COVID-19 pandemic on Army families highlighted the importance of support systems for soldiers and their families, especially during times of crisis. The study found that household financial impact, changes in work situation due to childcare issues, and difficulty coping were associated with greater odds of screening positive for probable clinical depression and generalized anxiety. This underscores the need for targeted support and the evolution of military services to meet soldier and family needs in challenging times (PUBMED:37615551).
In conclusion, while Army family physicians generally feel prepared and satisfied with their roles, there are areas in training, particularly in family medicine subjects and practice management, that could be improved to better meet their needs. Additionally, support systems for Army families, especially during crises, are crucial for maintaining the behavioral health of soldiers and their families. |
Instruction: Inferior mesenteric arterial type II endoleaks after endovascular repair of abdominal aortic aneurysm: are they predictable?
Abstracts:
abstract_id: PUBMED:22333935
Prevention of type II endoleak by laparoscopic inferior mesenteric artery ligation. Abdominal aortic aneurysm repair by endovascular techniques has gained wide acceptance as a treatment option. A well-known potential complication of endovascular repair is endoleak. Specifically, type II endoleak, which is described as retrograde flow into the aneurysm sac through collateral vessels, can occur in up to 30% of patients. Certain preoperative factors can predict which patients may develop type II endoleak. This article describes laparoscopic inferior mesenteric artery ligation prior to endovascular abdominal aortic aneurysm repair as a viable treatment option in the prevention of type II endoleak.
abstract_id: PUBMED:35425973
Inferior mesenteric artery diameter and number of patent lumbar arteries as factors associated with significant type 2 endoleak after infrarenal endovascular aneurysm repair. Objectives: Our goal was to identify the inferior mesenteric artery diameter and number of patent lumbar arteries causing a significant type 2 endoleak to develop after infrarenal endovascular aneurysm repair.
Material And Methods: Included were patients who underwent infrarenal endovascular aneurysm repair between April 2002 and January 2017. Patients with an aneurysm involving the iliac arteries were excluded. Significant type 2 endoleak was defined as a type 2 endoleak observed after infrarenal endovascular aneurysm repair and accompanied by abdominal aneurysm growth of at least 5 mm during that time.
Results: A total of 277 patients were included. Mean follow-up was 38.9 (standard deviation 121.6) months. Immediately after infrarenal endovascular aneurysm repair, type 2 endoleaks occurred in 55 patients (20%), resolving spontaneously in 2 patients 6 months after infrarenal endovascular aneurysm repair. Thirty (10.8%) patients revealed a significant type 2 endoleak with aneurysm sac enlargement > 5 mm during follow-up, for which inferior mesenteric artery or lumbar artery coiling was performed. Mean time for coiling after primary infrarenal endovascular aneurysm repair was 25.4 (standard deviation 19.10) months. Twenty-three patients (8.3%) showed a non-significant type 2 endoleak during follow-up (no aneurysm sac enlargement). We found that the inferior mesenteric artery diameter and number of patent lumbar arteries were factors associated with a significant type 2 endoleak (odds ratio 1.755, P = 0.001; odds ratio 1.717, P < 0.001, respectively). Prior to endovascular aneurysm repair, the inferior mesenteric artery was patent in 212 (76.5%) patients; its median diameter measured 3 (0.5-3.8) mm. The median number of patent lumbar arteries was 3 (2-4). According to our receiver operating characteristic curve analysis, an inferior mesenteric artery diameter ≥3 mm (sensitivity 93.3%, specificity 65%) and ≥3 patent lumbar arteries (sensitivity 87.5%, specificity 43.6%) proved to be optimal cut-off values related to developing a significant type 2 endoleak. We therefore propose a composite score for the development of a significant type 2 endoleak [(inferior mesenteric artery diameter + patent lumbar arteries)/2].
Conclusions: Patients in whom the diameter of the inferior mesenteric artery is ≥ 3 mm and with ≥ 3 patent lumbar arteries carry a higher risk of developing significant type 2 endoleak after infrarenal endovascular aneurysm repair.
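A minimal sketch of how the composite score and cut-offs proposed above could be applied in practice; the function name, return structure, and the combined risk flag are illustrative assumptions, not part of the original study.

```python
# Illustrative sketch of the composite score proposed in PUBMED:35425973:
# (IMA diameter in mm + number of patent lumbar arteries) / 2,
# together with the reported cut-offs (IMA diameter >= 3 mm, >= 3 patent lumbar arteries).
# Names and the combined risk flag are hypothetical, not taken from the study.

def type2_endoleak_risk(ima_diameter_mm: float, patent_lumbar_arteries: int) -> dict:
    composite_score = (ima_diameter_mm + patent_lumbar_arteries) / 2
    meets_both_cutoffs = ima_diameter_mm >= 3.0 and patent_lumbar_arteries >= 3
    return {"composite_score": composite_score, "meets_both_cutoffs": meets_both_cutoffs}

# Example: a 3.2 mm IMA with 4 patent lumbar arteries meets both cut-offs.
print(type2_endoleak_risk(3.2, 4))
# {'composite_score': 3.6, 'meets_both_cutoffs': True}
```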
abstract_id: PUBMED:36143138
Effectiveness of Inferior Mesenteric Artery Embolization on Type II Endoleak-Related Complications after Endovascular Aortic Repair (EVAR): Systematic Review and Meta-Analysis. Type II endoleak is one of the most common and problematic complications after endovascular aneurysm repair. It has been suggested that the inferior mesenteric artery (IMA) embolization could prevent further adverse events and postoperative complications. This article is a systematic review and meta-analysis following PRISMA guidelines. The Medline, PubMed, Embase, and Cochrane databases were used to identify studies that investigated the effect of IMA embolization on the occurrence of type II endoleaks and secondary interventions in a group of patients with abdominal aortic aneurysm who underwent EVAR compared with results after EVAR procedure without embolization. A random effects meta-analysis was performed. Of 3510 studies, 6 studies involving 659 patients were included. Meta-analysis of all studies showed that the rate of secondary interventions was smaller in patients with IMA embolization (OR, 0.17; SE, 0.45; 95% CI, 0.07 to 0.41; p < 0.01; I2 = 0%). The occurrence of type II endoleaks was also smaller in the embolization group (OR, 0.37; SE, 0.21; 95% CI, 0.25 to 0.57; p < 0.01; I2 = 16.20%). This meta-analysis suggests that IMA embolization correlates with lower rates of type II endoleaks and secondary interventions.
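The pooled odds ratios above come from a random-effects meta-analysis. As a general illustration of that technique only (the study-level counts are not given here, so the numbers below are placeholders and do not reproduce the published analysis), a DerSimonian-Laird pooling of log odds ratios can be sketched as follows:

```python
import math

# Hypothetical per-study 2x2 counts: (events_embolized, n_embolized, events_control, n_control).
# Placeholder data only; NOT the studies from the meta-analysis above.
studies = [(3, 50, 10, 48), (5, 80, 14, 75), (2, 40, 7, 42)]

log_ors, variances = [], []
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c
    log_ors.append(math.log((a * d) / (b * c)))       # log odds ratio
    variances.append(1 / a + 1 / b + 1 / c + 1 / d)   # its approximate variance

# Fixed-effect (inverse-variance) pooling and Cochran's Q.
w = [1 / v for v in variances]
pooled_fe = sum(wi * lo for wi, lo in zip(w, log_ors)) / sum(w)
q = sum(wi * (lo - pooled_fe) ** 2 for wi, lo in zip(w, log_ors))

# DerSimonian-Laird estimate of the between-study variance tau^2.
k = len(studies)
tau2 = max(0.0, (q - (k - 1)) / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))

# Random-effects pooling.
w_re = [1 / (v + tau2) for v in variances]
pooled_re = sum(wi * lo for wi, lo in zip(w_re, log_ors)) / sum(w_re)
print(f"Pooled random-effects OR ~ {math.exp(pooled_re):.2f}")
```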
abstract_id: PUBMED:36120997
Impact of the Patency of Inferior Mesenteric Artery on 7-Year Outcomes After Endovascular Aneurysm Repair. Purpose: The impact of preoperative patent inferior mesenteric artery (IMA) on late outcomes following endovascular aneurysm repair (EVAR) remains unclear. This study aimed to investigate the specific influence of IMA patency on 7-year outcomes after EVAR.
Materials And Methods: In this retrospective cohort study, 556 EVARs performed for true abdominal aortic aneurysm cases between January 2006 and December 2019 at our institution were reviewed. Endovascular aneurysm repairs performed using a commercially available device with no type I or type III endoleak (EL) during follow-up and with follow-up ≥12 months were included. A total of 336 patients were enrolled in this study. The cohort was divided into the patent IMA group and the occluded IMA group according to preoperative IMA status. The late outcomes, including aneurysm sac enlargement, reintervention, and mortality rates, were compared between both groups using propensity-score-matched data.
Results: After propensity score matching, 86 patients were included in each group. The median follow-up period was 56 months (interquartile range: 32-94 months). The incidence of type II EL at discharge was 50% in the patent IMA group and 19% in the occluded IMA group (p<0.001). The type II EL from IMA and lumbar arteries was significantly higher in the patent IMA group than in the occluded IMA group (p<0.001 and p=0.002). The rate of freedom from aneurysm sac enlargement with type II EL was significantly higher in the occluded IMA group than in the patent IMA group (94% vs 69% at 7 years; p<0.001). The rate of freedom from reintervention was significantly higher in the occluded IMA group than in the patent IMA group (90% vs 74% at 7 years; p=0.007). Abdominal aortic aneurysm-related death and all-cause mortality did not significantly differ between groups (p=0.32 and p=0.34).
Conclusions: Inferior mesenteric artery patency could affect late reintervention and aneurysm sac enlargement but did not have a significant impact on mortality. Preoperative assessment and embolization of IMA might be an important factor for improvement in late EVAR outcomes.
Clinical Impact: The preoperative patency of the inferior mesenteric artery was significantly associated with a higher incidence of sac enlargement and reintervention with type II endoleak following endovascular aneurysm repair, even after adjustment for patient background. Preoperative assessment and embolization of inferior mesenteric artery might be an important factor for improvement in late EVAR outcomes.
abstract_id: PUBMED:15216873
Laparoscopic transperitoneal clipping of the inferior mesenteric artery for the management of type II endoleak after endovascular repair of an aneurysm. We report the case of a high risk patient with an abdominal infrarenal aortic aneurysm (AAA) who was treated by endovascular technique and the subsequent management of a type II endoleak by the laparoscopic approach. In this case, a 74-year-old woman with a 6-cm infrarenal AAA underwent endovascular repair using a bifurcated stent-graft device. Surveillance CT scan showed a persistent type II endoleak at 1 week and 3 months after the operation. Angiography confirmed retrograde flow from the inferior mesenteric artery (IMA). Attempted transarterial embolization of the IMA via the superior mesenteric artery was not successful. Laparoscopic transperitoneal IMA clipping was performed. Subsequent aortic duplex scan and CT scan confirmed complete elimination of the type II endoleak. We conclude that a combination of endovascular and laparoscopic procedures can be used to manage AAA successfully.
abstract_id: PUBMED:25216443
Laparoscopic ligation of inferior mesenteric artery and internal iliac artery for the treatment of symptomatic type II endoleak after endovascular aneurysm repair. We present a case undergoing successful laparoscopic ligation of the inferior mesenteric artery (IMA) and internal iliac artery (IIA) for the treatment of a symptomatic type II endoleak (T2E) after endovascular aneurysm repair (EVAR). The patient presented with abdominal and back pain 1 year after EVAR. Subsequent enhanced computed tomography scan showed aneurysm sac enlargement from 60 mm to 70 mm, and digital substraction angiography revealed a T2E caused by patent IMA and right IIA. Then the patient underwent successful laparoscopic ligation of the IMA and right IIA. Postprocedural angiogram demonstrated complete resolution of the type II endoleak, and no intraoperative complications occurred. Also, there was no remaining abdominal pain or back pain after the operation.
abstract_id: PUBMED:26319477
Is Inferior Mesenteric Artery Embolization Indicated Prior to Endovascular Repair of Abdominal Aortic Aneurysm? Type II endoleak is a common condition occurring after endovascular repair of abdominal aortic aneurysms (EVAR), and may result in aneurysm sac growth and/or rupture in a small number of patients. A prophylactic strategy of inferior mesenteric artery (IMA) embolization before EVAR has been advocated, however, the benefits of this strategy are controversial. A clinical vignette allows the authors to summarize the available data about this issue and discuss the possible benefits and risks of prophylactic IMA embolization before EVAR. The authors performed a meta-analysis of available data which showed that the pooled rate of type II endoleak after IMA embolization was 19.9% (95% CI 3.4-34.7%, I2 93%) whereas it was 41.4% (95% CI 30.4-52.3%, I2 76%) in patients without IMA embolization (5 studies including 596 patients: p < .0001, OR 0.369, 95% CI 0.22-0.61, I2 27%). Since treatment for type II endoleaks is needed in less than 20% of cases and this complication can be treated successfully in 60-70% of cases resulting in an aneurysm rupture risk of 0.9%, these data indicate that embolization of patent IMA may be of no benefit in patients undergoing EVAR.
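As a rough, back-of-the-envelope check on the pooled figures quoted above (treating the pooled rates as simple proportions, which only approximates the study-level meta-analysis), the odds ratio implied by endoleak rates of 19.9% with IMA embolization versus 41.4% without is

\[ \mathrm{OR} \approx \frac{0.199/(1-0.199)}{0.414/(1-0.414)} = \frac{0.248}{0.707} \approx 0.35, \]

which is broadly consistent with the reported pooled OR of 0.369 (95% CI 0.22-0.61).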
abstract_id: PUBMED:30515534
Type II Endoleak After Endovascular Aortic Aneurysm Repair Using the Endurant Stent Graft System for Abdominal Aortic Aneurysm with Occluded Inferior Mesenteric Artery. Purpose: To evaluate the incidence of type II endoleak (EL-II) and aneurysm enlargement after endovascular aneurysm repair (EVAR) using the Endurant stent graft in patients with abdominal aortic aneurysm (AAA) with occluded inferior mesenteric artery (IMA).
Materials And Methods: Between 2012 and 2017, 103 patients who underwent EVAR using the Endurant stent graft for AAA with occluded IMA (50 patients with prophylactic embolized IMA and 53 with spontaneous occluded IMA) were retrospectively reviewed. The incidence of EL-II and aneurysm enlargement was evaluated. Predictive factors for persistent EL-II were evaluated based on patient characteristics, preprocedural anatomical characteristics, intraprocedural details, and postprocedural complications.
Results: Incidence rates of early EL-II and persistent EL-II were 6.8% (7/103 patients) and 4.9% (5/103 patients), respectively. Aneurysm enlargement was found in 10 patients (9.7%), including all 5 patients with persistent EL-II, 3 with de novo EL-II, and 2 with no EL-II. The rates of freedom from aneurysm enlargement at 1, 2, and 3 years were 98.7%, 97.0%, and 93.1% for the group without persistent EL-II, and 80.0%, 60.0%, and 20.0% for the group with persistent EL-II (p < 0.001), respectively. The maximum aneurysm diameter (odds ratio (OR), 1.16; 95% confidence interval (CI), 1.01-1.34; p = 0.0362) and the number of patent lumbar arteries (OR, 2.72; 95% CI, 1.07-6.90; p = 0.0357) were predictive of persistent EL-II.
Conclusions: The incidence of EL-II after EVAR using the Endurant stent graft for AAA with occluded IMA was low, but most early EL-II persisted and resulted in aneurysm enlargement. Level of Evidence: Level 4, Case Series.
abstract_id: PUBMED:27688294
Long-term results of intra-arterial Onyx injection for type II endoleaks following endovascular aneurysm repair. Purpose: The aim of this paper is to report our experience of type II endoleak treatment after endovascular aneurysm repair with intra-arterial injection of the embolizing liquid material, Onyx liquid embolic system. Methods: From 2005 to 2012, we performed a retrospective review of 600 patients, who underwent endovascular repair of an abdominal aortic aneurysm. During this period, 18 patients were treated with Onyx for type II endoleaks. Principal findings: The source of the endoleak was the internal iliac artery in seven cases, inferior mesenteric artery in seven cases and lumbar arteries in four cases. Immediate technical success was achieved in all patients and no endoleak from the treated vessel recurred. During a mean follow-up of 19 months, no major morbidity or mortality occurred, and one-year survival was 100%. Conclusions: Treatment of type II endoleaks with Onyx is safe and effective over a significant time period.
abstract_id: PUBMED:31327603
The role of the inferior mesenteric artery in predicting secondary intervention for type II endoleak following endovascular aneurysm repair. Objective: Endovascular aneurysm repair (EVAR) is associated with a greater risk of graft-related complications and need for secondary interventions compared with open repair. Type II endoleak (EL-2) is the most common complication. We examined the hypothesis that a functionally occluded inferior mesenteric artery (IMA) before EVAR was associated with fewer secondary interventions for EL-2.
Methods: All nonruptured abdominal aortic aneurysms (AAA) treated by EVAR using U.S. Food and Drug Administration-approved endografts from January 2005 to December 2017 were retrospectively reviewed, including computed tomography angiograms. Preoperative patency of the IMA and any secondary interventions performed after the index EVAR procedure were recorded. A functionally occluded IMA was defined as one that was (1) chronically occluded or severely stenosed on preoperative imaging or (2) coil embolized before EVAR. Secondary interventions for persistent EL-2 were indicated when AAA sac diameter increased by more than 5 mm.
Results: The study cohort comprised 490 patients (84 women) with a mean age of 74.8 ± 8.2 years. The mean preoperative AAA diameter was 5.6 ± 0.9 cm. One hundred twenty-nine patients (26.3%) died during follow-up. The mean follow-up of survivors was 38 months. Types (prevalence) of endoleak were I (2.4%), II (18.9%), III (0.7%), IV (0.5%), and V (0.2%). Patients with a functionally occluded IMA underwent significantly fewer secondary interventions for EL-2 compared with patients with a patent IMA (2.6% vs 7.1%; P = .020). All secondary interventions in the functionally occluded IMA group involved the lumbar arteries (LA). When the IMA was patent, secondary interventions were equally distributed between the LA and IMA. Logistic regression confirmed that a functionally patent IMA was associated with a greater number of secondary interventions for EL-2 (odds ratio, 3.0; 95% confidence interval, 1.2-7.5; P = .025).
Conclusions: Patients with a functionally occluded IMA required significantly fewer secondary interventions for EL-2 after EVAR. In addition, the vessels intervened on were primarily LA. Among patients with a patent IMA, preoperative coil embolization may decrease secondary interventions and improve the long-term durability of EVAR.
Answer: Inferior mesenteric artery (IMA) type II endoleaks are a well-known complication following endovascular repair of abdominal aortic aneurysms (EVAR). They are described as retrograde flow into the aneurysm sac through collateral vessels and can occur in up to 30% of patients (PUBMED:22333935). Certain preoperative factors can predict which patients may develop type II endoleak. For instance, the diameter of the IMA and the number of patent lumbar arteries have been identified as factors associated with significant type II endoleak (PUBMED:35425973). Specifically, an IMA diameter ≥3 mm and ≥3 patent lumbar arteries have been proposed as optimal cut-off values related to developing a significant type II endoleak (PUBMED:35425973).
Moreover, the patency of the IMA has been shown to influence late reintervention and aneurysm sac enlargement, although it does not significantly impact mortality (PUBMED:36120997). Preoperative assessment and embolization of the IMA might be an important factor for improving late EVAR outcomes (PUBMED:36120997). Additionally, IMA embolization has been correlated with lower rates of type II endoleaks and secondary interventions (PUBMED:36143138).
In summary, type II endoleaks related to the IMA after EVAR are somewhat predictable based on preoperative factors such as the diameter of the IMA and the number of patent lumbar arteries. Preoperative embolization of the IMA may be beneficial in reducing the incidence of type II endoleaks and the need for secondary interventions (PUBMED:36143138; PUBMED:36120997; PUBMED:35425973). |
Instruction: Postoperative proton radiotherapy for localized and locoregional breast cancer: potential for clinically relevant improvements?
Abstracts:
abstract_id: PUBMED:19615828
Postoperative proton radiotherapy for localized and locoregional breast cancer: potential for clinically relevant improvements? Purpose: To study the potential reduction of dose to organs at risk (OARs) with intensity-modulated proton radiotherapy (IMPT) compared with intensity-modulated radiotherapy (IMRT) and three-dimensional conformal radiotherapy (3D-CRT) photon radiotherapy for left-sided breast cancer patients.
Methods And Materials: Comparative treatment-planning was performed using planning computed tomography scans of 20 left-sided breast cancer patients. For each patient, three increasingly complex locoregional volumes (planning target volumes [PTVs]) were defined: whole breast (WB) or chest wall (CW) = (PTV1), WB/CW plus medial-supraclavicular (MSC), lateral-supraclavicular (LSC), and level III axillary (AxIII) nodes = (PTV2) and WB/CW+MSC+LSC+AxIII plus internal mammary chain = (PTV3). For each patient, 3D-CRT, IMRT, and IMPT plans were optimized for PTV coverage. Dose to OARs was compared while maintaining target coverage.
Results: All the techniques met the required PTV coverage except the 3D-CRT plans for the PTV3 scenario. All 3D-CRT plans for PTV3 exceeded the left-lung V20 constraint. IMPT vs. 3D-CRT: significant dose reductions were observed for all OARs using IMPT for all PTVs. IMPT vs. IMRT: for PTV2 and PTV3, low-dose (V5) left-lung and cardiac exposure was reduced by a factor of >2.5, and cardiac doses (V22.5) were more than 20-fold lower with IMPT than with IMRT.
Conclusions: When complex-target irradiation is needed, 3D-CRT often compromises the target coverage and increases the dose to OARs; IMRT can provide better results but will increase the integral dose. The benefit of IMPT is based on improved target coverage and reduction of low doses to OARs, potentially reducing the risk of late-toxicity. These results indicate a potential role of proton-radiotherapy for extended locoregional irradiation in left breast cancer.
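For readers less familiar with the dose-volume shorthand used above (V5, V20, V22.5, Dmax, mean dose): Vx is simply the percentage of an organ's volume receiving at least x Gy. A minimal sketch of how such metrics are computed is shown below, assuming equal-volume voxels and using simulated dose values rather than planning-system output.

```python
import numpy as np

def dvh_metrics(dose_gy: np.ndarray, vx_thresholds=(5.0, 20.0, 22.5)):
    """Summarize a per-voxel dose array for one organ at risk.

    Returns mean dose, maximum dose (Dmax) and Vx values, where Vx is the
    percentage of voxels receiving at least x Gy (equal voxel volumes assumed).
    """
    metrics = {
        "mean_dose_gy": float(dose_gy.mean()),
        "dmax_gy": float(dose_gy.max()),
    }
    for x in vx_thresholds:
        metrics[f"V{x:g}_percent"] = float((dose_gy >= x).mean() * 100.0)
    return metrics

# Illustrative example: simulated voxel doses for a left lung (not real plan data)
rng = np.random.default_rng(0)
lung_dose = rng.gamma(shape=2.0, scale=4.0, size=50_000)  # skewed toward low doses
print(dvh_metrics(lung_dose))
```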
abstract_id: PUBMED:34068305
Future Perspectives of Proton Therapy in Minimizing the Toxicity of Breast Cancer Radiotherapy. The toxicity of radiotherapy is a key issue when analyzing the eligibility criteria for patients with breast cancer. In order to obtain better results, proton therapy is proposed because of the more favorable distribution of the dose in the patient's body compared with photon radiotherapy. Scientific groups have conducted extensive research into the improved efficacy and lower toxicity of proton therapy for breast cancer. Unfortunately, there is no complete insight into the potential reasons and prospects for avoiding undesirable results. Cardiotoxicity is considered challenging; however, researchers have not presented any realistic prospects for preventing it. We compared the clinical evidence collected over the last 20 years, providing the rationale for the consideration of proton therapy as an effective solution to reduce cardiotoxicity. We analyzed the parameters of the dose distribution (mean dose, Dmax, V5, and V20) in organs at risk, such as the heart, blood vessels, and lungs, using the following two irradiation techniques: whole breast irradiation and accelerated partial breast irradiation. Moreover, we presented the possible causes of side effects, taking into account biological and technical issues. Finally, we collected potential improvements for higher-quality prediction of toxic cardiac effects, such as biomarkers and model-based approaches, to give the full background of this complex issue.
abstract_id: PUBMED:29735190
Potential Morbidity Reduction With Proton Radiation Therapy for Breast Cancer. Proton radiotherapy confers significant dosimetric advantages in the treatment of malignancies that arise adjacent to critical radiosensitive structures. To date, these advantages have been most prominent in the treatment of pediatric and central nervous system malignancies, although emerging data support the use of protons among other anatomical sites in which radiotherapy plays an important role. With advances in the overall treatment paradigm for breast cancer, most patients with localized disease now exhibit long-term disease control and, consequently, may manifest the late toxicities of aggressive treatment. As a result, there is increasing emphasis on the mitigation of iatrogenic morbidity, with particular attention to heart and lung exposure in those receiving adjuvant radiotherapy. Indeed, recent landmark analyses have demonstrated an increase in significant cardiac events that is linked directly to low-dose radiation to the heart. Coupled with practice-changing trials that have expanded the indications for comprehensive regional nodal irradiation, there exists significant interest in employing novel technologies to mitigate cardiac dose while improving target volume coverage. Proton radiotherapy enjoys distinct physical advantages over photon-based approaches and, in appropriately selected patients, markedly improves both target coverage and normal tissue sparing. Here, we review the dosimetric evidence that underlies the putative benefits of proton radiotherapy, and further synthesize early clinical evidence that supports the efficacy and feasibility of proton radiation in breast cancer. Landmark, prospective randomized trials are underway and will ultimately define the role for protons in the treatment of this disease.
abstract_id: PUBMED:34888251
Impact of Preoperative vs Postoperative Radiotherapy on Overall Survival of Locally Advanced Breast Cancer Patients. Background: The treatment of locally advanced breast cancer (LABC) remains a difficult clinical problem. Postoperative radiotherapy is a conventional treatment for patients with LABC, whereas the effect of preoperative radiotherapy on LABC outcomes remains controversial. This study aimed to examine and compare overall survival (OS) in patients with LABC who underwent preoperative or postoperative radiotherapy.
Methods: This retrospective cohort study included 41,618 patients with LABC from the National Cancer Database (NCDB) between 2010 and 2014. We collected patients' demographic, clinicopathologic, treatment and survival information. Propensity scores were used to match patients who underwent preoperative radiotherapy with those who underwent postoperative radiotherapy. A Cox proportional hazards regression model was used to assess the association between variables and OS. The log-rank test was conducted to evaluate the difference in OS between groups.
Results: The estimated median follow-up of all included participants was 69.6 months (IQR: 42.84-60.22); 70.1 months (IQR: 46.85-79.97) for postoperative radiotherapy, 68.5 (IQR: 41.13-78.23) for preoperative radiotherapy, and 67.5 (IQR: 25.92-70.99) for no radiotherapy. The 5-year survival rate was 80.01% (79.56-80.47) for LABC patients who received postoperative radiotherapy, 64.08% (57.55-71.34) for preoperative radiotherapy, and 59.67% (58.60-60.77) for no radiotherapy. Compared with no radiation, patients receiving postoperative radiotherapy had a 38% lower risk of mortality (HR=0.62, 95%CI: 0.60-0.65, p<0.001), whereas those who received preoperative radiotherapy had no significant survival benefit (HR=0.88, 95%CI: 0.70-1.11, p=0.282). Propensity score matched analysis indicated that patients treated with preoperative radiotherapy had similar outcomes as those treated with postoperative radiotherapy (AHR=1.23, 95%CI: 0.88-1.72, p=0.218). Further analysis showed that in C0 (HR=1.45, 95%CI: 1.01-2.07, p=0.044) and G1-2 (AHR=1.74, 95%CI: 1.59-5.96, p=0.001) subgroup, patients receiving preoperative radiotherapy showed a worse OS than those who received postoperative radiotherapy.
Conclusions: Patients with LABC who underwent postoperative radiotherapy had improved overall survival, whereas no significant survival benefit was observed in patients receiving preoperative radiotherapy. Preoperative radiotherapy did not confer better survival than postoperative radiotherapy for LABC patients.
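The analysis pipeline described in this abstract, a propensity-score match followed by a Cox proportional hazards model, can be outlined in code. The sketch below runs on simulated data and assumes the scikit-learn and lifelines libraries; the column names, covariate set and greedy matching-with-replacement step are simplifications for illustration, not the authors' NCDB analysis.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

# Simulated cohort with hypothetical columns (not NCDB data):
# 'preop_rt' = 1 for preoperative RT, 0 for postoperative RT
rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "preop_rt": rng.integers(0, 2, n),
    "age": rng.normal(58, 10, n),
    "grade": rng.integers(1, 4, n),
    "time_months": rng.exponential(60, n),
    "event": rng.integers(0, 2, n),          # 1 = death observed
})

# 1) Propensity score: modelled probability of receiving preoperative RT
covariates = ["age", "grade"]
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["preop_rt"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

# 2) Greedy 1:1 nearest-neighbour matching on the propensity score
#    (with replacement, a simplification of the matching used in such studies)
treated = df[df["preop_rt"] == 1]
controls = df[df["preop_rt"] == 0].sort_values("ps").reset_index(drop=True)
idx = np.searchsorted(controls["ps"].to_numpy(), treated["ps"].to_numpy())
matched_controls = controls.iloc[np.clip(idx, 0, len(controls) - 1)]
matched = pd.concat([treated, matched_controls], ignore_index=True)

# 3) Cox proportional hazards model on the matched cohort; exp(coef) is the hazard ratio
cph = CoxPHFitter()
cph.fit(matched[["time_months", "event", "preop_rt"]],
        duration_col="time_months", event_col="event")
cph.print_summary()
```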
abstract_id: PUBMED:31477141
Advantage of proton-radiotherapy for pediatric patients and adolescents with Hodgkin's disease. Radiotherapy is frequently used in the therapy of lymphoma. Since lymphomas such as Hodgkin's disease frequently affect rather young patients, the induction of secondary cancer and other long-term adverse effects after irradiation are important issues to deal with. For mediastinal manifestations in particular, numerous organs and substructures at risk play a role. The heart, its coronary vessels and cardiac valves, the lungs, the thyroid and, in female patients, the breast tissue are among the most important organs at risk. In this study we investigated whether proton-radiotherapy might reduce the dose delivered to the organs at risk and thus minimize therapy-associated toxicity.
Methods: In this work we compared the dose delivered to the heart, its coronary vessels and valves, the lungs, the thyroid gland and the breast tissue by different volumetric photon plans and a proton plan, all calculated for a dose of 28.8 Gy (EURO-NET-PHL-C2). Target volumes were defined by F18-FDG PET-positive areas, following a modified involved-node approach. Data from ten young female patients with mediastinal lymphoma were evaluated. Three different modern volumetric IMRT (VMAT) photon plans were benchmarked against each other and against proton-irradiation concepts. For plan evaluation, conformity and homogeneity indices were calculated as suggested in ICRU 83. Target volume coverage as well as the dose to important organs at risk such as the heart with its substructures, the lungs, the breast tissue, the thyroid and the spinal cord were calculated and compared. For statistical evaluation, mean doses to organs at risk were compared with non-parametric Kruskal-Wallis tests with pairwise comparisons.
Results: Proton plans and three different volumetric photon plans were calculated. Proton irradiation results in significantly lower doses to the organs at risk. Median and mean doses could be decreased while PTV coverage remained comparable. Both conformity and homogeneity are slightly better for the proton plans. For several organs, the risk reduction for secondary malignancies was estimated using literature data as a reference. According to these literature-derived data, the risks of secondary breast cancer, secondary lung cancer and ischemic cardiac events in particular can be reduced significantly by using protons for radiotherapy of mediastinal lymphomas.
Conclusion: Proton irradiation for mediastinal Hodgkin lymphoma results in significantly lower doses to almost all organs at risk and is suitable for reducing long-term side effects in pediatric and adolescent patients.
abstract_id: PUBMED:33917818
Impact of Breast Size on Dosimetric Indices in Proton Versus X-ray Radiotherapy for Breast Cancer. Deep inspiration breath hold (DIBH) radiotherapy is a technique used to manage early stage left-sided breast cancer. This study compared dosimetric indices of patient-specific X-ray versus proton therapy DIBH plans to explore differences in target coverage, radiation doses to organs at risk, and the impact of breast size. Radiotherapy plans of sixteen breast cancer patients previously treated with DIBH radiotherapy were re-planned with hybrid inverse-planned intensity modulated X-ray radiotherapy (h-IMRT) and intensity modulated proton therapy (IMPT). The total prescribed dose was 40.05 Gy in 15 fractions for all cases. Comparisons between the clinical, h-IMRT, and IMPT plans evaluated doses to target volumes and organs at risk, and correlations between dose and breast size. Although no differences were observed in target volume coverage between techniques, h-IMRT and IMPT produced more even dose distributions, and IMPT delivered significantly less dose to all organs at risk than both X-ray techniques. A moderate negative correlation was observed between breast size and dose to the target in X-ray techniques, but not IMPT. Both h-IMRT and IMPT produced plans with more homogeneous dose distributions than forward-planned IMRT, and IMPT achieved significantly lower doses to organs at risk compared with the X-ray techniques.
abstract_id: PUBMED:35799256
Effect of postoperative radiotherapy in women with localized pure mucinous breast cancer after lumpectomy: a population-based study. Purpose: Pure mucinous breast cancer is a rare subtype of invasive breast cancer with favorable prognosis, in which the effect of postoperative radiotherapy remains unclear. We aimed to investigate the prognostic value of postoperative radiotherapy in women with localized pure mucinous breast cancer after lumpectomy.
Methods: We conducted a retrospective cohort study to compare the effectiveness of postoperative radiotherapy (RT) and omitting postoperative radiotherapy (non-RT) in patients with first primary T1-2N0M0 (T ≤ 3 cm) pure mucinous breast cancer who underwent lumpectomy between 1998 and 2015 using the Surveillance, Epidemiology, and End Results (SEER) database. Breast cancer-specific survival (BCSS) was compared between RT and non-RT groups using Kaplan-Meier method and Cox proportional hazards regression model. Propensity score matching (PSM) was carried out to balance cohort baselines. In addition, an exploratory analysis was performed to verify the effectiveness of RT in subgroup patients.
Results: Of 7832 eligible patients, 5352 (68.3%) underwent lumpectomy with postoperative RT and 2480 (31.7%) underwent lumpectomy without postoperative RT. The median follow-up duration was 92 months. The median age was 66 years in the RT group and 76 years in the non-RT group. The 15-year BCSS was 94.39% (95% CI, 93.08% to 95.35%) in the RT group versus 91.45% (95% CI, 88.93% to 93.42%) in the non-RT group (P < 0.001). The adjusted hazard ratio for BCSS was 0.64 (95% CI, 0.49 to 0.83; P = 0.001) for the RT group versus the non-RT group. After propensity score matching, similar results were obtained. Adjuvant RT reduced the 15-year risk of breast cancer death from 7.92% to 6.15% (P = 0.039), and the adjusted hazard ratio for BCSS was 0.66 (95% CI, 0.47 to 0.92; P = 0.014) for the RT group versus the non-RT group. The benefit of RT was consistent across patient subgroups.
Conclusion: Among women with T1-2N0M0 (tumor size ≤ 3 cm) pure mucinous breast cancer, the addition of RT after lumpectomy was significantly associated with a reduced incidence of breast cancer death compared with non-RT, and the magnitude of benefit may be modest. This suggests that postoperative RT is recommended in the treatment of localized pure mucinous breast cancer.
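Kaplan-Meier estimation and the log-rank test, as used in this SEER analysis, can be reproduced in outline with the lifelines library. Everything below is simulated and the effect size is invented purely so the code runs end to end; this is a minimal sketch, not the study's cohort or results.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Simulated cohort (illustrative only): 'rt' = 1 if postoperative radiotherapy given,
# 'months' = follow-up time, 'bc_death' = 1 if breast cancer death observed
rng = np.random.default_rng(2)
n = 1000
rt = rng.integers(0, 2, n)
months = rng.exponential(120, n)
bc_death = rng.binomial(1, np.where(rt == 1, 0.06, 0.09))

km = KaplanMeierFitter()
for label, mask in [("RT", rt == 1), ("no RT", rt == 0)]:
    km.fit(months[mask], event_observed=bc_death[mask], label=label)
    surv_15y = float(km.survival_function_at_times(180.0).iloc[0])
    print(f"{label}: estimated 15-year BCSS = {surv_15y:.3f}")

result = logrank_test(months[rt == 1], months[rt == 0],
                      event_observed_A=bc_death[rt == 1],
                      event_observed_B=bc_death[rt == 0])
print("log-rank p-value:", result.p_value)
```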
abstract_id: PUBMED:12631225
Localized scleroderma in a woman irradiated at two sites for endometrial and breast carcinoma: a case history and a review of the literature. Localized scleroderma is an uncommon side-effect of radiotherapy. We report a unique case with multiple asynchronous primary malignant tumors, which developed localized scleroderma after radiotherapy. A 67-year-old healthy woman received external irradiation for endometrial cancer. Three years later she underwent partial mastectomy and postoperative radiotherapy because of breast cancer. A progressive fibrosis developed in the breast. Within 12 months similar skin reactions were also seen in the irradiated abdominal wall and on both lower extremities. Biopsies revealed scleroderma lesions of breast and abdominal wall and scleroderma-like lesions on the legs. The lesions dissolved partially without generalization. This case, in contrast to most of the cases previously reported in the literature, illustrates not only lesions outside of radiation ports, but also that radiotherapy given to one cancer site can affect distant skin at a previously irradiated cancer site. When a localized scleroderma is diagnosed, further curative radiotherapy should be cautiously prescribed irrespective of cancer site.
abstract_id: PUBMED:37970351
Radiation-induced skin and heart toxicity in patients with breast cancer treated with adjuvant proton radiotherapy: a comparison with photon radiotherapy. This study aimed to investigate the dose parameters and incidence of radiotherapy (RT)-associated toxicity in patients with left breast cancer (LBC) treated with proton-RT, compared with photon-RT. We collected data from 111 patients with LBC who received adjuvant RT in our department between August 2021 and March 2023. Among these patients, 24 underwent proton-RT and 87 underwent photon-RT. In addition to the dosimetric analysis for organs at risk (OARs), we measured NT-proBNP levels before and after RT. Our data showed that proton-RT improved dose conformity and reduced doses to the heart and lungs and was associated with a lower rate of increased NT-proBNP than did photon-RT. Regarding skin toxicity, the Dmax for 1 c.c. and 10 c.c. and the average dose to the skin-OAR had predictive roles in the risk of developing radiation-induced dermatitis. Although pencil beam proton-RT with skin optimization had a dose similar to that of skin-OAR compared with photon-RT, proton-RT still had a higher rate of radiation dermatitis (29%) than did photon RT (11%). Using mice 16 days after irradiation, we demonstrated that proton-RT induced a greater increase in interleukin 6 and transforming growth factor-β1 levels than did photon-RT. Furthermore, topical steroid ointment reduced the inflammatory response and severity of dermatitis induced by RT. In conclusion, we suggest that proton-RT with skin optimization spares high doses to OARs with acceptable skin toxicity. Furthermore, prophylactic topical steroid treatment may decrease radiation dermatitis by alleviating proton-induced inflammatory responses in vivo.
abstract_id: PUBMED:36186685
Incidence of clinical lymphedema in breast cancer patients treated with adjuvant proton-based radiotherapy. Background: The purpose of this study was to evaluate the incidence of clinical lymphedema following adjuvant proton-based radiotherapy (RT) in breast cancer (BC) patients.
Materials And Methods: We performed a retrospective review of our institutional database to identify BC patients treated with adjuvant proton-based RT. Patients receiving re-irradiation for a BC recurrence or those with a history of ipsilateral chest wall radiation were excluded. Clinical lymphedema was determined by documentation in the chart at baseline and during follow-up.
Results: We identified 28 patients treated with adjuvant proton-based RT who met the study criteria. Median age at diagnosis was 45 (range, 24-75). Eleven patients (39%) underwent mastectomy, and fourteen (50%) underwent axillary lymph node dissection (ALND). Median number of LNs removed was 6 (range, 1-28). Nineteen patients (68%) received neoadjuvant chemotherapy. Median whole breast/chest wall dose delivered was 50 Gy (range, 44-54.0 Gy). Target volumes included the axillary and supraclavicular lymph nodes in all patients and internal mammary lymph nodes in 27 (96%) patients. Mean dose to the axilla was 49.7 Gy, and mean dose to 95% of the axillary volume (D95) was 46.3 Gy (94% of prescription dose). Mean dose to supraclavicular (SCV) volume was 47.7 Gy, and D95 was 44.1 Gy (91% of prescription dose). Grade 3 dermatitis occurred in 14% of patients. Five patients (18%) had clinical lymphedema, 4 from the ALND subset (n = 14).
Conclusions: The incidence of clinical lymphedema after proton-based RT is comparable to rates reported with photon-based RT with comprehensive nodal coverage.
Answer: Postoperative proton radiotherapy for localized and locoregional breast cancer shows potential for clinically relevant improvements, particularly in reducing the dose to organs at risk (OARs) compared with conventional photon radiotherapy techniques such as intensity-modulated radiotherapy (IMRT) and three-dimensional conformal radiotherapy (3D-CRT) (PUBMED:19615828). Proton therapy's more favorable dose distribution in the patient's body is proposed to result in improved efficacy and lower toxicity, with a specific emphasis on reducing cardiotoxicity, which is a significant concern in breast cancer radiotherapy (PUBMED:34068305).
Proton radiotherapy has distinct physical advantages over photon-based approaches, offering better target coverage and normal tissue sparing, which is particularly important given the long-term disease control in breast cancer patients and the consequent risk of late toxicities from aggressive treatment (PUBMED:29735190). The dosimetric benefits of proton therapy may lead to a reduction in the risk of secondary malignancies and other long-term side effects, such as cardiac events linked to low-dose radiation to the heart (PUBMED:31477141).
Furthermore, studies have shown that proton therapy can achieve significantly lower doses to OARs compared with X-ray techniques, regardless of breast size, and without compromising target volume coverage (PUBMED:33917818). This suggests that proton therapy could be particularly beneficial for patients with larger breasts, in whom target dose tends to decrease with X-ray techniques, whereas proton plans showed no such correlation.
In terms of clinical outcomes, while the impact of preoperative versus postoperative radiotherapy on overall survival in locally advanced breast cancer (LABC) patients remains controversial, postoperative radiotherapy has been associated with improved overall survival compared to no radiotherapy, and preoperative radiotherapy did not show a significant survival benefit over postoperative radiotherapy (PUBMED:34888251).
Additionally, for specific subtypes of breast cancer such as pure mucinous breast cancer, postoperative radiotherapy after lumpectomy has been associated with a reduced incidence of breast cancer death, suggesting that it should be considered in the treatment of localized pure mucinous breast cancer (PUBMED:35799256).
In conclusion, postoperative proton radiotherapy for localized and locoregional breast cancer has the potential for clinically relevant improvements, particularly in reducing toxicity to OARs and possibly improving long-term outcomes. However, the definitive role of protons in the treatment of breast cancer will be determined by the results of ongoing prospective randomized trials (PUBMED:29735190). |
Instruction: Histiocytic sarcoma - targeted therapy: novel therapeutic options?
Abstracts:
abstract_id: PUBMED:22846978
Histiocytic sarcoma - targeted therapy: novel therapeutic options? A series of 4 cases. Background: Histiocytic sarcoma (HS) is a rare but highly aggressive disease. The cancer-specific survival of patients with HS is short and only limited response to conventional chemotherapy or radiation therapy is seen. Some data from single case reports have suggested efficacy for high-dose chemotherapy and autologous/allogeneic stem cell transplantation.
Case Report: We report on 4 cases of HS, and demonstrate that different druggable receptors are expressed on HS. Using immunohistochemistry, we detected the expression of platelet-derived growth factor receptor, vascular endothelial growth factor receptor and epidermal growth factor receptor, which are all well-known targets for novel targeted agents. Based on the marker profile, different novel targeted therapies including imatinib, sorafenib and bevacizumab were applied to the patients. We observed a varying clinical course for each patient.
Conclusion: In our case series, we demonstrated that different receptors, which represent potential targets for novel drugs, are expressed on HS tumor cells. For a definitive assessment of the efficacy of these agents a prospective case study of a larger number of patients should be performed.
abstract_id: PUBMED:36969238
Case report: Targeting the PD-1 receptor and genetic mutations validated in primary histiocytic sarcoma with hemophagocytic lymphohistiocytosis. Histiocytic sarcoma (HS) is a rare hematological malignancy with limited treatment options, and it is also prone to complications such as hemophagocytic lymphohistiocytosis (HLH) in the later stages of the disease, leading to difficulties in treatment and poor prognosis. This highlights the importance of developing novel therapeutic agents. Herein, we present a case of a 45-year-old male patient who was diagnosed with PD-L1-positive HS with HLH. The patient was admitted to our hospital with recurrent high fever, multiple skin rashes with pruritus throughout the body and enlarged lymph nodes. Subsequently, pathological biopsy of the lymph nodes revealed high expression of CD163, CD68, S100, Lys and CD34 in the tumor cells and no expression of CD1a and CD207, confirming this rare clinical diagnosis. Given the low remission rate with conventional treatment in this disease, the patient was administered sintilimab (an anti-programmed cell death 1 [anti-PD-1] monoclonal antibody) at 200 mg/d combined with a first-line chemotherapy regimen for one cycle. Further analysis of the pathological biopsy by next-generation gene sequencing led to the addition of targeted therapy with chidamide. After one cycle of combination therapy (chidamide+sintilimab, abbreviated as CS), the patient achieved a favorable response. The patient showed remarkable improvement in general symptoms and laboratory results (e.g., elevated indicators of inflammation); although the clinical benefit was not durable, he survived one more month after stopping treatment on his own because of financial difficulty. Our case suggests that a PD-1 inhibitor coupled with targeted therapy might constitute a potential therapeutic option for primary HS with HLH.
abstract_id: PUBMED:33904632
Histiocytic and Dendritic Cell Sarcomas of Hematopoietic Origin Share Targetable Genomic Alterations Distinct from Follicular Dendritic Cell Sarcoma. Background: Histiocytic and dendritic cell neoplasms are a diverse group of tumors arising from monocytic or dendritic cell lineage. Whereas the genomic features for Langerhans cell histiocytosis and Erdheim-Chester disease have been well described, other less common and often aggressive tumors in this broad category remain poorly characterized, and comparison studies across the World Health Organization diagnostic categories are lacking.
Methods: Tumor samples from a total of 102 patient cases within four major subtypes of malignant histiocytic and dendritic cell neoplasms, including 44 follicular dendritic cell sarcomas (FDCSs), 41 histiocytic sarcomas (HSs), 7 interdigitating dendritic cell sarcomas (IDCSs), and 10 Langerhans cell sarcomas (LCSs), underwent hybridization capture with analysis of up to 406 cancer-related genes.
Results: Among the entire cohort of 102 patients, CDKN2A mutations were most frequent across subtypes and made up 32% of cases, followed by TP53 mutations (22%). Mitogen-activated protein kinase (MAPK) pathway mutations were present and enriched among the malignant histiocytosis (M) group (HS, IDCS, and LCS) but absent in FDCS (72% vs. 0%; p < .0001). In contrast, NF-κB pathway mutations were frequent in FDCSs but rare in M group histiocytoses (61% vs. 12%; p < .0001). Tumor mutational burden was significantly higher in M group histiocytoses as compared with FDCSs (median 4.0/Mb vs. 2.4/Mb; p = .012). We also describe a pediatric patient with recurrent secondary histiocytic sarcoma treated with targeted therapy and interrogated by molecular analysis to identify mechanisms of therapeutic resistance.
Conclusion: A total of 42 patient tumors (41%) harbored pathogenic mutations that were potentially targetable by approved and/or investigative therapies. Our findings highlight the potential value of molecular testing to enable precise tumor classification, identify candidate oncogenic drivers, and define personalized therapeutic options for patients with these aggressive tumors.
Implications For Practice: This study presents comprehensive genomic profiling results on 102 patient cases within four major subtypes of malignant histiocytic and dendritic cell neoplasms, including 44 follicular dendritic cell sarcomas (FDCSs), 41 histiocytic sarcomas (HSs), 7 interdigitating dendritic cell sarcomas (IDCSs), and 10 Langerhans cell sarcomas (LCSs). MAPK pathway mutations were present and enriched among the malignant histiocytosis (M) group (HS, IDCS, and LCS) but absent in FDCSs. In contrast, NF-κB pathway mutations were frequent in FDCSs but rare in M group histiocytosis. A total of 42 patient tumors (41%) harbored pathogenic mutations that were potentially targetable by approved and/or investigative therapies.
abstract_id: PUBMED:30135215
Targeting MEK in a Translational Model of Histiocytic Sarcoma. Histiocytic sarcoma in humans is an aggressive orphan disease with a poor prognosis as treatment options are limited. Dogs are the only species that spontaneously develops histiocytic sarcoma with an appreciable frequency, and may have value as a translational model system. In the current study, high-throughput drug screening utilizing histiocytic sarcoma cells isolated from canine neoplasms identified these cells as particularly sensitive to a MEK inhibitor, trametinib. One of the canine cell lines carries a mutation in PTPN11 (E76K), and another one in KRAS (Q61H), which are associated with the activation of oncogenic MAPK signaling. Both mutations were previously reported in human histiocytic sarcoma. Trametinib inhibited sensitive cell lines by promoting cell apoptosis, indicated by a significant increase in caspase 3/7. Furthermore, in vitro findings were successfully recapitulated in an intrasplenic orthotopic xenograft mouse model, which represents a disseminated aggressive form of histiocytic sarcoma. Mice with histiocytic sarcoma xenograft neoplasms that were treated with trametinib had significantly longer survival times. Target engagement was validated as activity of ERK, downstream of MEK, was significantly downregulated in neoplasms of treated mice. Additionally, trametinib was found in plasma and neoplastic tissues within projected therapeutic levels. These findings demonstrate that in dogs, histiocytic sarcoma may be associated with a dysfunctional MAPK pathway, at least in some cases, and may be effectively targeted through MEK inhibition. Clinical trials to test safety and efficacy of trametinib in dogs with histiocytic sarcoma are warranted, and may provide valuable translational information to similar diseases in humans.
abstract_id: PUBMED:35494913
Partial Response to Small Molecule Inhibition in a Case of Anaplastic Large Cell Lymphoma. In the era of personalized medicine, small-molecule inhibitors have become key to targeting many malignancies. Multiple hematologic malignancies are driven by small-molecule pathways that are seemingly ripe for such targeting. In this case report, we present a patient who was treated with a mitogen-activated extracellular signal-regulated kinase (MEK) inhibitor for what was originally diagnosed as a histiocytic sarcoma. Re-biopsy ultimately revealed an anaplastic lymphoma kinase (ALK)-negative anaplastic large cell lymphoma (ALCL), but his disease initially showed a remarkable response to MEK inhibition. This case illustrates both the importance of obtaining high-quality biopsy specimens for diagnostic and molecular analysis as well as the need for further research into the molecular drivers of T-cell lymphomas that may be amenable to targeted therapies.
abstract_id: PUBMED:26110571
Disseminated histiocytoses biomarkers beyond BRAFV600E: frequent expression of PD-L1. The histiocytoses are rare tumors characterized by the primary accumulation and tissue infiltration of histiocytes and dendritic cells. Identification of the activating BRAFV600E mutation in Erdheim-Chester disease (ECD) and Langerhans cell histiocytosis (LCH) cases provided the basis for the treatment with BRAF and/or MEK inhibitors, but additional treatment options are needed. Twenty-four cases of neoplastic histiocytic diseases [11 extrapulmonary LCH, 4 ECD, 4 extranodal Rosai-Dorfman disease (RDD), 3 follicular dendritic cell sarcoma (FDCS), 1 histiocytic sarcoma (HS) and 1 blastic plasmacytoid dendritic cell neoplasm (BPDCN)] were analyzed using immunohistochemical and mutational analysis in search of biomarkers for targeted therapy. BRAF V600E mutations were detected in 4/11 LCH and 4/4 ECD cases. A pathogenic PTEN gene mutation and loss of PTEN protein expression were identified in the case of HS. Increased expression of PD-L1 (≥2+/≥5%) was seen in 3/4 ECD, 7/8 LCH, 3/3 FDCS and 1/1 HS, with overall 81% concordance between 2 antibodies used in the study (SP142 vs. MAB1561 clone). These results show for the first time significant expression of the PD-L1 immune checkpoint protein in these disorders, which may provide rationale for addition of immune check-point inhibitors in treatment of disseminated and/or refractory histiocytoses.
abstract_id: PUBMED:32212266
PTPN11 mutations in canine and human disseminated histiocytic sarcoma. In humans, histiocytic sarcoma (HS) is an aggressive cancer involving histiocytes. Its rarity and heterogeneity explain that treatment remains a challenge. Sharing high clinical and histopathological similarities with human HS, the canine HS is conversely frequent in specific breeds and thus constitutes a unique spontaneous model for human HS to decipher the genetic bases and to explore therapeutic options. We identified sequence alterations in the MAPK pathway in at least 63.9% (71/111) of HS cases with mutually exclusive BRAF (0.9%; 1/111), KRAS (7.2%; 8/111) and PTPN11 (56.75%; 63/111) mutations concentrated at hotspots common to human cancers. Recurrent PTPN11 mutations are associated to visceral disseminated HS subtype in dogs, the most aggressive clinical presentation. We then identified PTPN11 mutations in 3/19 (15.7%) human HS patients. Thus, we propose PTPN11 mutations as key events for a specific subset of human and canine HS: the visceral disseminated form. Finally, by testing drugs targeting the MAPK pathway in eight canine HS cell lines, we identified a better anti-proliferation activity of MEK inhibitors than PTPN11 inhibitors in canine HS neoplastic cells. In combination, these results illustrate the relevance of naturally affected dogs in deciphering genetic mechanisms and selecting efficient targeted therapies for such rare and aggressive cancers in humans.
abstract_id: PUBMED:33012982
Molecular diagnosis using RNAscope in-situ hybridization in canine malignancies. Immunohistochemistry has been used extensively to evaluate protein expression in clinical and research settings. However, immunohistochemistry is not always successful in veterinary medicine due to the lack of reliable antibody options, poor tissue preservation, labor-intensive staining, and antigen-retrieval optimization processes. RNAscope in-situ hybridization (ISH) is a powerful technology that uses a specific sequence probe to identify targeted mRNA. In this study, we demonstrate RNAscope ISH in 4 common canine malignancies, which are traditionally diagnosed by histopathology and immunohistochemistry. Probes were designed for commonly targeted mRNA markers of neoplastic tumors; these included c-kit in mast cell tumor, microphthalmia-associated transcription factor in malignant melanoma, ionized calcium-binding adapter molecule-1 in histiocytic sarcoma, and alkaline phosphatase in osteosarcoma. A strong staining signal was obtained by these 4 targets in each canine malignancy. These results support the use of RNAscope ISH for definitive diagnosis in canine malignancies.
abstract_id: PUBMED:30185195
Primary histiocytic sarcoma of the central nervous system: a case report with platelet derived growth factor receptor mutation and PD-L1/PD-L2 expression and literature review. Background: Histiocytic sarcoma (HS) is an aggressive malignant neoplasm. HS in the central nervous system is exceptionally rare and associated with a poor prognosis. This report documents a case of primary HS of the central nervous system with treatment including surgery, radiotherapy, and chemotherapy.
Case Presentation: Our patient was a 47 year old female presenting with progressive ataxia, headaches, imbalance, nausea, vomiting, and diplopia. MRI showed a heterogeneously enhancing lesion approximately 2.9 × 3.0 × 2.3 cm centered upon the cerebellar vermis with mild surrounding vasogenic edema and abnormal enhancement of multiple cranial nerves. The patient underwent surgical debulking, which revealed histiocytic sarcoma with grossly purulent drainage. Staging revealed diffuse leptomeningeal involvement, primarily involving the brain and lower thoracic and lumbar spine. She underwent adjuvant radiotherapy to the brain and lower spine and was started on high dose methotrexate. However, she experienced progressive disease in the cervical and thoracic spine as well as pulmonary involvement. Genomic sequencing of her tumor showed a mutation in the platelet-derived growth factor receptor A (p.V0681) which could be targeted with Dasatinib. However, she did not tolerate Dasatinib and she succumbed to progressive disseminated disease eight months from original diagnosis. Our pathologic evaluation also revealed expression of PD-L1 and PD-L2 by tumor cells raising the potential therapeutic role for immune checkpoint inhibition.
Conclusions: This case provides an example of effective CNS control with resection and moderate doses of radiation therapy. A review of the literature confirms aggressive multidisciplinary treatment is the most effective treatment against this disease. In addition, genomic sequencing may play an important role in determining new therapeutic options. However, CNS histiocytic sarcoma remains an aggressive disease with a propensity for early widespread dissemination and few long term survivors.
abstract_id: PUBMED:31807922
Subsequent development of histiocytic sarcoma and follicular lymphoma: cytogenetics and next-generation sequencing analyses provide evidence for transdifferentiation of early common lymphoid precursor-a case report and review of literature. Histiocytic sarcoma (HS) is a rare aggressive hematologic neoplasm that can be associated with low-grade B cell lymphoma. The development of both neoplasms is currently attributed to a transdifferentiation mechanism, but the process remains elusive. We report the case of a 65-year-old patient with synchronous development of peritoneal/abdominal HS and grade 1-2 follicular lymphoma (FL). Cytogenetic analysis and targeted next-generation sequencing of both FL and HS tumors identified common genomic alterations such as an IGH-BCL2 rearrangement, CREBBP and KMT2D mutations, and aberrations of chromosomes 9q and 19q. However, only the HS tumor had a KRAS mutation, while the lymph node involved by FL harbored a TNFAIP3 mutation, and both tumors also showed distinct chromosomal alterations. This report strengthens the hypothesis of a common lymphoid progenitor which accumulates genetic alterations leading to two different hematologic malignant diseases with significantly distinct prognoses.
Answer: Histiocytic sarcoma (HS) is a rare and highly aggressive disease with limited treatment options and a poor prognosis. Conventional chemotherapy and radiation therapy have shown limited response, and there is a need for novel therapeutic strategies (PUBMED:22846978). Recent case reports and studies have highlighted the potential of targeted therapies in the treatment of HS.
Immunohistochemical analysis has revealed the expression of various druggable receptors on HS tumor cells, such as platelet-derived growth factor receptor (PDGFR), vascular endothelial growth factor receptor (VEGFR), and epidermal growth factor receptor (EGFR), suggesting the potential for targeted therapy with agents like imatinib, sorafenib, and bevacizumab (PUBMED:22846978). Additionally, PD-L1 expression has been observed in HS, indicating the possibility of using immune checkpoint inhibitors like sintilimab, an anti-PD-1 monoclonal antibody, in combination with targeted therapy such as chidamide, which has shown favorable responses in some cases (PUBMED:36969238).
Genomic profiling of HS has identified mutations in the MAPK pathway, which are enriched among malignant histiocytosis, and a significant number of patient tumors harbor pathogenic mutations that could be targeted by approved or investigative therapies (PUBMED:33904632). In particular, MEK inhibitors like trametinib have shown promising results in preclinical models and could be effective in cases of HS with MAPK pathway dysregulation (PUBMED:30135215).
Furthermore, PTPN11 mutations have been identified in both canine and human HS, particularly in the aggressive visceral disseminated form, and MEK inhibitors have demonstrated better anti-proliferation activity than PTPN11 inhibitors in canine HS cell lines (PUBMED:32212266). This suggests that targeting the MAPK pathway could be a viable therapeutic strategy for HS.
In summary, targeted therapies represent a novel and promising approach for the treatment of HS. These include targeting specific receptors expressed on tumor cells, exploiting genomic alterations, and utilizing immune checkpoint inhibitors. However, due to the rarity of the disease, larger prospective studies are needed to definitively assess the efficacy of these agents (PUBMED:22846978). |
Instruction: Lumbar chemical sympathectomy in peripheral vascular disease: does it still have a role?
Abstracts:
abstract_id: PUBMED:19237331
Lumbar chemical sympathectomy in peripheral vascular disease: does it still have a role? Introduction: Lumbar chemical sympathectomy (LCS) is used principally in inoperable peripheral vascular disease (PVD) to alleviate symptoms of rest pain and as an adjunct to other treatments for ulcers. No guidelines currently exist in the UK for its use in PVD. The aim of this study was to evaluate the role of LCS with regard to indications and outcomes in the UK and Irish vascular surgical practice.
Methods: Specifically designed questionnaires were sent to Vascular Surgical Society members. The questions related to their current use of LCS including indications, outcome parameters, use in diabetics and complications encountered.
Results: Four hundred and ninety postal questionnaires were sent out and 242 responses (49%) were received. Seventy five percent of the respondents (n=183) felt that LCS had a role in current practice. Seventy eight percent (n=144) performed less than 10 procedures per year and 3% (n=5) more than 20 per year. Eighty percent (n=145) were performed by anaesthetists, 12% (n=23) by radiologists and 8% (n=15) by surgeons. Inoperable peripheral vascular disease with rest pain was the main indication in over 80% of responses with 27% using it for the treatment of ulcers. Only 21% used LCS in diabetics. Clinical improvement was used to assess the outcome following LCS in 96% of responses. Complications included neuralgia, ureteric damage and paraplegia following inadvertent extradural injection.
Conclusion: Although no clear guidance exists for the use of LCS in PVD, the majority of respondents continue to use it. Indications and outcomes are documented in this study of UK and Irish vascular surgical practice.
abstract_id: PUBMED:12060154
Computed tomography fluoroscopy-guided chemical lumbar sympathectomy: simple, safe and effective. Demographic, clinical and laboratory data were retrospectively collected from records of 146 cases of CT fluoroscopy-guided chemical lumbar sympathectomy for the palliation of inoperable peripheral vascular disease (PVD) between January 1997 and August 1999. Of these, 16% had claudication, 39% had rest pain and 44% had ischaemic ulcers or gangrene. Seventy-three percent of elective cases were outpatients. At 3 months, 27 cases were lost to follow up, leaving 119 cases. Within 3 months, improvement, defined as doubling of the walking distance, cessation of rest pain or healing of ulcers, occurred in 30.3% of cases. No change was observed in 45.4% of cases and 24.3% of cases deteriorated. Patients with ulcers or gangrene had significantly poorer results than those without any ischaemic lesions, as only 19% versus 39% of patients improved (P < 0.05). The presence of hypertension, diabetes mellitus, hyperlipidaemia and smoking had no value in predicting clinical outcome (P > 0.05). There were no major complications noted. CT fluoroscopy-guided chemical lumbar sympathectomy is safe and effective, with a complication rate of less than 1%, and efficacy of at least 30% measured within 3 months. It is a simple and minimally invasive procedure, easily performed on an outpatient basis. CT fluoroscopy-guided chemical lumbar sympathectomy should be considered for all patients in the early stages of inoperable PVD.
abstract_id: PUBMED:17334418
Fluoroscopy-guided chemical lumbar sympathectomy for lower limb ischaemic ulcers. The purpose of this study was to assess the effectiveness of chemical lumbar sympathectomy in relieving pain and healing ischaemic ulcers in patients with peripheral vascular diseases. Thirty-one consecutive patients with ischaemic/gangrenous lower limb ulcers referred to the BPKIHS Pain Clinic were observed prospectively after chemical lumbar sympathectomy using the modified Reid technique, with 3 ml of 70% alcohol each at the L2 and L3 levels under fluoroscopic guidance. Pain relief and ulcer healing were noted at follow-up. Patients' ability to resume at least part of their day-to-day work was also noted at the three-month follow-up. Of the 31 patients, 16 had Buerger's disease and the remaining 15 had non-Buerger's ischaemic ulcers, of whom 7 were diabetic. There was a significant decrease in the pain score, from a mean ± SD of 8.3 ± 0.9 (pre-block) to 4.2 ± 2.5 (3 days post-block), on a 0-10 numerical analogue scale (NAS). By 3 months, 6 patients had declined follow-up; 19 (76%) of the remaining 25 patients reported pain relief, 18 (72%) reported healing or a decrease in ulcer size, and 11 (44%) were able to resume at least part of their usual work. Minor complications occurred in 5 patients and amputation was needed in 6 patients. Fluoroscopy-guided chemical lumbar sympathectomy is feasible, safe and effective in relieving pain and promoting ulcer healing in patients with ischaemic lower limb ulcers due to both Buerger's disease and non-Buerger's peripheral vascular diseases.
abstract_id: PUBMED:28051823
Lumbar sympathetic chain: anatomical variation and clinical perspectives. The sympathetic and parasympathetic nervous systems constitute the autonomic nervous system, which regulates the internal state of the body and maintains equilibrium. The sympathetic chain forms a definitive anatomic entity that is quite variable with respect to its position and the number of ganglia. The sympathetic nervous system causes vasoconstriction and thus forms the basis of lumbar sympathetic surgery performed in patients with peripheral vascular diseases. Anatomic variations in this region are therefore of great importance to operating surgeons and consulting radiologists. In the present case, the rami communicantes on either side of the lumbar sympathetic chain crossed the common iliac arteries from lateral to medial and united in front of the first sacral segment. These rami communicantes encircled the right gonadal artery and could threaten gonadal vascularity, potentially causing infertility. This was an unusual feature of the lumbar sympathetic chain and its rami communicantes noted in this particular case.
abstract_id: PUBMED:17212913
Lumbar spinal stenosis. Lumbar spinal stenosis (LSS) is a narrowing of the spinal canal with cord or nerve root impingement resulting in radiculopathy or pseudoclaudication. It is a common diagnosis that is occurring with increased frequency in sports medicine clinics. Symptoms include radicular pain, numbness, tingling, and weakness. Peripheral vascular disease presents similarly and must be considered in the differential diagnosis. Imaging for LSS usually begins with plain radiographs, but often requires additional testing with MRI or CT myelography. There are currently limited controlled data regarding both conservative and surgical treatment of LSS. Most physicians agree that mild disease should be treated conservatively with medications, physical therapy, and epidural steroid injections. Severe disease appears to be best treated surgically; laminectomy continues to be the gold standard treatment.
abstract_id: PUBMED:1809432
Continuous lumbar sympathetic block. A 74-year-old woman with peripheral vascular disease suffered from rest pain in the right big toe and intermittent claudication. Because of concomitant venous congestion, a chemical lumbar sympathectomy was considered to carry an increased risk of leg edema. A continuous lumbar sympathetic block with local anesthetic abolished the pain in the toe without side effects. After this reversible block, a chemical lumbar sympathectomy was performed producing pain relief for 4 weeks when the patient was last seen.
abstract_id: PUBMED:34623492
Lumbar spine intervertebral disc desiccation is associated with medical comorbidities linked to systemic inflammation. Introduction: Symptomatic disc degeneration is a common cause of low back pain. Recently, the prevalence of low back pain has swiftly risen leading to increased patient disability and loss of work. The increase in back pain also coincides with a rapid rise in patient medical comorbidities. However, a comprehensive study evaluating a link between patient's medical comorbidities and their influence on lumbar intervertebral disc morphology is lacking in the literature.
Methods: Electronic medical records (EMR) were retrospectively reviewed to determine patient-specific medical characteristics. Magnetic resonance imaging (MRI) was evaluated for lumbar spine intervertebral disc desiccation and height loss according to the Griffith-modified Pfirrmann grading system. Bivariate and multivariable linear regression analyses assessed strength of associations between patient characteristics and lumbar spine Pfirrmann grade severity (Pfirrmann grade of the most affected lumbar spine intervertebral disc) and cumulative grades (summed Pfirrmann grades for all lumbar spine intervertebral discs).
Results: In total, 605 patients (304 diabetics and 301 non-diabetics) met inclusion criteria. Bivariate analysis identified older age, diabetes, American Society of Anesthesiologists (ASA) class, hypertension, chronic obstructive pulmonary disease (COPD), peripheral vascular disease, and hypothyroidism as being strongly associated with an increasing cumulative Pfirrmann grades. Multivariable models similarly found an association linking increased cumulative Pfirrmann grades with diabetes, hypothyroidism, and hypertension, while additionally identifying non-white race, heart disease, and previous lumbar surgery. Chronic pain, depression, and obstructive sleep apnea (OSA) were associated with increased Pfirrmann grades at the most affected level without an increase in cumulative Pfirrmann scores. Glucose control was not associated with increasing severity or cumulative Pfirrmann scores.
Conclusion: These findings provide specific targets for future studies to elucidate key mechanisms by which patient-specific medical characteristics contribute to the development and progression of lumbar spine disc desiccation and height loss.
Level Of Evidence: III (retrospective cohort).
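The bivariate-then-multivariable regression strategy described in the methods above can be sketched with statsmodels. Everything below is simulated: the variable names, effect sizes and the use of an ordinary least squares model for the cumulative Pfirrmann grade are assumptions for illustration, not the study's actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated patient-level data (not the study's EMR/MRI records)
rng = np.random.default_rng(4)
n = 600
df = pd.DataFrame({
    "age": rng.normal(60, 12, n),
    "diabetes": rng.integers(0, 2, n),
    "hypertension": rng.integers(0, 2, n),
    "hypothyroidism": rng.binomial(1, 0.1, n),
})
# Cumulative Pfirrmann grade summed over the lumbar discs (invented generating model)
df["cum_pfirrmann"] = (5 + 0.15 * df["age"] + 1.2 * df["diabetes"]
                       + 0.8 * df["hypertension"] + rng.normal(0, 3, n)).clip(5, 40)

# Bivariate screen: one candidate predictor at a time
for var in ["age", "diabetes", "hypertension", "hypothyroidism"]:
    fit = smf.ols(f"cum_pfirrmann ~ {var}", data=df).fit()
    print(f"{var}: p = {fit.pvalues[var]:.4f}")

# Multivariable model with the candidates entered together
multi = smf.ols("cum_pfirrmann ~ age + diabetes + hypertension + hypothyroidism",
                data=df).fit()
print(multi.summary().tables[1])
```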
abstract_id: PUBMED:7471546
Urogenital complications of anterior approaches to the lumbar spine. The superior hypogastric plexus of the sympathetic nervous system is the only major innervation of the urogenital system which is normally at risk in anterior exposures of the lower lumbar spine. When this is injured, one can expect to see disturbances of urogenital function with retrograde ejaculation or sterility in males. Failure of penile erection is not anticipated unless the patient has, in addition, advanced peripheral vascular disease. The superior hypogastric plexus may be spared by careful dissection about the iliac arteries and lumbosacral junction or by approaching the spine laterally through a retroperitoneal exposure.
abstract_id: PUBMED:24130103
Endovascular transluminal stent grafting: Treatment of choice for post lumbar spine surgery iliac arterio-venous fistulae. Iliac vessels are prone to injury during lumbar spine surgery due to their proximity to the lumbar spine. Arterio-venous fistula formation during lumbar spine surgery is an uncommon complication whose presentation can range from an asymptomatic incidental finding to rapidly deteriorating hemodynamics leading to cardiopulmonary collapse. We report three patients with symptomatic iliac arterio-venous fistulae detected soon after lumbar spine surgery. All these patients were successfully treated by endovascular transluminal stent grafting.
abstract_id: PUBMED:32135302
Development and validation of a novel scoring tool for predicting facility discharge after elective posterior lumbar fusion. Background Context: Discharge to acute/intermediate care facilities is a common occurrence after posterior lumbar fusion and can be associated with increased costs and complications after these procedures. This is particularly relevant with the growing popularity of bundled payment plans, creating a need to identify patients at greatest risk.
Purpose: To develop and validate a risk-stratification tool to identify patients at greatest risk for facility discharge after open posterior lumbar fusion.
Study Design: Retrospective cohort study.
Patient Sample: Patients were queried using separate databases from the institution of study and the American College of Surgeons National Surgical Quality Improvement Program (NSQIP) for all patients undergoing open lumbar fusion between 2011 and 2018.
Outcome Measures: Discharge to intermediate care and/or rehabilitation facilities.
Methods: Using an 80:20 training and testing NSQIP data split, collected preoperative demographic and operative variables were used in a multivariate logistic regression to identify potential risk factors for postoperative facility discharge, retaining those with a p value <.05. A nomogram was generated to develop a scoring system from this model, with probability cutoffs determined for facility discharge. This model was subsequently validated within the NSQIP database, in addition to external validation at the institution of study. Overall model performance and calibration was assessed using the Brier score and calibration plots, respectively.
Results: A total of 11,486 patients (10,453 NSQIP, 1,033 local cohort) were deemed eligible for study, of which 16.1% were discharged to facilities (16.7% NSQIP, 9.6% local cohort). Utilizing training data, age (p<.001), body mass index (p<.001), female sex (p<.001), diabetes (p=.043), peripheral vascular disease (p=.001), cancer (p=.010), revision surgery (p<.001), number of levels fused (p<.001), and spondylolisthesis (p=.049) were identified as significant risk factors for facility discharge. The area under the receiver operating characteristic curve (AUC) indicated a strong predictive model (AUC=0.750), with similar predictive ability in the testing (AUC=0.757) and local data sets (AUC=0.773). Using this tool, patients identified as low- and high-risk had a 7.94% and 33.28% incidence of facility discharge in the testing data set, while rates of 4.44% and 16.33% were observed at the institution of study.
Conclusions: Using preoperative variables as predictors, this scoring system demonstrated high efficiency in risk-stratifying patients with an approximate four to fivefold difference in rates of facility discharge after posterior lumbar fusion. This tool may help inform medical decision-making and guide reimbursement under bundled-care repayment plans.
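The model-building workflow in this abstract, an 80:20 split, a multivariable logistic regression on preoperative variables, and then discrimination (AUC) and calibration (Brier score) checks, is sketched below with scikit-learn. The feature set, the simulated outcome model and the crude nomogram-style point scaling at the end are illustrative assumptions, not the published tool.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, brier_score_loss

# Simulated preoperative features loosely mirroring those reported as significant
rng = np.random.default_rng(3)
n = 10_000
X = pd.DataFrame({
    "age": rng.normal(65, 12, n),
    "bmi": rng.normal(30, 6, n),
    "female": rng.integers(0, 2, n),
    "diabetes": rng.integers(0, 2, n),
    "pvd": rng.binomial(1, 0.05, n),
    "revision": rng.binomial(1, 0.15, n),
    "levels_fused": rng.integers(1, 5, n),
})
# Invented outcome model: facility discharge more likely with age and comorbidity burden
logit = -7.0 + 0.06 * X["age"] + 0.03 * X["bmi"] + 0.6 * X["pvd"] + 0.5 * X["revision"]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# 80:20 training/testing split, as in the study design
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

prob = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, prob), 3))
print("Brier score:", round(brier_score_loss(y_test, prob), 3))

# Crude nomogram-style points: rescale coefficients to a 0-10 integer range
points = np.round(model.coef_[0] / np.abs(model.coef_[0]).max() * 10).astype(int)
print(dict(zip(X.columns, points)))
```

In practice such a score would also be externally validated and checked with calibration plots, as the abstract describes, before any clinical use.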
Answer: Lumbar chemical sympathectomy (LCS) continues to have a role in the management of peripheral vascular disease (PVD), particularly in cases where patients are inoperable. The majority of vascular surgeons in the UK and Ireland believe that LCS has a place in current practice, with 75% of respondents in a survey indicating its continued use. The main indication for LCS is inoperable PVD with rest pain, which was cited by over 80% of respondents, and 27% also used it for the treatment of ulcers. However, only 21% used LCS in diabetic patients, and complications such as neuralgia, ureteric damage, and paraplegia following inadvertent extradural injection have been reported (PUBMED:19237331).
Computed tomography (CT) fluoroscopy-guided chemical lumbar sympathectomy has been shown to be safe and effective, with a complication rate of less than 1%, and efficacy of at least 30% measured within 3 months. It is a simple and minimally invasive procedure that can be performed on an outpatient basis, and it is recommended for all patients in the early stages of inoperable PVD (PUBMED:12060154).
Fluoroscopy-guided chemical lumbar sympathectomy has also been found to be feasible, safe, and effective in relieving pain and promoting ulcer healing in patients with ischaemic lower limb ulcers due to both Buerger's disease and non-Buerger's peripheral vascular diseases (PUBMED:17334418).
In summary, LCS remains a viable option for symptom alleviation in inoperable PVD, particularly for rest pain and ulcer treatment. Its role is supported by the positive outcomes and low complication rates reported in the literature. However, the use of LCS should be carefully considered in the context of potential complications and the specific patient population, such as those with diabetes. |
Instruction: The timing of neonatal discharge: An example of unwarranted variation?
Abstracts:
abstract_id: PUBMED:35243787
Unwarranted variation in radiation therapy fractionation. The adoption of hypofractionation across multiple tumour sites has been slow despite robust evidence. There is considerable unwarranted variation in practice, both within and between jurisdictions. This has been attributed to inconsistencies in guidelines, physician preference, lack of technology and differing financial incentives. Unwarranted variation in the use of hypofractionation has a tremendous effect on cost to both patients and the healthcare system. This places an unnecessary burden on patients and poorly utilises scarce healthcare resources. A collaborative effort from clinicians, patients, healthcare providers and policymakers is needed to reduce unwarranted variation in practice. This will improve quality of care both for patients and at broader healthcare system level.
abstract_id: PUBMED:37933789
Identification and classification of principal features for analyzing unwarranted clinical variation. Rationale, Aims, And Objective: Unwarranted clinical variation (UCV) is an undesirable aspect of a healthcare system, but analyzing for UCV can be difficult and time-consuming. No analytic feature guidelines currently exist to aid researchers. We performed a systematic review of UCV literature to identify and classify the features researchers have identified as necessary for the analysis of UCV.
Methods: The literature search followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses. We looked for articles with the terms 'medical practice variation' and 'unwarranted clinical variation' from four databases: Medline, Web of Science, EMBASE and CINAHL. The search was performed on 24 March 2023. The articles selected were original research articles in the English language reporting on UCV analysis in adult populations. Most of the studies were retrospective cohort analyses. We excluded studies reporting geographic variation based on the Atlas of Variation or small-area analysis methods. We used ASReview Lab software to assist in identifying articles for abstract review. We also conducted subsequent reference searches of the primary articles to retrieve additional articles.
Results: The search yielded 499 articles, and we reviewed 46. We identified 28 principal analytic features utilized to analyze for unwarranted variation, categorised under patient-related or local healthcare context factors. Within the patient-related factors, we identified three subcategories: patient sociodemographics, clinical characteristics, and preferences, and classified 17 features into seven subcategories. In the local context category, 11 features are classified under two subcategories. Examples are provided on the usage of each feature for analysis.
Conclusion: Twenty-eight analytic features have been identified, and a categorisation has been established showing the relationships between features. Identifying and classifying features provides guidelines for known confounders during analysis and reduces the steps required when performing UCV analysis; there is no longer a need for a UCV researcher to engage in time-consuming feature engineering activities.
abstract_id: PUBMED:38053259
Reducing unwarranted variation: can a 'clinical dashboard' be helpful for hospital executive boards and top-level leaders? Background/aim: In the past decades, there has been an increasing focus on defining, identifying and reducing unwarranted variation in clinical practice. There have been several attempts to monitor and reduce unwarranted variation, but the experience so far is that these initiatives have failed to reach their goals. In this article, we present the initial process of developing a safety, quality and utilisation rate dashboard ('clinical dashboard') based on a selection of data routinely reported to executive boards and top-level leaders in Norwegian specialist healthcare.
Methods: We used a modified version of Wennberg's categorisation of healthcare delivery to develop the dashboard, focusing on variation in (1) effective care and patient safety and (2) preference-sensitive and supply-sensitive care.
Results: Effective care and patient safety are monitored with outcome measures such as 30-day mortality after hospital admission and 5-year cancer survival, whereas utilisation rates for procedures selected on cost and volume are used to follow variations in preference-sensitive and supply-sensitive care.
Conclusion: We argue that selecting quality indicators of patient safety, quality and utilisation rates and presenting them in a dashboard may help executive hospital boards and top-level leaders to focus on unwarranted variation.
abstract_id: PUBMED:31136047
Unwarranted clinical variation in health care: Definitions and proposal of an analytic framework. Rationale, Aims, And Objectives: Unwarranted clinical variation is a topic of heightened interest in health care systems around the world. While there are many publications and reports on clinical variation, few studies are conceptually grounded in a theoretical model. This study describes the empirical foundations of the field and proposes an analytic framework.
Method: Structured construct mapping of published empirical studies which explicitly address unwarranted clinical variation.
Results: A total of 190 studies were classified in terms of three key dimensions: perspective (assessing variation across geographical areas or across providers); criteria for assessment (measuring absolute variation against a standard, or relative variation within a comparator group); and object of analysis (using process, structure/resource, or outcome metrics).
Conclusion: Consideration of the results of the mapping exercise, together with a review of adjustment, explanatory and stratification variables, and the factors associated with residual variation, informed the development of an analytic framework. This framework highlights the role that agency and motivation, evidence and judgement, and personal and organizational capacity play in clinical decision making and reveals key facets that distinguish warranted from unwarranted clinical variation. From a measurement perspective, it underlines the need for careful consideration of attribution, aggregation, models of care, and temporality in any assessment.
abstract_id: PUBMED:28556526
Examining the role of the physician as a source of variation: Are physician-related variations necessarily unwarranted? Rationale, Aims, And Objectives: The physician is often implicated as an important cause of observed variations in health care service use. However, it is not clear if physician-related variation is problematic for patient care. This paper illustrates that observed physician-related variation is not necessarily unwarranted.
Methods: This is a narrative review.
Results: Many studies have attributed observed variations to the physician, but little attention is given towards discriminating between those variations that exist for good reasons and those that are unwarranted. Two arguments can be made for why physician-related variation is unwarranted. The first posits that physician-related factors should not play a role in management of care decisions because such decisions should be driven by science (which is imagined to be definitive). The second considers the possibility of supplier-induced demand as a factor driving observed variations. We show that neither argument is sufficient to rule out that physician-related variations may be warranted. Furthermore, the claim that such variations are necessarily problematic for patients has yet to be substantiated empirically.
Conclusions: It is not enough to simply show that physician-related variation can exist; one must also show where it is unwarranted and what the magnitude of unwarranted variation is. Failure to show this can have significant implications for how we interpret and respond to observed variations. Improved measurement of the sources of variation, especially with respect to patient preferences and context, may help us start to disentangle physician-related variation that is desirable from that which is unwarranted.
abstract_id: PUBMED:32997392
Identifying unwarranted variation in clinical practice between healthcare providers in England: Analysis of administrative data over time for the Getting It Right First Time programme. Rationale, Aims, And Objectives: The Getting It Right First Time programme aims to reduce variation in clinical practice that unduly impacts on outcomes for patients in the National Health Service (NHS) in England; often termed "unwarranted variation." However, there is no "gold standard" method for detecting unwarranted variation. The aim of this study was to describe a method to allow such variation in recorded practice or patient outcomes between NHS trusts to be detected using data over multiple time periods. By looking at variation over time, it was hoped that patterns that could be missed by looking at data at a single time point, or averaged over a longer time period, could be identified.
Methods: This was a retrospective time-series analysis of observational administrative data. Data were extracted from the Hospital Episodes Statistics database for two exemplar aspects of clinical practice within the field of urology: (a) use of ureteric stents on first emergency admission to treat urinary tract stones and (b) waiting times for definitive surgery for urinary retention. Data were categorized into 3-month time periods and three rules were used to detect unwarranted variation in the outcome metric relative to the national average: (a) two of any three consecutive values greater than two standard deviations above the mean, (b) four of any five consecutive values greater than one standard deviation above the mean, and (c) eight consecutive values above the mean.
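The three detection rules described in this Methods paragraph are simple enough to express directly in code. The short Python sketch below is an illustrative reconstruction rather than the programme's own implementation; the function name, the example series, and the national mean and standard deviation are assumptions.

from typing import Sequence

def flags_variation(values: Sequence[float], national_mean: float, national_sd: float) -> bool:
    # Flag a provider's quarterly series if any of the three rules fires.
    above_2sd = [v > national_mean + 2 * national_sd for v in values]
    above_1sd = [v > national_mean + 1 * national_sd for v in values]
    above_mean = [v > national_mean for v in values]

    # Rule (a): two of any three consecutive values more than 2 SD above the mean.
    rule_a = any(sum(above_2sd[i:i + 3]) >= 2 for i in range(len(values) - 2))
    # Rule (b): four of any five consecutive values more than 1 SD above the mean.
    rule_b = any(sum(above_1sd[i:i + 5]) >= 4 for i in range(len(values) - 4))
    # Rule (c): eight consecutive values above the mean.
    rule_c = any(all(above_mean[i:i + 8]) for i in range(len(values) - 7))
    return rule_a or rule_b or rule_c

# Hypothetical example: a trust whose quarterly stent-use rate drifts above the national average.
quarterly_rates = [0.21, 0.22, 0.24, 0.23, 0.25, 0.24, 0.26, 0.27, 0.25, 0.26]
print(flags_variation(quarterly_rates, national_mean=0.20, national_sd=0.03))  # True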
Results: For the urinary tract stones dataset, 24 trusts were identified as having unwarranted variation in the outcomes using funnel plots and 23 trusts using the time-series method. For the urinary retention data, 18 trusts were identified as having unwarranted variation in the outcomes using funnel plots and 22 trusts using the time-series method.
Conclusions: The time-series method may complement other methods to help identify unwarranted variation.
abstract_id: PUBMED:36759306
The environmental cost of unwarranted variation in the use of magnetic resonance imaging and computed tomography scans. Background: Pollution is a major threat to global health, and there is growing interest on strategies to reduce emissions caused by health care systems. Unwarranted clinical variation, i.e. variation in the utilization of health services unexplained by differences in patient illness or preferences, may be an avoidable source of CO2 when related to overuse. Our objective was to evaluate the CO2 emissions attributable to unwarranted variation in the use of MRI and CT scans among countries of the G20-area.
Methods: We selected seven countries of the G20-area with available data on the use of CT and MRI scans from the Organisation for Economic Co-operation and Development repository. Each nation's annual electric energy expenditure per 1000 inhabitants for such exams (T-Enex-1000) was calculated and compared with the median and lowest value. Based on such differences we estimated the national energy and corresponding tons of CO2 that could be potentially avoided each year.
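The abstract does not give the exact formula, per-exam energy figures, or emission factor used, so the following Python sketch is only a hedged illustration of the kind of comparison described: a country's avoidable emissions are approximated as the excess of its T-Enex-1000 over a reference value, scaled by population and an assumed grid emission factor.

def avoidable_co2_tons(t_enex_1000_kwh, reference_t_enex_1000_kwh, population, kg_co2_per_kwh=0.4):
    # All figures and the emission factor are assumptions, not values taken from the study.
    excess_kwh_per_1000 = max(t_enex_1000_kwh - reference_t_enex_1000_kwh, 0.0)
    excess_kwh_national = excess_kwh_per_1000 * population / 1000.0
    return excess_kwh_national * kg_co2_per_kwh / 1000.0  # kg of CO2 -> tons

# Hypothetical example: a country at the reported maximum T-Enex-1000 (3,079 kWh)
# compared against the reported median (1,782 kWh), with a population of 83 million.
print(avoidable_co2_tons(3079, 1782, 83_000_000))  # roughly 4.3e4 tons of CO2 per year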
Results: With available data we found a significant variation in T-Enex-1000 (median value 1782 kWh, range 1200-3079 kWh) and estimated a significant amount of potentially avoidable emissions each year (range 2046-175120 tons of CO2). In practical terms such emissions would need, in the case of Germany, 71900 and 104210 acres of forest to be cleared from the atmosphere, which is 1.2 and 1.7 times the size of the largest German forest (Bavarian National Forest).
Conclusion: Among countries with a similar rate of development, unwarranted clinical variation in the use of MRI and CT scan causes significant emissions of CO2.
abstract_id: PUBMED:31948447
Can feedback approaches reduce unwarranted clinical variation? A systematic rapid evidence synthesis. Background: Assessment of clinical variation has attracted increasing interest in health systems internationally due to growing awareness about better value and appropriate health care as a mechanism for enhancing efficient, effective and timely care. Feedback using administrative databases to provide benchmarking data has been utilised in several countries to explore clinical care variation and to enhance guideline adherent care. Whilst methods for detecting variation are well-established, methods for determining variation that is unwarranted and addressing this are strongly debated. This study aimed to synthesize published evidence of the use of feedback approaches to address unwarranted clinical variation (UCV).
Methods: A rapid review and narrative evidence synthesis was undertaken as a policy-focused review to understand how feedback approaches have been applied to address UCV specifically. Key words, synonyms and subject headings were used to search the major electronic databases Medline and PubMed between 2000 and 2018. Titles and abstracts of publications were screened by two reviewers and independently checked by a third reviewer. Full text articles were screened against the eligibility criteria. Key findings were extracted and integrated in a narrative synthesis.
Results: Feedback approaches that occurred over a duration of 1 month to 9 years to address clinical variation emerged from 27 publications with quantitative (20), theoretical/conceptual/descriptive work (4) and mixed or multi-method studies (3). Approaches ranged from presenting evidence to individuals, teams and organisations, to providing facilitated tailored feedback supported by a process of ongoing dialogue to enable change. Feedback approaches identified primarily focused on changing clinician decision-making and behaviour. Providing feedback to clinicians was identified, in a range of a settings, as associated with changes in variation such as reducing overuse of tests and treatments, reducing variations in optimal patient clinical outcomes and increasing guideline or protocol adherence.
Conclusions: The review findings suggest value in the use of feedback approaches to respond to clinical variation and understand when action is warranted. Evaluation of the effectiveness of particular feedback approaches is now required to determine if there is an optimal approach to create change where needed.
abstract_id: PUBMED:34206452
The Impact of New Surgical Techniques on Geographical Unwarranted Variation: The Case of Benign Hysterectomy. Since the 1980s, the international literature has reported variations for healthcare services, especially for elective ones. Variations are positive if they reflect patient preferences, while if they do not, they are unwarranted, and thus avoidable. Benign hysterectomy is among the most frequent elective surgical procedures in developed countries, and, in recent years, it has been increasingly delivered through minimally invasive surgical techniques, namely laparoscopic or robotic. The question therefore arises over what the impact of these new surgical techniques on avoidable variation is. In this study we analyze the extent of unwarranted geographical variation of treatment rates and of the adoption of minimally invasive procedures for benign hysterectomy in an Italian regional healthcare system. We assess the impact of the surgical approach on the provision of benign hysterectomy, in terms of efficiency (by measuring the average length of stay) and efficacy (by measuring the post-operative complications). Geographical variation was observed among regional health districts for treatment rates and waiting times. At a provider level, we found differences for the minimally invasive approach. We found a positive and significant association between rates and the percentage of minimally invasive procedures. Providers that frequently adopt minimally invasive procedures have shorter average length of stay, and when they also perform open hysterectomies, fewer complications.
abstract_id: PUBMED:29766616
Addressing unwarranted clinical variation: A rapid review of current evidence. Introduction: Unwarranted clinical variation (UCV) can be described as variation that can only be explained by differences in health system performance. There is a lack of clarity regarding how to define and identify UCV and, once identified, to determine whether it is sufficiently problematic to warrant action. As such, the implementation of systemic approaches to reducing UCV is challenging. A review of approaches to understand, identify, and address UCV was undertaken to determine how conceptual and theoretical frameworks currently attempt to define UCV, the approaches used to identify UCV, and the evidence of their effectiveness.
Design: Rapid evidence assessment (REA) methodology was used.
Data Sources: A range of text words, synonyms, and subject headings were developed for the major concepts of unwarranted clinical variation, standards (and deviation from these standards), and health care environment. Two electronic databases (Medline and Pubmed) were searched from January 2006 to April 2017, in addition to hand searching of relevant journals, reference lists, and grey literature.
Data Synthesis: Results were merged using reference-management software (Endnote) and duplicates removed. Inclusion criteria were independently applied to potentially relevant articles by 3 reviewers. Findings were presented in a narrative synthesis to highlight key concepts addressed in the published literature.
Results: A total of 48 relevant publications were included in the review; 21 articles were identified as eligible from the database search, 4 from hand searching published work and 23 from the grey literature. The search process highlighted the voluminous literature reporting clinical variation internationally; yet, there is a dearth of evidence regarding systematic approaches to identifying or addressing UCV.
Conclusion: Wennberg's classification framework is commonly cited in relation to classifying variation, but no single approach is agreed upon to systematically explore and address UCV. The instances of UCV that warrant investigation and action are largely determined at a systems level currently, and stakeholder engagement in this process is limited. Lack of consensus on an evidence-based definition for UCV remains a substantial barrier to progress in this field.
Answer: The provided abstracts do not specifically address the timing of neonatal discharge as an example of unwarranted variation. Unwarranted variation refers to differences in healthcare practices that cannot be explained by patient preferences, patient illness, or evidence-based medicine (PUBMED:35243787; PUBMED:37933789; PUBMED:31136047; PUBMED:32997392; PUBMED:36759306; PUBMED:31948447; PUBMED:34206452; PUBMED:29766616). It is a phenomenon observed across various healthcare services and can be influenced by factors such as physician preference, inconsistencies in guidelines, lack of technology, differing financial incentives, and environmental costs (PUBMED:35243787; PUBMED:36759306).
To determine whether the timing of neonatal discharge is an example of unwarranted variation, one would need to examine the evidence and guidelines for neonatal discharge, assess the consistency of discharge timing across different healthcare providers or regions, and consider whether variations are justified by clinical indications or patient needs. If variations in the timing of neonatal discharge are found to be widespread and cannot be justified by clinical evidence or patient-specific factors, then it could be considered an example of unwarranted variation.
However, without specific data or studies on neonatal discharge practices, it is not possible to conclusively state whether the timing of neonatal discharge is an example of unwarranted variation based on the abstracts provided. Further research and analysis would be necessary to explore this issue in detail. |
Instruction: Are there therapy algorithms in isolated and combined atlas fractures?
Abstracts:
abstract_id: PUBMED:19693743
Are there therapy algorithms in isolated and combined atlas fractures? Aim: Injuries of the atlas are always a challenge in diagnostics and therapy. Different clinical manifestations, inconspicuous neurological results, uncertain findings of radiological diagnostics and possible accompanying injuries require individual therapeutic concepts.
Methods: Patients with injuries of C1 and C2 seen between 2001 and 2007 were evaluated, with particular attention to the morbidity and treatment of the C1-injured patients. To systematise the injuries, they were subdivided into isolated and combined trauma. Furthermore, the post-traumatic as well as post-therapeutic accompanying neurological deficits were evaluated.
Results: Altogether 121 fractures/injuries of the upper cervical spine (C1/C2) were counted, 22 (18.2 %) involving the atlas. There were 11 fractures of type Gehweiler I, 9 of type III and 1 each of types II and IV. Isolated fractures of type I (5/11) were treated conservatively; combined injuries (6/11), depending on the stability and location of the accompanying injuries, were treated with semi-rigid collars or anterior or posterior fusions. Stable fractures of type III (2/9) were primarily treated in Halo extension. Because of an accompanying dens fracture of type Anderson II in 1 case, a spondylodesis of the dens was additionally performed in the conservative treatment of the atlas. The therapy of isolated unstable atlas fractures of type III (4/9) ranged, depending on the general condition, from Halo extension, transoral C1 stabilisation and anterior transarticular C1/C2 fusion to posterior occipitocervical fusions. The therapeutic regime of combined unstable type III injuries (2/9) depended on the additional trauma: anterior fusion in C6/7 luxation fracture combined with Halo extension for C1, and posterior C0/C3 fusion in unstable dens fractures of type Anderson II.
Conclusion: The therapy for atlas fractures is guided by the type of C1 fracture, the accompanying injuries and the general condition of the patient. Isolated stable C1 fractures without dislocation can be treated conservatively (cervical collar); unstable fractures, depending on the general condition, should be referred for surgical therapy or halo extension. In combined atlas fractures the treatment strategy has to take into consideration not only the stability of the C1 fracture but also the additional injuries of the rest of the cervical spine and the attendant circumstances.
abstract_id: PUBMED:37440984
Management of combined atlas and axis fractures: a systematic review. Background: Combined atlas-axis fractures are rare occurrences with substantially higher rates of neurologic deficits compared with isolated injuries. Given the intricate anatomic relationship between the atlas and axis vertebra, variable fracture patterns may occur, warranting special considerations from surgeons.
Methods: A systematic search of PubMed and EMBASE was performed following the preferred reporting items for systematic reviews and meta-analyses (PRISMA) guidelines. Relevant studies on acute combined atlas-axis fractures that provided data on patient demographics, presentation (injury mechanism, neurologic deficits, fracture type), management, complications, and study conclusions were reviewed.
Results: A total of 22 articles published from 1977 to 2022, comprising 230 patients, were included in the final analysis. Thirty-seven of the 213 patients (17%) presented with neurologic deficits. The most common atlas injuries were posterior arch fractures (54/169 patients; 32%), combined posterior arch/anterior arch fractures (44/169 patients; 26%), and anterior arch fractures (43/169 patients; 25%). The most common axis injuries were type II odontoid fractures (115/175 patients; 66%). Of the 127 patients managed operatively (127/230 patients; 55%), 45 patients (35%) were treated with C1-C2 posterior spinal fusion, 33 patients (26%) were treated with odontoid screw fixation and anterior/posterior C1-C2 trans-articular screws, 16 patients (13%) were treated with occipitocervical fusion and 12 patients (9%) were treated with odontoid screw fixation alone.
Conclusions: Management strategies are generally based on the type of axis fracture as well as the condition of the transverse ligament. Patients with stable fractures can be successfully managed nonoperatively with a cervical collar or halo immobilization. Combined atlas-axis fractures with an atlantodental interval >5 mm, C1 lateral mass displacement >7 mm, C2-C3 angulation >11° or an MRI demonstrating a disrupted transverse ligament are suggestive of instability and are often successfully managed with surgical intervention. There is no consensus regarding surgical technique.
abstract_id: PUBMED:36059318
Mortality From Combined Fractures of the Atlas (C1) and Axis (C2) in Adults. Study Design: A retrospective case report of all upper cervical spine fractures diagnosed by CT imaging between 01/01/2013 and 31/12/2015 in NHS Greater Glasgow and Clyde, Scotland.
Objective: To compare the mortality following combined fractures of the atlas and axis to that of isolated fractures of either vertebra.
Background: The mortality from axis fractures is well documented in the literature. However, a combined fracture of the atlas and axis is seldom reported, leading to relatively unknown outcomes and mortality.
Methods: A total of 171 patients with atlas and/or axis fractures were identified. Thirty-three presented with concurrent lower cervical spine fractures and were excluded from further analysis. Kaplan-Meier curves were used to compare survivorship between 108 patients with isolated and 30 with combined fractures. Similar analysis adjusted for comorbidities, including dementia and previous fragility fractures.
Results: Patients were followed up for 47.3±10.3 months (SD). Patients with isolated atlas fractures were significantly younger than those with an axis or combined fracture. Nearly half (8/17) of combined fracture mortalities occurred within the first 120 days. The mortality at 120 days was 26.7% in the combined fractures group and 18.5% in the isolated fracture group. There was no significant difference in the 120-day and overall mortality between these injury patterns. Furthermore, cognitive impairment and previous fragility fractures bore no significant impact on mortality. Nevertheless, mortality in the combined fracture group with previous fragility fractures did trend to shorter survivorship.
Conclusions: Patients with combined fractures are older and, with the ever-increasing elderly population, the incidence of these injuries is expected to rise. While our data show that the 120-day mortality is proportionally higher in the combined fractures group, no long-term statistically significant difference is demonstrated. This evidence contests the notion that combined fractures of the atlas and axis have higher mortality than isolated injuries of either cervical vertebra.
abstract_id: PUBMED:12431296
Isolated fractures of the atlas in adults. Standards: There is insufficient evidence to support treatment standards.
Guidelines: There is insufficient evidence to support treatment guidelines.
Options: Treatment options in the management of isolated fractures of the atlas are based on the specific atlas fracture type. It is recommended that isolated fractures of the atlas with an intact transverse atlantal ligament be treated with cervical immobilization alone. It is recommended that isolated fractures of the atlas with disruption of the transverse atlantal ligament be treated with either cervical immobilization alone or surgical fixation and fusion.
abstract_id: PUBMED:15069863
Isolated fractures of the atlas Purpose Of The Study: To present the current trends in the diagnosis and management of isolated atlas fractures based on a retrospectively evaluated group of patients with this trauma.
Material: In the period from 1995 to 2002, we treated 486 injuries to the cervical spine at our department. Out of these, 19 patients sustained an isolated fracture of the first cervical vertebra. This group consisted of 12 men and seven women; the average age was 46.6 years. Neurological findings in 18 patients were classified as Frankel E and, in one, as Frankel A. The causes of injury included a fall from height in five patients, a fall in the street in five pedestrians, a car accident in five patients, a dive into shallow water in three and a shooting injury in one patient.
Methods: We treated 16 patients conservatively, using a halo-vest in eight patients and a Philadelphia collar also in eight patients. In two patients with unstable atlas injury, we carried out C1-C2 transarticular stabilization according to Magerl. In the patient who had been shot, we removed the bullet transorally.
Results: All patients healed completely without signs of instability. One patient with post-traumatic pentaplegia, who died within 24 h of surgery due to septic shock, had not been included in the follow-up. Two patients reported neck pain at rest, three after exercise and 13 were without any pain. The patient after C1-C2 transarticular stabilization had a significant restriction of the range of motion in the cervical spine; the rest of the patients were without limitation. None of the patients showed any deterioration of neurological findings during the treatment, nor was any post-traumatic atlantoaxial instability recorded after the therapy was completed.
Discussion: Isolated fractures of the atlas account for 1 to 2% of all spinal fractures. Many fractures may remain unnoticed and, therefore, it is important to X-ray patients with a symptomatic injury to the cervical spine in three standard projection planes (anteroposterior, lateral and transoral). When a fracture of the atlas is suspected, it is necessary to examine them by computed tomography to obtain a more accurate presentation of fracture lines. Views on the method of treating isolated fractures of the atlas, particularly unstable ones, are not consistent.
Conclusions: In terms of therapy, isolated fractures of the first cervical vertebra are either stable or unstable. Stable fractures heal within 8 to 12 weeks; a Philadelphia collar or halo-vest provides sufficient immobilization. Surgical stabilization or halo-vest immobilization for a period of 12 weeks is recommended in unstable injuries, which are characterized by lateral mass displacement of more than 7 mm, extension of the space before the dens (predental space) by more than 3 mm, or injury to the transverse ligament demonstrated on magnetic resonance imaging. After halo-vest removal, it is necessary to perform a functional examination of the cervical spine to detect potential atlantoaxial instability.
abstract_id: PUBMED:23887801
Atlas fracture due to aneurysmal bone cyst after minor trauma Aneurysmal bone cysts predominantly occur in young adults; the long bones, the lumbar spine and the pelvis are mainly affected. This article presents the case of a 22-year-old woman with the very rare localization of an aneurysmal bone cyst of the atlas and an atlas fracture after a minor trauma. The initial radiological diagnosis was a suspected aneurysmal bone cyst, which was confirmed histologically. Due to the unstable fracture it was decided to carry out surgical treatment with occipitocervical stabilization in combination with a transoral bone graft. After a period of 11 months the fracture had completely healed and the implants were removed without any complications.
abstract_id: PUBMED:16189450
Management of acute traumatic atlas fractures. Objective: A prospective review of a clinical series was performed. The treatment features of atlas fractures with and without associated axis injuries were investigated.
Methods: Twenty-nine patients were investigated.
Results: Non-displaced fractures were treated with a cervical orthosis. Patients with displaced fractures were managed with halo vest immobilization; 96.4% of patients had a solid fusion at their last follow-up evaluations.
Conclusions: Isolated non-displaced atlas fractures, or those combined with non-displaced axis fractures, can be treated effectively with a rigid cervical collar alone. Isolated displaced fractures, or non-displaced fractures with concurrent displaced axis fractures, require immobilization in a halo vest.
abstract_id: PUBMED:32002697
Epidemiology and management of atlas fractures. Purpose: The purpose of this study was to gain new insights into the epidemiologic characteristics of patients with atlas fractures and to retrospectively evaluate complication rates after surgical and non-surgical treatment.
Methods: In a retrospective study, consecutive patients diagnosed with a fracture of the atlas between 01/2008 and 07/2018 were analyzed. Data on epidemiology, concomitant injuries, fracture patterns and complications were obtained by chart and imaging review.
Results: In total, 189 patients (mean age 72 years, SD 19; 57.1% male) were treated. The most frequent trauma mechanism was a low-energy trauma (59.8%). A concomitant injury of the cervical spine was found in 59.8%, a combined C1/C2 injury in 56.6% and a concomitant fracture of the thoraco-lumbar spine in 15.4%. When classified according to Gehweiler, there were: 23.3% type 1, 22.2% type 2, 32.8% type 3, 19.0% type 4 and 1.1% type 5. Treatment of isolated atlas fractures (n = 67) consisted of non-operative management in 67.1%, halo fixation in 6.0% and open surgical treatment in 26.9%. In patients with combined injuries, the therapy was essentially dictated by the concomitant subaxial cervical injuries.
Conclusions: Atlas fractures occurred mainly in elderly people and in the majority of the cases were associated with other injuries of the head and spine. Most atlas fractures were treated conservatively. However, surgical treatment has become a safe and valid option in unstable fracture patterns involving the anterior and posterior arch (type 3) or those involving the articular surfaces (type 4).
Level Of Evidence: IV (Retrospective cohort study).
abstract_id: PUBMED:28246953
Atlas fractures. Fractures of the atlas account for 1-2 % of all vertebral fractures. We divide atlas fractures into 5 groups: isolated fractures of the anterior arch of the atlas, isolated fractures of the posterior arch, combined fractures of the anterior and posterior arch (so-called Jefferson fractures), isolated fractures of the lateral mass and fractures of the transverse process. Isolated fractures of the anterior or posterior arch are benign and are treated conservatively with a soft collar until the neck pain has disappeared. Jefferson fractures are divided into stable and unstable fractures depending on the integrity of the transverse ligament. Stable Jefferson fractures are treated conservatively with good outcome, while unstable Jefferson fractures are probably best treated operatively with a posterior atlanto-axial or occipito-axial stabilization and fusion. The authors' preferred treatment modality is the immediate open reduction of the dislocated lateral masses combined with stabilization in the reduced position using transarticular screw fixation C1/C2 according to Magerl. This has the advantage of saving the atlanto-occipital joints and offering immediate stability, which makes immobilization in a halo or Minerva cast superfluous. In late C1/C2 instability with incongruency of the lateral masses occurring after primary conservative treatment, an occipito-cervical fusion is indicated. Isolated fractures of the lateral masses are very rare and may, if the lateral mass is totally destroyed, be a reason for an occipito-cervical fusion. Fractures of the transverse processes may be the cause of a thrombosis of the vertebral artery. No treatment is necessary for the fracture itself.
abstract_id: PUBMED:24605195
Congenital anomaly of the atlas misdiagnosed as posterior arch fracture of the atlas and atlantoaxial subluxation. Partial or complete absence of the posterior arch of the atlas is a well-documented anomaly but a relatively rare condition. This condition is usually asymptomatic so most are diagnosed incidentally. There have been a few documented cases of congenital defects of the posterior arch of the atlas combined with atlantoaxial subluxation. We report a very rare case of congenital anomaly of the atlas combined with atlantoaxial subluxation, that can be misdiagnosed as posterior arch fracture.
Answer: Yes, there are therapy algorithms for managing isolated and combined atlas fractures, although the specific treatment approach depends on the type of fracture, the presence of accompanying injuries, and the patient's general condition.
For isolated atlas fractures, the treatment options are based on the specific atlas fracture type. Isolated fractures of the atlas with an intact transverse atlantal ligament are typically treated with cervical immobilization alone. In contrast, isolated fractures of the atlas with disruption of the transverse atlantal ligament may be treated with either cervical immobilization or surgical fixation and fusion (PUBMED:12431296). Stable fractures can heal within 8 to 12 weeks using a Philadelphia collar or halo-vest for immobilization. Surgical stabilization or halo-vest immobilization for a period of 12 weeks are recommended for unstable injuries characterized by significant lateral mass displacement or injury to the transverse ligament (PUBMED:15069863).
For combined atlas fractures, the management strategies are generally based on the type of axis fracture as well as the condition of the transverse ligament. Patients with stable fractures can be managed nonoperatively with a cervical collar or halo immobilization. However, combined atlas-axis fractures with certain indicators of instability, such as an atlantodental interval greater than 5 mm or C1 lateral mass displacement greater than 7 mm, are often managed with surgical intervention. The specific surgical technique used may vary and there is no consensus regarding the best approach (PUBMED:37440984).
In cases of combined atlas and axis fractures, the therapy must consider the stability of the C1 fractures and the additional injuries of the rest of the cervical spine. Isolated stable C1 fractures without dislocation can be treated conservatively, while unstable fractures should be referred to surgical therapy or halo extension depending on the patient's general condition. The treatment strategy for combined unstable type III injuries, for example, depends on the additional trauma (PUBMED:19693743).
Overall, the therapy for atlas fractures is individualized, taking into account the fracture type, accompanying injuries, and the patient's overall health status. It is important to evaluate each case carefully to determine the most appropriate treatment plan. |
Instruction: Overweight in Latino preschoolers: do parental health beliefs matter?
Abstracts:
abstract_id: PUBMED:27869046
Latino mothers' beliefs about child weight and family health. Objective: There is a need to address cultural beliefs and parenting practices regarding childhood obesity to design effective weight-control programmes for overweight/obese US Latino children. The purpose of the current study was to explore cultural beliefs about children's weight, understand parent perceptions on feeding their children, and explore barriers that interfere with a healthy lifestyle.
Design: Four focus groups were conducted in Spanish with forty-one Latino mothers of elementary school-age children from San Diego County, California between April and May 2011. Cultural viewpoints about overweight status among children and barriers to leading a healthy lifestyle were explored. Focus group discussions were analysed based on a priori and emergent themes.
Results: Three themes were identified: (i) mothers' cultural beliefs about health that are barriers to family health; (ii) mothers as primary caretakers of their family's health; and (iii) attitudes about targeting children's weight. Mothers acknowledged the idea that 'chubby is better' is a misperception, yet having a 'chubby' child was preferred and even accepted. Mothers described fatalistic beliefs that contradicted existing knowledge of chronic disease and daily demands of Western culture as barriers to practising healthy behaviours in the home as the family caretaker.
Conclusions: These findings may be used to inform more culturally appropriate research to address US Latino health. Increasing awareness of cultural beliefs and daily circumstance could help to address obesity more directly and thereby overcome some of the potential underlying barriers that might exist when involving the Latino immigrant families in obesity treatment and prevention.
abstract_id: PUBMED:20001191
Overweight in Latino preschoolers: do parental health beliefs matter? Objective: To characterize the knowledge, attitudes, and beliefs (KAB) regarding childhood obesity among parents of Latino preschoolers.
Methods: Three hundred sixty-nine Mexican immigrant parents of children ages 2-5 were interviewed. Children were weighed and measured.
Results: Parents underestimated their own child's weight status and had high levels of perceived control over their children's eating and activity behaviors. Parents of overweight (>95%ile-for-age-and-sex BMI) versus nonoverweight (<95%ile BMI) children did not differ in their beliefs about ideal child body size.
Conclusion: Latino parents of overweight children did not differ from parents of nonoverweight children with respect to their KAB about childhood obesity.
abstract_id: PUBMED:24522435
Latino Mothers in Farmworker Families' Beliefs About Preschool Children's Physical Activity and Play. Document beliefs about the contribution of physical activity to preschool-aged children's health held by Latino mothers in farmworker families, and delineate their perceived barriers or constraints that impose limits on preschool-aged children's physical activity. Qualitative data obtained through semi-structured in-depth interviews (N = 33) with mothers of preschool-aged children living in Latino farmworker families in North Carolina. Mothers universally agree that regular vigorous physical activity is good for preschool-aged children's health, including obesity prevention. However, excessive physical activity can produce illnesses, as well as other physical and emotional problems, and should be limited. Mothers wanted their children to engage in more sedentary forms of activity because they believed it would benefit learning. Physical and chemical hazards in rural environments, distance to parks and play spaces, and lack of familiarity and concerns about neighbors constrained children's physical activity. Although physical activity is believed to be beneficial, strong cultural beliefs and real contextual barriers undermine preschool-aged Latino farmworker children's level of physical activity.
abstract_id: PUBMED:35323392
Exploration of Changes in Low-Income Latino Families' Beliefs about Obesity, Nutrition, and Physical Activity: A Qualitative Post-Intervention Study. Objective: To investigate changes in beliefs around obesity, nutrition, and physical activity among low-income majority Latino families who participated in a community-based family-inclusive obesity intervention.
Methods: Six focus groups were conducted with a predominately Latino low-income population, who completed the Healthy Living Program (HeLP). Two groups were conducted in English and four groups were conducted in Spanish, and were recorded, translated, transcribed, and analyzed for thematic content. Two coders independently coded transcripts then reflexive team analysis with three members was used to reach consensus.
Results: Thirty-seven caregivers representing thirty-three families participated in focus groups. A number of themes emerged around changes in beliefs about obesity, nutrition, and physical activity (PA) as a result of the HeLP curriculum. Regarding obesity, the themes that emerged focused on the acceptability of children being overweight and the importance of addressing weight at an early age. Changes in beliefs regarding nutrition emerged, noting changes in the use of food as a reward, the multiple benefits of a healthy diet, and for some participants change in their beliefs around the adaptability of traditional foods and habits. Regarding physical activity, themes emerged around the difficulty of engaging in PA due to unsafe conditions and finding creative indoor and outdoor activities with whole family participation and becoming aware of the benefits of PA.
Conclusions: Parental changes in beliefs about obesity, nutrition, and physical activity as a result of a family-inclusive weight management program in a population of low-income predominately Latino families can aid and inform the development of future weight management programs for this population.
abstract_id: PUBMED:27003152
Latina and Non-Latina Mothers' Perceived Health Barriers and Benefits and Their Relationship to Children's Health Behaviors. Objectives: Disparities exist in rates of overweight/obesity between Latino and non-Latino populations. Attention should be given to risk factors that may be modifiable through interventions involving both the parent and child. The current study sought to identify ethnic differences in parental health beliefs and their relation to children's health behaviors.
Methods: Latina and non-Latina mothers (N = 203) at rural and urban clinics and health departments completed self-report questionnaires. Key information included beliefs about barriers and benefits to health practices and children's health behaviors.
Results: Children of Latina mothers consumed significantly more soda and fried foods and exercised less than children of non-Latina mothers. Latina mothers were significantly more likely to perceive barriers to healthy eating and significantly less likely to perceive benefits to healthy eating and physical activity than non-Latina mothers. Ethnicity mediated the relationship between maternal views of health benefits and soda consumption.
Conclusions: Policy changes are needed to promote health education and increase the accessibility of healthy foods and safe places to exercise for Latino families.
abstract_id: PUBMED:21442018
Latina mothers' beliefs and practices related to weight status, feeding, and the development of child overweight. Objective: To examine maternal beliefs and practices related to weight status, child feeding, and child overweight in the Latino culture that may contribute to the rising rates of overweight among preschool Latino children in the United States.
Design And Sample: This 2-phase qualitative study relies on data obtained in 6 focus groups with a total of 31 primarily Spanish-speaking, low-income mothers, followed by 20 individual, in-depth interviews with women participating in a health promotion educational program.
Measures: Child-feeding beliefs, practices, and weight status perceptions were elicited.
Results: The findings indicated that most respondents reported personal struggles with weight gain, particularly during and after pregnancy, and were concerned that their children would become obese. Although subjects understood the health and social consequences related to overweight, many discussed the pressures of familial and cultural influences endorsing a "chubby child."
Conclusions: Education and interventions that incorporate "culturally mediated" pathways to address mothers' feeding practices are essential for the prevention and control of childhood overweight among low-income Latinos. Nurses should be aware of the social and cultural influences on Latina mothers' beliefs and practices related to weight status and feeding practices and address these in their education approaches to prevent childhood overweight and obesity with this population group.
abstract_id: PUBMED:20453045
Beliefs and perceived norms concerning body image among African-American and Latino teenagers. Focus groups, utilizing the Theory of Planned Behavior, examined the beliefs and perceived norms regarding body image in a sample of urban African-American and Latino teenagers (N = 83, 18-19 years old) from Texas. Cultural eating (behavioral belief) explained the acceptance and tolerance of overweight. Popularity of hip-hop fashion and limited income explicated peer and familial normative beliefs, respectively. Thinness was equated with HIV infection among African-Americans (parental normative belief). Barriers to healthy eating and active living (control beliefs) included willpower, laziness, fast food, and excessive work. Findings can guide the development and implementation of culturally appropriate obesity interventions for African-American and Latino adolescents.
abstract_id: PUBMED:23317025
Caring for Latino patients. Latinos comprise nearly 16 percent of the U.S. population, and this proportion is anticipated to increase to 30 percent by 2050. Latinos are a diverse ethnic group that includes many different cultures, races, and nationalities. Barriers to care have resulted in striking disparities in quality of health care for these patients. These barriers include language, lack of insurance, different cultural beliefs, and in some cases, illegal immigration status, mistrust, and illiteracy. The National Standards for Culturally and Linguistically Appropriate Services address these concerns with recommendations for culturally competent care, language services, and organizational support. Latinos have disproportionately higher rates of obesity and diabetes mellitus. Other health problems include stress, neurocysticercosis, and tuberculosis. It is important to explore the use of alternative therapies and belief in traditional folk illnesses, recognizing that health beliefs are dependent on education, socioeconomic status, and degree of acculturation. Many, but not all, folk and herbal treatments can be safely accommodated with conventional therapy. Physicians must be sensitive to Latino cultural values of simpatia (kindness), personalismo (relationship), respeto (respect), and modestia (modesty). The LEARN technique can facilitate cross-cultural interviews. Some cultural barriers may be overcome by using the "teach back" technique to ensure that directions are correctly understood and by creating a welcoming health care environment for Latino patients.
abstract_id: PUBMED:19051842
Overweight in Latino/Hispanic adolescents: scope of the problem and nursing implications. Latino/Hispanic adolescents have the highest prevalence of overweight in this country. Causes of ethnic variation may include differences in the level of acculturation, ethnic beliefs, differences in ideal body images, lack of appreciation of weight management, questionable literacy levels, and/or socioeconomic status. Improved knowledge of the behavioral, social-cultural, and environmental determinants of overweight among Latino/Hispanic adolescents will increase the effectiveness and direction of interventions to prevent and treat this epidemic. Research is needed to determine which interventions may specifically be most effective for preventing this overweight status. This article summarizes variables that have an impact on Latino/Hispanic youth and identifies potential strategies for pediatric nurses to use in addressing this health matter most efficaciously.
abstract_id: PUBMED:16534660
Weight, diet, and physical activity-related beliefs and practices among pregnant and postpartum Latino women: the role of social support. Objectives: Eating and physical activity patterns may contribute to excessive pregnancy weight gain and postpartum retention that increase the risks of obesity and diabetes for both Latino mothers and their children. Social support is an important health determinant and may affect health-related beliefs and behaviors. The objective of this study was to investigate the influence of social support on weight, diet, and physical activity-related beliefs and behaviors among pregnant and postpartum Latinas.
Methods: A community-based participatory project, Promoting Healthy Lifestyles among Women, was conducted in southwest Detroit to plan interventions aimed at reducing risks of obesity and type 2 diabetes. Qualitative analyses of in-depth semistructured interviews with dyads of 10 pregnant and postpartum Latinas, and 10 people who influenced them were conducted.
Results: Husbands and some female relatives were primary sources of emotional, instrumental, and informational support for weight, diet, and physical activity-related beliefs and behaviors for Latina participants. Holistic health beliefs and the opinions of others consistently influenced Latinas' motivation and beliefs about the need to remain healthy and the links between behavior and health. Absence of mothers, other female relatives, and friends to provide childcare, companionship for exercise, and advice about food were prominent barriers that limited women's ability to maintain healthy practices during and after pregnancy.
Conclusion: The findings support evidence that low-income, recently immigrated pregnant and postpartum Latinas could benefit from community-based, family-oriented interventions that provide social support necessary to promote and sustain healthy lifestyles.
Answer: Yes, parental health beliefs do matter when it comes to overweight in Latino preschoolers. The research indicates that Latino parents often underestimate their own child's weight status and have high levels of perceived control over their children's eating and activity behaviors. However, parents of overweight children do not differ significantly in their beliefs about ideal child body size compared to parents of nonoverweight children (PUBMED:20001191). Additionally, Latina mothers in farmworker families believe that while regular vigorous physical activity is good for their preschool-aged children's health, including obesity prevention, excessive physical activity can lead to illnesses and other problems. They also face barriers such as physical and chemical hazards in rural environments, distance to parks, and concerns about neighbors that constrain children's physical activity (PUBMED:24522435).
Latino mothers' cultural beliefs, such as the preference for a "chubby" child and fatalistic beliefs that contradict existing knowledge of chronic disease, act as barriers to practicing healthy behaviors in the home (PUBMED:27869046). Moreover, Latina mothers are more likely to perceive barriers to healthy eating and are less likely to perceive benefits to healthy eating and physical activity compared to non-Latina mothers, which is reflected in their children's health behaviors, such as higher consumption of soda and fried foods and less exercise (PUBMED:27003152).
Interventions that incorporate cultural beliefs and address social and cultural influences on Latina mothers' beliefs and practices related to weight status and feeding practices are essential for the prevention and control of childhood overweight among low-income Latinos (PUBMED:21442018). Furthermore, changes in beliefs about obesity, nutrition, and physical activity as a result of family-inclusive weight management programs can inform the development of future programs for low-income predominately Latino families (PUBMED:35323392).
In summary, Latino parental health beliefs significantly impact their children's risk of becoming overweight, and these beliefs are influenced by cultural norms, perceived barriers, and the social environment. Addressing these beliefs through culturally appropriate interventions is crucial for managing and preventing obesity in Latino preschoolers. |
Instruction: Is intravenous iron useful for reducing transfusions in surgically treated colorectal cancer patients?
Abstracts:
abstract_id: PUBMED:22552496
Is intravenous iron useful for reducing transfusions in surgically treated colorectal cancer patients? Background: The goal of the present study was to determine whether the intravenous administration of iron in the postoperative period of colon cancer surgical patients suffices to reduce the number of transfusions necessary.
Method: The study was designed as a retrospective observational study conducted over a three-year period. A paired case-control design was used to analyze the effect of postoperative iron on patients' blood transfusion needs. Two groups were established (the case group, which received postoperative iron and the control group, which did not) and matched for age (± 3 years), gender, type of operation, tumor stage, and surgical approach. Of 342 patients who underwent operation, 104 paired patients were obtained for inclusion in this study (52 in each group). A second analysis was made to assess the effect of intravenous iron on the evolution of hemoglobin between the first postoperative day and hospital discharge in the subgroup of patients with reduction in hemoglobin, in subjects without preoperative or postoperative transfusions. Finally, a total of 71 patients were paired in two groups: 37 and 31 patients in case and control, respectively.
Results: The mean hemoglobin concentration at discharge for the case group was 10 ± 1.1 g/dl, vs. 10.6 ± 1.2 in the controls (P = 0.012). The number of transfusions in the case group was 3 ± 1.6, vs. 3.3 ± 3 in the control group (P = 0.682). Thus, 28.8% of the patients in the case group received transfusions, versus 30.8% of those in the control group (P = 0.830). In the second analysis, the decrease in hemoglobin concentration was 0.88 g/dl and 0.82 g/dl in the case and control groups, respectively.
Conclusions: Intravenous iron does not appear to reduce the blood transfusion requirements in the postoperative period of colorectal surgery patients with anemia. We consider that further studies are needed to more clearly define the usefulness of intravenous iron in reducing the transfusion needs in such patients.
abstract_id: PUBMED:34931280
Preoperative intravenous iron treatment reduces postoperative complications and postoperative anemia in preoperatively anemic patients with colon carcinoma. Purpose: Anemia is common among patients with colorectal cancer and is associated with an increased risk of complications and poorer survival rate. The main objective of our study was to determine the effect of preoperative intravenous iron supplementation therapy on the need for red blood cell transfusions, other postoperative complications, and length of hospital stay in colon cancer patients undergoing colon resection.
Methods: In this retrospective cohort study, data were collected from medical records of all 549 colon carcinoma patients who underwent a colon resection in Helsinki University Hospital during the years 2017 and 2018. The patients were divided into two cohorts: one with anemic patients treated with preoperative intravenous iron supplementation therapy (180 patients) and one with anemic patients without preoperative intravenous iron supplementation therapy (138 patients). Non-anemic patients and patients requiring emergency surgery were excluded (231 patients).
Results: Patients treated with intravenous iron had less postoperative complications (33.9% vs. 45.9%, p = 0.045) and a lower prevalence of anemia at 1 month after surgery (38.7% vs. 65.3%, p < 0.01) when compared with patients without preoperative iv iron treatment. No difference was found in the amount of red blood cell transfusions, length of stay, or mortality between the groups.
Conclusion: This is the first study demonstrating a significant decrease in postoperative complications in anemic colon cancer patients receiving preoperative intravenous iron supplementation therapy. This treatment also diminishes the rate of postoperative anemia, which is often associated with a facilitated recovery.
abstract_id: PUBMED:30963552
The impact of pre-operative intravenous iron on quality of life after colorectal cancer surgery: outcomes from the intravenous iron in colorectal cancer-associated anaemia (IVICA) trial. Anaemia is associated with a reduction in quality of life, and is common in patients with colorectal cancer . We recently reported the findings of the intravenous iron in colorectal cancer-associated anaemia (IVICA) trial comparing haemoglobin levels and transfusion requirements following intravenous or oral iron replacement in anaemic colorectal cancer patients undergoing elective surgery. In this follow-up study, we compared the efficacy of intravenous and oral iron at improving quality of life in this patient group. We conducted a multicentre, open-label randomised controlled trial. Anaemic colorectal cancer patients were randomly allocated at least two weeks pre-operatively, to receive either oral (ferrous sulphate) or intravenous (ferric carboxymaltose) iron. We assessed haemoglobin and quality of life scores at recruitment, immediately before surgery and at outpatient review approximately three months postoperatively, using the Short Form 36, EuroQoL 5-dimension 5-level and Functional Assessment of Cancer Therapy - Anaemia questionnaires. We recruited 116 anaemic patients across seven UK centres (oral iron n = 61 (53%), and intravenous iron n = 55 (47%)). Eleven quality of life components increased by a clinically significant margin in the intravenous iron group between recruitment and surgery compared with one component for oral iron. Median (IQR [range]) visual analogue scores were significantly higher with intravenous iron at a three month outpatient review (oral iron 70, (60-85 [20-95]); intravenous iron 90 (80-90 [50-100]), p = 0.001). The Functional Assessment of Cancer Therapy - Anaemia score comprises of subscales related to cancer, fatigue and non-fatigue items relevant to anaemia. Median outpatient scores were higher, and hence favourable, for intravenous iron on the Functional Assessment of Cancer Therapy - Anaemia subscale (oral iron 66 (55-72 [23-80]); intravenous iron 71 (66-77 [46-80]); p = 0.002), Functional Assessment of Cancer Therapy - Anaemia trial outcome index (oral iron 108 (90-123 [35-135]); intravenous iron 121 (113-124 [81-135]); p = 0.003) and Functional Assessment of Cancer Therapy - Anaemia total score (oral iron 151 (132-170 [69-183]); intravenous iron 168 (160-174 [125-186]); p = 0.005). These findings indicate that intravenous iron is more efficacious at improving quality of life scores than oral iron in anaemic colorectal cancer patients.
abstract_id: PUBMED:29937171
The effect of intravenous iron therapy on long-term survival in anaemic colorectal cancer patients: Results from a matched cohort study. Introduction: Intravenous iron therapy has been shown to be advantageous in treating anaemia and reducing the need for blood transfusions. Iron treatment, however, may also be hazardous by supporting cancer growth. Present clinical study explores, for the first time, the effect of preoperative intravenous iron therapy on tumour prognosis in anaemic colorectal cancer patients.
Methods: A retrospective cohort study was performed on consecutive patients who underwent surgery for colorectal cancer between 2010 and 2016 in a single teaching hospital. The primary outcomes were 5-year overall survival (OS) and disease-free survival (DFS). Survival estimates were calculated using the Kaplan-Meier method and patients were matched based on propensity score.
Results: 320 (41.0%) of all eligible patients were anaemic, of whom 102 patients received preoperative intravenous iron treatment (31.9%). After propensity score matching 83 patients were included in both intravenous and non-intravenous iron group. The estimated 1-, 3-, and 5-year OS (91.6%, 73.1%, 64.3%, respectively) and DFS (94.5%, 86.7%, 83.4%, respectively) in the intravenous iron group were comparable with the non-intravenous iron group (p = 0.456 and p = 0.240, respectively). In comparing patients with an event (death or recurrence) and no event in the intravenous iron group, a distinct trend was found for decreased transferrin in the event group (median 2.53 g/L vs 2.83 g/L, p = 0.052).
Conclusion: The present study illustrates that a dose of 1000-2000 mg preoperative intravenous iron therapy does not have a profound effect on long-term overall and disease-free survival in anaemic colorectal cancer patients. Future randomised trials with sufficient power are required to draw definite conclusions on the safety of intravenous iron therapy.
abstract_id: PUBMED:31930457
Use of intravenous iron therapy in colorectal cancer patient with iron deficiency anemia: a propensity-score matched study. Purpose: Iron deficiency anemia is common in colorectal cancer patients and is related to poor surgical outcome. Increasing evidence supports preoperative use of intravenous iron (IVI) to correct anemia. Our study investigates effect of preoperative IVI on hemoglobin level.
Methods: From August 2017 to March 2019, colorectal cancer patients with iron deficiency anemia received intravenous iron at least 2 weeks before their scheduled operations (IVI group). These patients' prospectively collected data were compared to a historic cohort of anemic patients who received elective colorectal surgery within 3 years before the study period (non-IVI).
Results: Forty-six patients were included after receiving intravenous iron. After propensity score matching on a 1:2 ratio, 38 patients in the IVI group were matched with 62 patients from the non-IVI group. There was no statistical difference in preoperative mean hemoglobin level between the two groups (8.43 g/dL in IVI, 8.79 g/dL in non-IVI, p = 0.117), but the IVI group had a significantly higher mean hemoglobin level on admission (10.63 g/dL in IVI, 9.46 g/dL in non-IVI, p < 0.001). The IVI group had a higher median hemoglobin rise (1.9 in IVI, 0.6 in non-IVI, p < 0.001) and significantly fewer red cell transfusions (8 patients in IVI, 30 in non-IVI, p = 0.006). Subgroup analysis showed that fewer patients in the IVI group required transfusions in the preoperative period (1 in IVI group, 20 in non-IVI, p < 0.001).
Conclusion: Our data suggested that IVI can significantly increase hemoglobin level in iron deficiency anemic patients before colorectal surgery, with reduction in red cell transfusions.
abstract_id: PUBMED:28092401
Randomized clinical trial of preoperative oral versus intravenous iron in anaemic patients with colorectal cancer. Background: Treatment of preoperative anaemia is recommended as part of patient blood management, aiming to minimize perioperative allogeneic red blood cell transfusion. No clear evidence exists outlining which treatment modality should be used in patients with colorectal cancer. The study aimed to compare the efficacy of preoperative intravenous and oral iron in reducing blood transfusion use in anaemic patients undergoing elective colorectal cancer surgery.
Methods: Anaemic patients with non-metastatic colorectal adenocarcinoma were recruited at least 2 weeks before surgery and randomized to receive oral (ferrous sulphate) or intravenous (ferric carboxymaltose) iron. Perioperative changes in haemoglobin, ferritin, transferrin saturation and blood transfusion use were recorded until postoperative outpatient review.
Results: Some 116 patients were included in the study. There was no difference in blood transfusion use from recruitment to trial completion in terms of either volume of blood administered (P = 0.841) or number of patients transfused (P = 0.470). Despite this, increases in haemoglobin after treatment were higher with intravenous iron (median 1.55 (i.q.r. 0.93-2.58) versus 0.50 (-0.13 to 1.33) g/dl; P < 0.001), which was associated with fewer anaemic patients at the time of surgery (75 versus 90 per cent; P = 0.048). Haemoglobin levels were thus higher at surgery after treatment with intravenous than with oral iron (mean 11.9 (95 per cent c.i. 11.5 to 12.3) versus 11.0 (10.6 to 11.4) g/dl respectively; P = 0.002), as were ferritin (P < 0.001) and transferrin saturation (P < 0.001) levels.
Conclusion: Intravenous iron did not reduce the blood transfusion requirement but was more effective than oral iron at treating preoperative anaemia and iron deficiency in patients undergoing colorectal cancer surgery.
abstract_id: PUBMED:16420822
Intravenous iron in general surgery. Preoperative anemia is the main cause of blood transfusion in surgical patients. In digestive surgery high blood loss and allogeneic blood transfusion (ABT) are associated with serious adverse events and higher mortality. Consequently, we believe that intravenous iron administration is justified to correct perioperative anemia. We present the case of a woman with metastatic colorectal adenocarcinoma in whom intravenous iron administration avoided the use of ABT. Subsequently, the iron metabolism profile improved. This had previously corresponded to a mixed pattern of iron deficiency, that is, to the association of organic and functional iron deficiency.
abstract_id: PUBMED:34694223
Single-dose intravenous ferric carboxymaltose infusion versus multiple fractionated doses of intravenous iron sucrose in the treatment of post-operative anaemia in colorectal cancer patients: a randomised controlled trial. Background: Recent clinical guidelines suggest that treatment of postoperative anaemia in colorectal cancer surgery with intravenous iron reduces transfusion requirements and improves outcomes. The study aimed at comparing two intravenous iron regimens in anaemic patients after colorectal cancer surgery.
Materials And Methods: This was a single-centre, open-label, randomised, controlled trial in patients undergoing elective colorectal cancer surgery. Patients with moderate to severe anaemia (haemoglobin [Hb] <11 g/dL) after surgery were randomly assigned 1:1 to receive ferric carboxymaltose (FC; 1,000 mg, single dose) or iron sucrose (IS; 200 mg every 48 hours until covering the total iron deficit or discharge). Randomisation was stratified by Hb level: <10 g/dL (Group A) or ≥10-10.9 (Group B). The primary endpoint was the change in Hb concentration at postoperative day 30. Secondary endpoints included iron status parameters, transfusion requirements, complications, and length of hospital stay.
Results: From September 2015 to May 2018, 104 patients were randomised (FC 50, IS 54). The median intravenous iron dose was 1,000 mg and 600 mg in the FC and IS groups, respectively. There were no between-group differences in mean change in Hb from postoperative day 1 to postoperative day 30 (FC: 2.5 g/dL, 95% CI: 2.1-2.9; IS: 2.4 g/dL, 95% CI: 2.0-2.8; p=0.52), in transfusion requirements or length of stay. The infection rate was lower in the FC group compared with the IS group (9.8% vs 37.2%, respectively).
Discussion: The administration of approximately 500 mg of IS resulted in an increase in Hb at postoperative day 30 similar to that of 1,000 mg of FC, but it was associated with a higher infection rate. Future research will be needed to confirm the results, and to choose the best regime in terms of effectiveness and side effects to treat postoperative anaemia in colorectal cancer patients.
abstract_id: PUBMED:35453055
Preoperative Intravenous Iron Treatment in Colorectal Cancer: Experience From Clinical Practice. Introduction: Anemia is associated with increased postoperative morbidity and mortality in abdominal surgery. In clinical trials, preoperative i.v. iron treatment increases the preoperative hemoglobin (Hb) concentration, but the effect on transfusion rates is not consistent. This study reports on the experience with preoperative i.v. iron treatment in surgical colorectal cancer (CRC) patients in clinical practice.
Methods: A registry-based cohort study. Surgical colorectal cancer patients with iron deficiency anemia were compared after division into two groups: those who preoperatively received i.v. iron treatment and those who did not. Primary outcomes were preoperative changes in Hb and the difference in perioperative red blood cell transfusion (RBCT) rates. Postoperative complications and mortality rates were analyzed, and a descriptive analysis of what triggered blood transfusions was performed.
Results: A total of 170 patients were included. Of these, 122 had received preoperative i.v. iron treatment and 48 had not. The perioperative transfusion rate was 45% (55/122) in the treatment group and 40% (19/48) in the control group (non-significant difference). The preoperative changes in Hb levels were not different between the two groups. Transfusion practice appeared more liberal and was preceded by higher Hb levels, guided by the national transfusion guideline. I.v. iron-treated patients had a higher rate of postoperative complications. No differences were found in length of stay (LOS) or postoperative mortality.
Conclusions: Preoperative i.v. iron treatment was neither associated with a rise in Hb concentrations at the time of surgery, nor with a reduction in the likelihood of receiving perioperative red blood cell transfusions (RBCT) in colorectal cancer (CRC) patients with iron deficiency anemia.
abstract_id: PUBMED:19731228
Randomized clinical trial of preoperative intravenous iron sucrose to reduce blood transfusion in anaemic patients after colorectal cancer surgery. Background: The transfusion rate following colorectal cancer resection is between 10 and 30 per cent. Receipt of allogeneic blood is not without risk or cost. A preoperative adjunct that reduced the need for transfusion would mitigate these risks. This study was designed to determine whether iron sucrose reduces the likelihood of postoperative blood transfusion in patients undergoing elective colorectal cancer resection.
Methods: In this randomized prospective blinded placebo-controlled trial of patients undergoing resectional surgery with a preoperative diagnosis of colorectal cancer, 600 mg iron sucrose or placebo was given intravenously in two divided doses, at least 24 h apart, 14 days before surgery. The primary outcome measures were serum haemoglobin concentration, recorded at recruitment, immediately before surgery and at discharge, and perioperative blood transfusions.
Results: No difference was demonstrated between treatment groups (iron sucrose, 34 patients; placebo, 26) for any of the primary outcome measures, for either the whole study population or a subgroup of anaemic patients.
Conclusion: This pilot study provided no support for the use of intravenous iron sucrose as a preoperative adjunct to increase preoperative haemoglobin levels and thereby reduce the likelihood of allogeneic blood transfusion for patients undergoing resectional surgery for colorectal cancer.
Registration Number: 2005-003608-13UK (Medicines and Healthcare products Regulatory Agency).
Answer: The evidence on the usefulness of intravenous iron for reducing transfusions in surgically treated colorectal cancer patients is mixed. Some studies suggest that intravenous iron does not significantly reduce blood transfusion requirements in the postoperative period of colorectal surgery patients with anemia (PUBMED:22552496). However, other research indicates that preoperative intravenous iron treatment can reduce postoperative complications and the prevalence of postoperative anemia in anemic patients with colon carcinoma, although no difference was found in the amount of red blood cell transfusions between groups (PUBMED:34931280).
Further, a study from the IVICA trial showed that preoperative intravenous iron improved quality of life in anaemic colorectal cancer patients undergoing elective surgery compared to oral iron (PUBMED:30963552). Another study found that preoperative intravenous iron therapy did not have a profound effect on long-term overall and disease-free survival in anaemic colorectal cancer patients (PUBMED:29937171).
A propensity-score matched study suggested that preoperative intravenous iron can significantly increase hemoglobin levels in iron deficiency anemic patients before colorectal surgery, with a reduction in red cell transfusions (PUBMED:31930457). Conversely, a randomized clinical trial comparing preoperative intravenous and oral iron found that intravenous iron did not reduce the blood transfusion requirement but was more effective at treating preoperative anaemia and iron deficiency (PUBMED:28092401).
In terms of postoperative treatment, a study comparing two intravenous iron regimens in anaemic patients after colorectal cancer surgery found no between-group differences in hemoglobin change or transfusion requirements, although the infection rate was lower in the group receiving a single dose of ferric carboxymaltose (PUBMED:34694223).
Lastly, a registry-based cohort study reported that preoperative intravenous iron treatment was not associated with a rise in hemoglobin concentrations at the time of surgery, nor with a reduction in the likelihood of receiving perioperative red blood cell transfusions in colorectal cancer patients with iron deficiency anemia (PUBMED:35453055). Another randomized clinical trial also found no support for the use of intravenous iron sucrose as a preoperative adjunct to increase preoperative hemoglobin levels and thereby reduce the likelihood of allogeneic blood transfusion (PUBMED:19731228). |
Instruction: Are outcomes after total knee arthroplasty worsening over time?
Abstracts:
abstract_id: PUBMED:27717340
Increasing comorbidity is associated with worsening physical function and pain after primary total knee arthroplasty. Background: Previous studies suggested that pre-operative comorbidity was a risk factor for worse outcomes after TKA. To our knowledge, studies have not examined whether postoperative changes in comorbidity impact pain and function outcomes longitudinally. Our objective was to examine if increasing comorbidity postoperatively is associated with worsening physical function and pain after primary total knee arthroplasty (TKA).
Methods: We performed a retrospective chart review of veterans who had completed Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) and Short Form-36 (SF36) surveys at regular intervals after primary TKA. Comorbidity was assessed using a variety of scales: validated Charlson comorbidity index score, and a novel Arthroplasty Comorbidity Severity Index score (including medical index, local musculoskeletal index [including lower extremity and spine] and TKA-related index subscales; higher scores are worse), at multiple time-points post-TKA. We used mixed model linear regression to examine the association of worsening comorbidity post-TKA with change in WOMAC and SF-36 scores in the subsequent follow-up periods, controlling for age, length of follow-up, and repeated observations.
Results: The study cohort consisted of 124 patients with a mean age of 71.7 years (range 58.6-89.2, standard deviation (SD) 6.9) followed for a mean of 4.9 years post-operatively (range 1.3-11.4; SD 2.8). We found that post-operative worsening of the Charlson Index score was significantly associated with worsening SF-36 Physical Function (PF) (beta coefficient (β) = -0.07; p < 0.0001), SF-36 Bodily Pain (BP) (β = -0.06; p = 0.002), and WOMAC PF subscale (β = 0.08; p < 0.001; higher scores are worse) scores, in the subsequent periods. Worsening novel medical index subscale scores were significantly associated with worsening SF-36 PF scores (β = -0.03; p = 0.002), SF-36 BP (β = -0.04; p < 0.001) and showed a non-significant trend for worse WOMAC PF scores (β = 0.02; p = 0.11) subsequently. Local musculoskeletal index subscale scores were significantly associated with worsening SF-36 PF (β = -0.05; p = 0.001), SF-36 BP (β = -0.04; p = 0.03) and WOMAC PF (β = 0.06; p = 0.01) subsequently. None of the novel index subscale scores were significantly associated with WOMAC pain scores. TKA complications, as assessed by TKA-related index subscale, were not significantly associated with SF-36 or WOMAC domain scores.
Conclusions: Increasing Charlson index as well as novel medical and local musculoskeletal index subscale scores (from novel Arthroplasty Comorbidity Severity Index) post-TKA correlated with subsequent worsening of physical function and pain outcomes post-TKA. Further studies should examine which comorbidity management could have the greatest impact on these outcomes.
abstract_id: PUBMED:31516982
Clinical and functional outcomes of primary total knee arthroplasty: a South American perspective. Background: The aim of this study was to report the clinical and functional outcomes as well as complications after primary total knee arthroplasty in a cohort of Chilean patients.
Methods: We retrospectively reviewed 191 total knee arthroplasties performed in 182 patients over an 8-year period, with a minimum follow-up of 2 years. The primary outcome measure was the rate of major complications. Secondary outcomes were minor complications, residual symptoms, level of satisfaction, and the Knee Injury and Osteoarthritis Outcome Score.
Results: Global complication rate was 15.5%, reintervention rate was 9.2%, and revision rate was 2.5%. Major and minor complications were seen in 9.2% and 5.1% of patients, respectively. Average Knee Injury and Osteoarthritis Outcome Score was 77 points (14-100), and 90% of patients reported satisfaction with the procedure. At 2-year follow-up, 45.8% of patients had some degree of range of motion limitations.
Conclusions: Our results show a medium-term follow-up complication rate comparable to those described in the literature. This is the first series to report on the clinical and functional outcomes after primary total knee arthroplasty in a Chilean population.
abstract_id: PUBMED:27474511
A Current Procedural Terminology Code for "Knee Conversion" Is Needed to Account for the Additional Surgical Time Required Compared to Total Knee Arthroplasty. Background: Previous knee injury requiring surgical intervention increases the rate of future arthroplasty. Coding modifiers for removal of previous hardware or increased complexity offer inconsistent results. A Current Procedural Terminology code for knee conversion does not currently exist as it does for conversion hip arthroplasty. We investigate the extra time associated with conversion knee arthroplasty.
Methods: Sixty-three total knee arthroplasty (TKA) cases in the setting of previous knee hardware were identified from our institution between 2008 and 2015. Knee conversions were matched to primary TKA by age, gender, body mass index, Charlson Comorbidity Index, and surgeon, in a 3:1 ratio. Patients who underwent knee conversions were compared to matched TKA with regard to operative time, length of stay, discharge destination, readmission, and repeat procedures within 90 days from index procedure.
Results: The mean operating room time for primary TKA was 71.7 minutes (range 36-138). The mean operating room time for knee conversion was significantly greater by an additional 31 minutes; mean 102.1 minutes (range 56-256 minutes, P < .0001). Rates of readmission, 0.5% vs 3.2%, and repeat procedures, 5.3% vs 12.7%, within 90 days were greater for knee conversions. There was no difference in length of stay or discharge destination.
Conclusion: Total knee conversion results in a 43% increase in operative time and more than twice the rate of readmission and repeat procedures within 90 days compared to TKA. This suggests the need for an additional Current Procedural Terminology code for knee conversion arthroplasty to compensate surgeons for the extra time required for conversions.
abstract_id: PUBMED:35685980
A Reliable Surgical Approach to Revision Total Knee Arthroplasty. Background: The surgical exposure obtained in revision total knee arthroplasty should facilitate the utilisation of instrumentation and implants, including adjuncts such as stemmed prostheses, bone allograft, and artificial augments. We have previously identified within this cohort of revision total knee arthroplasty patients a high satisfaction rate of 93.5% at a mean 6.5 years of follow-up and a high level of postoperative function. We, therefore, seek to describe in detail the operative technique and perioperative care and report the early postoperative complications.
Methods: We report on the surgical approach, closure technique, and postoperative care used by the senior author for revision total knee arthroplasty procedures. The patient demographics, intraoperative details, and postoperative outcomes are also reported. We aim to provide a clear description of the intraoperative technique and postoperative outcome, facilitating adoption or comparison with other surgeons or techniques. Patient inclusion criteria were revision total knee arthroplasty performed by the senior author using the PFC (Depuy) prosthesis at John Flynn Private Hospital with a minimum of 2-year postoperative follow-up. A retrospective chart review was combined with a structured telephone assessment questionnaire to assess outcomes.
Results: A total of 202 revision total knee arthroplasties were available for follow-up in 185 patients. The mean 1-year postoperative range of motion was 110°. Key features of surgical approach include incision planning, soft-tissue plane development, parapatellar scar debridement, safe removal of implants, management of bone defects, and closure technique. The overall 90-day complication rate was 9%, including 4.4% requiring manipulation under anaesthesia and 3% superficial surgical site infections (1 patient requiring intravenous antibiotics).
Conclusions: We suggest that the described technique is reproducible and reliable. It rarely requires modification and facilitates successful postoperative outcomes with a low complication rate. The adoption of this surgical technique allows surgeons to approach complex knee arthroplasty with confidence in the appropriate exposure of anatomy, facilitating subsequent steps in their arthroplasty procedures.
abstract_id: PUBMED:36710140
Clinical Outcomes of Total Knee Arthroplasty With Concomitant Total Ankle Arthroplasty Versus Ankle Arthrodesis. Prior studies have demonstrated a high incidence of ankle osteoarthritis (OA) in patients undergoing total knee arthroplasty (TKA) as well as inferior outcomes in the setting of ankle OA or hindfoot malalignment. Little is known about the effect of the 2 most common surgical treatments for ankle OA, ankle arthrodesis and total ankle arthroplasty (TAA), on TKA. The hypothesis is that the preservation of ankle motion afforded by total ankle arthroplasty may reduce pathologic stresses across the knee joint. This study compares outcomes of patients who underwent both TKA and TAA versus those who underwent TKA and ankle arthrodesis. We retrospectively reviewed a cohort of patients who had undergone TKA and either TAA or ankle arthrodesis at this institution, examining Knee Injury and Osteoarthritis Outcome Scores, Foot and Ankle Ability Measure scores, revision surgery, knee range of motion, and pain. There were 69 eligible subjects, 13 who had undergone total knee arthroplasty and total ankle arthroplasty and 56 who had undergone TKA and ankle arthrodesis. No significant differences were observed in KOOS Jr scores, FAAM scores, incidence of revision, knee range of motion, or pain at final follow-up (p > .05). Mean follow-up time was 46 months after both surgeries were completed. Equivalent outcomes were observed between the 2 groups. The presence of a TKA should not alter the indications for treatment of ankle OA with TAA versus arthrodesis. Further studies are needed as these relatively rare concomitant procedures are likely to become more common in the future.
abstract_id: PUBMED:22905072
Revision of unicondylar to total knee arthroplasty: a systematic review. Isolated unicompartmental osteoarthritis in the young patient is a difficult problem to treat; they may be too young to consider total knee arthroplasty due to difficulties with inevitable future revision. Unicompartmental knee arthroplasty is one possible solution as it is perceived by some as being a smaller surgical insult than total knee arthroplasty, with easier revision to total knee arthroplasty than a revision total knee arthroplasty. A total knee arthroplasty performed as a revision unicondylar knee arthroplasty is thought by some authors to have equivalent functional outcomes to a primary total knee replacement.However, there have been several studies suggesting that revision is not as simple as suggested, and that function is not as good as primary total knee arthroplasty.We performed a systematic review of the literature regarding outcomes after revision of a unicondylar knee arthroplasty.Although there are many studies proposing selective use of the unicondylar knee arthroplasty, there are a number of studies highlighting difficulties with revision and poorer outcomes, and, therefore, the unicondylar knee arthroplasty cannot be considered a small procedure that will 'buy time' for the patient, and have results equal to a primary knee arthroplasty when revised. Further controlled studies, ideally randomised, are required before final conclusions can be drawn.
abstract_id: PUBMED:28983215
Mid-Term Outcomes of Metal-Backed Unicompartmental Knee Arthroplasty Show Superiority to All-Polyethylene Unicompartmental and Total Knee Arthroplasty. Background: Two commonly used tibial designs for unicompartmental knee arthroplasty (UKA) are all-polyethylene "inlay" and metal-backed "onlay" components. Biomechanical studies showed that the metal baseplate in onlay designs better distributes forces over the tibia but studies failed to show differences in functional outcomes between both designs at mid-term follow-up. Furthermore, no studies have compared both designs with total knee arthroplasty (TKA).
Questions/purposes: The goal of this study was to compare outcomes of inlay UKA and onlay UKA at mid-term follow-up and compare these with TKA outcomes.
Methods: In this retrospective study, 52 patients undergoing inlay medial UKA, 59 patients undergoing onlay medial UKA, and 59 patients undergoing TKA were included. Western Ontario and McMaster Universities Arthritis Index scores were collected preoperatively and at mean 5.1-year follow-up (range 4.0-7.0 years).
Results: Preoperatively, no differences were observed in patient characteristics or outcome scores. At mid-term follow-up, patients undergoing onlay medial UKA reported significantly better functional outcomes than those undergoing inlay medial UKA (92.0 ± 10.4 vs. 82.4 ± 18.7, p = 0.010) and when compared to TKA (92.0 ± 10.4 vs. 79.6 ± 18.5, p < 0.001), while no significant differences between inlay medial UKA and TKA were noted. No significant differences in revision rates were found.
Conclusion: Functional outcomes following onlay metal-backed medial UKA were significantly better compared to inlay all-polyethylene medial UKA and to TKA. Based on the results of this study and on biomechanical and survivorship studies in the literature, we recommended using metal-backed onlay tibial components for unicompartmental knee arthroplasty.
abstract_id: PUBMED:35251543
Clinical Outcomes Following Revision Total Knee Arthroplasty: Minimum 2-Year Follow-up. Background: The longer-term outcomes of revision total knee arthroplasty are not well described in the current literature. Managing patient expectations of revision total knee arthroplasty can be challenging for orthopedic surgeons due to a paucity of data to guide decision-making. We present outcomes of revision total knee arthroplasty performed by a single surgeon over a 12-year period from 2004 through 2015.
Methods: A retrospective review of hospital and private medical records demonstrated 202 revision total knee arthroplasties performed by the senior author in 178 patients from 2004 through 2015. Of these, 153 patients were available for assessment. Patients were contacted and invited to participate in a structured telephone interview to assess Oxford Knee Score (OKS) and patient satisfaction. All patients received the PFC (Depuy) prosthesis at a single institution and were followed up for minimum 2 years postoperatively at the time of review. Retrospective chart review was used to obtain other data for analysis including patient demographics, preoperative and postoperative range of motion (ROM), and intraoperative details.
Results: This cohort demonstrated a 93.5% survival rate and an 85% satisfaction rate at a mean of 6.5 years postoperatively. Mean ROM improved from 100° (range, 5°-145°) to 112° (range, 35°-135°) (p < 0.001). The mean OKS was 39.25 (range, 14-48). The factors associated with improved postoperative outcomes included male gender, fewer previous revision total knee arthroplasty procedures, increased preoperative ROM, and receiving a less constrained implant.
Conclusions: This study provides a comprehensive description of outcomes following revision total knee arthroplasty in a large patient cohort with a long follow-up. Although revision total knee arthroplasty is a challenging and complex aspect of arthroplasty surgery, high patient satisfaction and good functional outcomes can be achieved for the majority of patients.
abstract_id: PUBMED:36082284
Unicompartmental Knee Arthroplasty vs Total Knee Arthroplasty: A Risk-adjusted Comparison of 30-day Outcomes Using National Data From 2014 to 2018. Background: When clinically indicated, the choice of performing a total knee arthroplasty (TKA) vs a unicompartmental knee arthroplasty (UKA) is dictated by patient and surgeon preferences. Increased understanding of surgical morbidity may enhance this shared decision-making process. This study compared 30-day risk-adjusted outcomes in TKA vs UKA using a national database.
Methods: We analyzed data from the National Surgical Quality Improvement Program database for patients who received TKA or UKA between 2014 and 2018. The main outcomes were blood transfusion, operation time, length of stay, major complication, minor complication, unplanned reoperation, and readmission. Comparisons of odds of the outcomes of interest between TKA and UKA patients were analyzed using multivariate regression models accounting for confounders.
Results: We identified 274,411 eligible patients, of whom 265,519 (96.7%) underwent TKA, while 8892 (3.3%) underwent UKA. Risk-adjusted models that compared perioperative and postoperative outcomes of TKA and UKA showed that the odds of complications such as blood transfusion (adjusted odds ratio [aOR], 19.74; 95% confidence interval [CI]: 8.19-47.60), major (aOR, 1.87; 95% CI: 1.27-2.77) and minor complications (aOR, 1.43; 95% CI: 1.14-1.79), and readmission (aOR, 1.41; 95% CI: 1.16-1.72) were significantly higher among patients who received TKA than among those who received UKA. In addition, operation time (aOR, 7.72; 95% CI: 6.72-8.72) and hospital length of stay (aOR, 1.11; 95% CI: 1.05-1.17) were also higher among the TKA recipients compared to those who received UKA.
Conclusions: UKA is associated with lower rates of adverse perioperative outcomes compared to TKA. Clinical indications and surgical morbidity should be considered in the shared-decision process.
abstract_id: PUBMED:33889700
Trends in Operative Time and Short-Term Outcomes After Conventional and Navigated Total Knee Arthroplasty. Background: Adoption of navigated total knee arthroplasty (Nav-TKA) is increasing. However, it has been suggested that a perceived decrease in surgical efficiency and a lack of proven superior functional outcomes associated with Nav-TKA have hindered its widespread adoption.
Methods: The American College of Surgeons National Surgical Quality Improvement Program was queried to identify patients who had undergone TKA with or without navigation between 2012 and 2018. Patients were further subclassified based on the type of navigation used, image-guided or imageless. Multivariate logistic regression was used to compare operative time and 30-day complication rates between conventional TKA (Conv-TKA) and Nav-TKA with and without image guidance.
Results: A total of 316,210 Conv-TKAs and 8554 Nav-TKAs (8270 imageless, 284 image-guided) were identified. Across the study period, the use of Nav-TKA was associated with a 1.5-minute increase in operative time. However, the overall time burden decreased over the study period, and by 2018, the mean operative time for Nav-TKA was 2.4 minutes less than that of Conv-TKA. Compared with Conv-TKA, Nav-TKA was associated with decreased rates of postoperative transfusion and surgical site complications but a similar incidence of systemic thromboembolism.
Conclusions: This is the first large-scale database study to examine the differences in operative time between Conv-TKA and Nav-TKA. The time burden associated with Nav-TKA decreased over the study period and even reversed by 2018. Nav-TKA was associated with lower rates of postoperative transfusion and surgical site complications. Further studies are needed to evaluate the long-term and functional outcomes between conventional and navigated knee arthroplasty techniques.
Answer: The abstracts provided do not directly address whether outcomes after total knee arthroplasty (TKA) are worsening over time on a general scale. However, some of the abstracts do discuss factors that can influence outcomes after TKA, such as comorbidities, previous surgeries, and the type of arthroplasty performed.
For instance, one study found that increasing comorbidity postoperatively was associated with worsening physical function and pain after primary TKA (PUBMED:27717340). This suggests that outcomes can worsen over time if patients develop more comorbid conditions after their surgery.
Another study highlighted that previous knee injury requiring surgical intervention increases the rate of future arthroplasty and that conversion knee arthroplasty (from previous hardware to TKA) results in a significant increase in operative time and higher rates of readmission and repeat procedures within 90 days compared to primary TKA (PUBMED:27474511). This could imply that outcomes may be less favorable in patients who have had previous knee surgeries.
A study comparing unicompartmental knee arthroplasty (UKA) to TKA found that UKA is associated with lower rates of adverse perioperative outcomes compared to TKA (PUBMED:36082284). This might suggest that the choice of procedure can affect the short-term outcomes, but it does not provide a trend over time for TKA specifically.
Lastly, a study on navigated TKA reported that the time burden associated with navigated TKA decreased over the study period and was associated with lower rates of postoperative transfusion and surgical site complications compared to conventional TKA (PUBMED:33889700). This indicates that with advancements in surgical techniques, such as the use of navigation, outcomes may actually improve over time.
In summary, the abstracts do not collectively suggest that outcomes after TKA are worsening over time. Instead, they highlight various factors that can affect outcomes, such as comorbidities, previous surgeries, the type of arthroplasty, and advancements in surgical techniques. |
Instruction: Is water out of vogue?
Abstracts:
abstract_id: PUBMED:17644930
In Vogue: North American and British representations of women smokers in Vogue, 1920s-1960s. The image of a cigarette in a woman's hand symbolizes independence, non-conformity and personal power, despite widespread awareness that smoking has serious health risks. Through a content analysis of North American and British editions of Vogue, we trace the representation of women smokers from the 1920s-1960s. Vogue located the cigarette within the culture of the feminine elite. We explore the place of cigarette smoking within the constellation of behaviours and appearances presented as desirable characteristics of elitism, through the themes of lifestyle, "the look," and feminine confidence. We chart these themes' transformations over time and national contexts.
abstract_id: PUBMED:13555971
Vogue, discredit & rehabilitation of chronic mediastinitis N/A
abstract_id: PUBMED:10842846
Status of domestic wastewater management in relation to drinking-water supply in two states of India. In India, supply of drinking water, treatment and disposal of domestic wastewater including faecal matter are managed by local bodies. The existing status of water supply, characteristics of domestic wastewater, and modes of collection, treatment and disposal of sewage and faecal matter in 82 municipalities and 4 municipal corporations were assessed in the States of Bihar and West Bengal in India. Domestic wastewater in the municipal areas is collected through open kachha (earthen), pucca (cement-concrete) and natural drains and discharged into water courses or disposed of on land. A scavenger carriage system for night soil disposal is in vogue at several places in the surveyed States. Open defecation by the inhabitants in some of the municipalities also occurs. The existing methods of collection, treatment and disposal of sewage impair the water quality of different water sources. Techno-economically viable remedial measures for providing basic amenities, namely safe drinking-water supply and proper sanitation, to the communities of these two States of India are suggested and discussed.
abstract_id: PUBMED:1413983
Water quality. The quality of water is increasingly important in today's polluted world. The nephrologist and his patients are great consumers of water; water for dialysis should be of the highest quality, as each maintenance dialysis patient uses up to 150 liters 3 times per week for years, in direct contact with his blood through a very thin dialysis membrane. Water may be polluted at the collection, treatment, or distribution centers, or locally at the place of consumption. Aluminum, chloramine, pesticides, copper, herbicides and many other substances, as well as bacteria, may be found in water. For these reasons the nephrologist uses a series of devices to clean or purify the water before use in his dialysis unit: softeners, de-ionizers, reverse osmosis, charcoal filters, ultraviolet irradiation, among others. A correct water circuit is important as well, as dead spaces could cause bacterial contamination. The ultrapure water has to be mixed with a concentrated electrolyte solution before use in the dialyzer. The author discusses all these different aspects of water treatment in the particular situation of hemodialysis.
abstract_id: PUBMED:7702376
Is water out of vogue? A survey of the drinking habits of 2-7 year olds. Objective: To survey the drinking habits of young children with reference to the consumption of plain water, and to estimate the proportion of a child's recommended energy intake contributed by drinks.
Design: A prospective survey.
Setting: Health centres, mother and toddler groups, and infant schools in and around Southampton.
Subjects: 39 preschool and 66 infant schoolchildren.
Interventions: Parents kept a diary of all drinks consumed by the child over 48 hours. Parents were interviewed with a questionnaire about the drinking habits of their child.
Main Outcome Measures: The type of drinks and volume of fluid consumed over 48 hours; the proportion of a child's recommended energy intake consumed through drinks.
Results: 72.5% of the preschool group and 50% of the infant school group never drank plain water. Squash was by far the most frequently consumed drink. 15% of the preschool group consumed just under 50% of their recommended daily energy intake in drinks.
Conclusions: Young children consume large quantities of squash, which constitutes a substantial energy supply. It is possible that they are conditioned at an early age to the sweet taste of drinks that may be of no nutritional benefit to them.
abstract_id: PUBMED:29238002
Yin-yang regulating effects of cancer-associated genes, proteins, and cells: An ancient Chinese concept in vogue in modern cancer research. Great achievements have been made in human cancer research, but most of this research is focused on conditions at the microscopic rather than the systemic level. Recent studies have increasingly cited the ancient Chinese theory of yin-yang in an effort to expand beyond the microscopic level. Various cancer-associated genes and proteins such as mitogen-activated protein kinase (MAPK), p38, p53, c-Myc, tumor necrosis factor (TNF)-α, NF-κB, Cyclin D1, and cyclin-dependent kinase (CDK) and cells such as T cells, B cells, macrophages, neutrophils, and fibroblasts have been reported to regulate various types of cancers in a yin-yang manner. These studies have brought the theory of yin-yang into vogue in cancer research worldwide.
abstract_id: PUBMED:1944635
Water is not only water N/A
abstract_id: PUBMED:35231506
Soil carbon sequestration, greenhouse gas emissions, and water pollution under different tillage practices. Tillage is a common agricultural practice and a critical component of agricultural systems that is frequently employed worldwide in croplands to reduce climatic and soil restrictions while also sustaining various ecosystem services. Tillage can affect a variety of soil-mediated processes, e.g., soil carbon sequestration (SCS) or depletion, greenhouse gas (GHG) (CO2, CH4, and N2O) emission, and water pollution. Several tillage practices are in vogue globally, and they exhibit varied impacts on these processes. Hence, there is a dire need to synthesize, collate and comprehensively present these interlinked phenomena to facilitate future research. This study deals with the co-benefits and trade-offs produced by several tillage practices on SCS and related soil properties, GHG emissions, and water quality. We hypothesized that improved tillage practices could enable agriculture to contribute to SCS and mitigate GHG emissions and leaching of nutrients and pesticides. Based on our current understanding, we conclude that sustainable soil moisture level and soil temperature management is crucial under different tillage practices to offset leaching losses of soil-stored nutrients/pesticides, reduce GHG emissions, and ensure SCS. For instance, higher carbon dioxide (CO2) and nitrous oxide (N2O) emissions from conventional tillage (CT) and no-tillage (NT) could be attributed to the fluctuations in soil moisture and temperature regimes. In addition, NT may enhance nitrate (NO3-) leaching over CT because of improved soil structure, infiltration capacity, and greater water flux, suggesting, however, that the eutrophication potential of NT is high. Our study indicates that the evaluation of the eutrophication potential of different tillage practices is still overlooked. Our study suggests that improving tillage practices in terms of mitigation of N2O emission and preventing NO3- pollution may be sustainable if nitrification inhibitors are applied.
abstract_id: PUBMED:7209272
Clean water. What are the Kempische Steenkoolmijnen doing for this? Social obligations about the environment have always been a point of major concern at the N.V. Kempense Steenkoolenmijnen, so the company could not ignore environmental standards and existing regulations. This paper focusses attention on the importance of clean water for human health and civilization. Natural water occurs as rainwater, groundwater, surface water and seawater. Water pollution, its measurement and the possible sources of this pollution are briefly discussed. Attention is also paid to existing regulations which implicate the introduction of discharge conditions. The N.V. Kempense Steenkoolenmijnen can't deny being a polluter. The company produces five types of wastewater, which are discharged into several surface waters. This is a specific contamination containing salts and coal particles. Flow rates are high but variable. To fight this pollution the N.V. Kempense Steenkoolenmijnen has worked out an investment program for each of its five collieries. The treatment of wastewater at a settling basin of the Zolder colliery is described as an example.
abstract_id: PUBMED:7209282
Dosimetry disparities resulting from two accepted methods of calibration of cobalt-60 teletherapy machines. The primary concern of all involved in the management of radiotherapeutic patients is that the tumor dose delivered to each and every patient in one institution is identical to that delivered anywhere else. Despite the advocacy 1,2,3,4 for the calibration of high energy photon beams at 5 cm depth or beyond in a water phantom to reduce the effect of electron "contamination" from collimators, filters, applicators, etc., the "in-air" method of calibration is still in vogue in many institutions in the United States and elsewhere. The introduction of S.I. units in radiology is likely to create ambiguity in proper conversion factors from exposure to absorbed dose which can be avoided if field instruments are calibrated in terms of absorbed dose in water under specified conditions. The present study shows that the estimated error can be as much as 7% low when the "in-air" method of calibration is used instead of measurements in a large water phantom at 5 cm depth for cobalt-60 photon beams and hence recommends that the procedure for "in-air" calibration for high energy photon beam should be discontinued.
Answer: Based on the provided abstracts, the concept of water being "out of vogue" is not universally supported. While one study (PUBMED:7702376) suggests that young children may be consuming less plain water and more sweetened drinks like squash, which could condition them to prefer sweet tastes and provide substantial energy intake, this does not necessarily mean that water is out of vogue in all contexts.
Other abstracts highlight the critical importance of water in various sectors. For instance, PUBMED:1413983 discusses the necessity of high-quality water for dialysis patients, indicating that water is very much in vogue in medical treatments where purity is essential. Similarly, PUBMED:35231506 emphasizes the role of water in agricultural practices, particularly in relation to soil carbon sequestration, greenhouse gas emissions, and water pollution under different tillage practices. This suggests that water management and quality are significant concerns in agriculture and environmental science.
Furthermore, PUBMED:7209272 addresses the importance of clean water for human health and the environment, indicating ongoing efforts to reduce water pollution by a mining company. This demonstrates that water is still a critical resource that requires careful management and is not out of vogue in environmental considerations.
In the context of cancer research, PUBMED:29238002 mentions the yin-yang regulating effects of various cancer-associated genes, proteins, and cells, using an ancient Chinese concept that is in vogue in modern cancer research. While this abstract does not directly address water, it shows that traditional concepts can be relevant and "in vogue" in contemporary scientific research, which could be analogous to the enduring importance of water.
Overall, while there may be shifts in consumption patterns among certain populations, the significance of water in health, environmental management, and scientific research suggests that water is not out of vogue in a broader sense. |
Instruction: Is there really no benefit of vertebroplasty for osteoporotic vertebral fractures?
Abstracts:
abstract_id: PUBMED:22729693
Is there really no benefit of vertebroplasty for osteoporotic vertebral fractures? A meta-analysis. Background: Osteoporotic vertebral compression fractures (VCFs) are the most common osteoporotic fractures. Although percutaneous vertebroplasty (PVP) reportedly relieves pain and improves function, a recent pooled analysis from two multicenter randomized controlled trials concluded that the improvement in pain and disability with PVP was similar to that with sham surgery.
Questions/purpose: Using meta-analysis we therefore asked whether compared with either nonoperative therapy or a sham injection for patients with VCF, PVP would (1) better relieve pain, (2) provide greater improvement in pain-related disability, and (3) increase the recurrence of vertebral fractures.
Methods: We searched PubMed, EMBASE, Medline, and the Cochrane library using the keywords "vertebroplasty AND osteoporosis OR fracture". We included nine of the 469 articles identified. Using a random effects model, we calculated the weighted mean differences to evaluate the pain reduction at different times as the primary outcome. Pain-related disability was assessed by a quality of life (QOL) measure. Improvement of QOL and recurrence of vertebral fractures were the secondary outcomes. We used subgroup analysis to reinvestigate pain relief and function improvement of PVP based on two different controls: nonoperative therapy and sham injection. The total number of patients was 886.
Results: Pain scoring was similar between the PVP group and the sham injection group at 1 to 29 days and 90 days. However, compared with nonoperative therapy, PVP reduced pain at all times studied. QOL in the PVP group was improved or tended to be improved compared with QOL for both control groups. The risk of new fractures was similar between the PVP groups and both control groups.
Conclusions: Different control groups may have accounted for the different conclusions in the literature regarding the ability of PVP to relieve pain and restore function recovery. Compared with nonoperative treatment PVP relieved pain better and improved QOL. PVP did not increase the risk of new fractures.
Level Of Evidence: Level II, therapeutic study. See Guidelines for Authors for a complete description of levels of evidence.
abstract_id: PUBMED:32167251
Vertebroplasty: the new CHUV consensus. Should we continue to treat patients suffering from an acute osteoporotic vertebral fracture with vertebroplasty? What is the potential benefit? What are its indications? What are its risks? How should it be performed? How should the osteoporosis evaluation and therapy be managed? In 2009 we published the "CHUV consensus" on the management of osteoporotic vertebral fractures by vertebroplasty. Here we propose an update incorporating recent knowledge on the management of vertebral insufficiency fractures by percutaneous cementoplasty.
abstract_id: PUBMED:36656324
Early Vertebroplasty for Severely Painful Acute Osteoporotic Compression Fractures: A Critical Review of the Literature. Vertebroplasty has emerged over the last 30 years as a common treatment for painful osteoporotic vertebral fractures. Patient selection and the time at which vertebroplasty is offered to the patient varies between centres and regions. Vertebroplasty has been studied in comparison to placebo intervention in five blinded trials. One such trial showed more benefit from vertebroplasty than placebo when the procedure was mostly performed within 3 weeks of fracture onset. Others showed no additional benefit from vertebroplasty compared to placebo when it was performed later in the natural history of the fracture. In this review, we examine data from blinded and open label randomised studies of vertebroplasty for evidence relating specifically to the use of early vertebroplasty for patients with severely painful acute osteoporotic fractures.
abstract_id: PUBMED:23326986
Complications in vertebroplasty. Background: Vertebroplasty is a minimally invasive procedure that provides pain relief in osteoporotic or malignancy-related vertebral compression fractures. However, many reports describe both asymptomatic and serious complications. The aim of the present study was to collect data on and report the complications of vertebroplasty from our experience at a single institute.
Material And Method: Three hundred and twenty-five vertebroplasty procedures performed in 236 patients at our institute were retrospectively reviewed. Data on diagnosis and age at the time of the procedure were collected. All complications found were reviewed in detail.
Results: The most commonly treated levels were at the thoracolumbar junction (51.4%). Osteoporosis was the most common cause of fracture. The present study found 88 (27%) complications, with 26 (8%) symptomatic patients. The most common complication was cement leakage, with the intervertebral disc as the most common site (42.9%). Spinal canal leakage was found in 14 cases (20%). Four of these 14 cases had neurological complications and needed further management. Two cases had neurologic complications from needle injury. Adjacent-level collapse was found in 13 patients (4%) and remote-segment collapse occurred in 5 patients (1.5%). Three patients had progressive kyphosis that required later surgical treatment. One asymptomatic cement pulmonary embolism was found in the present study.
Conclusion: The complications of vertebroplasty were mostly asymptomatic, but serious complications such as neurologic injury can occur. Vertebroplasty can be considered a relatively safe treatment for osteoporotic vertebral fractures. Meticulous technique should be used during the procedure to avoid leakage complications.
abstract_id: PUBMED:20523998
Have recent vertebroplasty trials changed the indications for vertebroplasty? Two different investigators in the New England Journal of Medicine recently published two randomized controlled trials (RCTs) regarding the efficacy of vertebroplasty for painful osteoporotic vertebral compression fractures. In their results, both investigators concluded that there was no significant difference in pain relief between the vertebroplasty group and control group 1 month after treatment. The trials described a different patient cohort from the one we treat with vertebroplasty. Both trials enrolled patients who had back pain for ≤12 months. This duration of pain was far too long for a vertebroplasty trial, resulting in parallel trials of vertebroplasty on healed fractures. Where a study is needed, it should comprise patients with acute osteoporotic compression fractures, particularly those who are hospitalized or bedridden because of the pain of such fractures. Magnetic resonance imaging was not systematically performed before vertebroplasty, and inpatients were excluded. Inpatients with acute fracture pain are the group most likely to respond well to vertebroplasty. Enrolment was a problem in both trials. Randomization in both RCTs took >4 years for completion. We advise that vertebroplasty be offered to patients with recent fractures <8 weeks old who have uncontrolled pain as well as patients progressing to osteonecrosis and the intravertebral vacuum phenomenon (Kümmell disease). The availability of recent MRI scanning is also critical to proper patient selection.
abstract_id: PUBMED:24436552
Vertebroplasty. Percutaneous vertebroplasty has become widely accepted as a safe and effective minimally invasive procedure for the treatment of painful vertebral body compression fractures refractory to medical therapy. In this article, the authors review the indications and contraindications for vertebroplasty, principles of appropriate patient selection, useful techniques to achieve optimal outcomes, and the potential risks and complications of the procedure.
abstract_id: PUBMED:38171282
Treatment for Osteoporotic Vertebral Fracture - A Short Review of Orthosis and Percutaneous Vertebroplasty and Balloon Kyphoplasty. The management of osteoporotic vertebral fractures (OVFs) in the elderly includes nonoperative treatment and vertebroplasty, but has not been established due to the diversity of patient backgrounds. The purpose of this study was to compare the impact of 3 treatment modalities for the management of OVF: orthotic treatment, percutaneous vertebroplasty (PVP), and balloon kyphoplasty (BKP). The method was based on an analysis of the latest RCTs, meta-analyses, and systematic reviews on these topics. No study showed a benefit of bracing with a high level of evidence. Trials were found that showed comparable outcomes without orthotic treatment. Only 1 randomized controlled trial (RCT) showed an improvement in pain relief up to 6 months compared with no orthosis. Rigid and nonrigid orthoses were equally effective. In 4 of 5 RCTs comparing vertebroplasty and sham surgery, the two were equally effective, and one RCT showed superior pain relief with vertebroplasty within 3 weeks of onset. In open trials comparing vertebroplasty with nonoperative management, vertebroplasty was superior. PVP and BKP were comparable in terms of pain relief, improvement in quality of life, and adjacent vertebral fractures. BKP does not affect global sagittal alignment, although BKP may restore vertebral body height. An RCT was published showing that PVP was effective in chronic cases without pain relief. Vertebroplasty improved life expectancy by 22% at 10 years. The superiority of orthotic therapy for OVF was seen only in short-term pain relief. Soft orthoses proved to be a viable alternative to rigid orthoses. Vertebroplasty within 3 weeks may be useful. There is no significant difference in clinical efficacy between PVP and BKP. Vertebroplasty improves life expectancy.
abstract_id: PUBMED:38171283
Utilization of Vertebroplasty/Kyphoplasty in the Management of Compression Fractures: National Trends and Predictors of Vertebroplasty/Kyphoplasty. Objective: The purpose of this study is to examine the utilization of kyphoplasty/vertebroplasty procedures in the management of compression fractures. With the growing elderly population and the associated increase in rates of osteoporosis, vertebral compression fractures have become a daily encounter for spine surgeons. However, there remains a lack of consensus on the optimal management of this patient population.
Methods: A retrospective analysis of 91 million longitudinally followed patients from 2016 to 2019 was performed using the PearlDiver Patient Claims Database. Patients with compression fractures were identified using International Classification of Disease, 10th Revision codes, and a subset of patients who received kyphoplasty/vertebroplasty were identified using Common Procedural Terminology codes. Baseline demographic and clinical data between groups were acquired. Multivariable regression analysis was performed to determine predictors of receiving kyphoplasty/vertebroplasty.
Results: A total of 348,457 patients with compression fractures were identified with 9.2% of patients receiving kyphoplasty/vertebroplasty as their initial treatment. Of these patients, 43.5% underwent additional kyphoplasty/vertebroplasty 30 days after initial intervention. Patients receiving kyphoplasty/vertebroplasty were significantly older (72.2 vs. 67.9, p < 0.05), female, obese, had active smoking status and had higher Elixhauser Comorbidity Index scores. Multivariable analysis demonstrated that female sex, smoking status, and obesity were the 3 strongest predictors of receiving kyphoplasty/vertebroplasty (odds ratio, 1.27, 1.24, and 1.14, respectively). The annual rate of kyphoplasty/vertebroplasty did not change significantly (range, 8%-11%).
Conclusion: The majority of vertebral compression fractures are managed nonoperatively. However, certain patient factors such as smoking status, obesity, female sex, older age, osteoporosis, and greater comorbidities are predictors of undergoing kyphoplasty/vertebroplasty.
abstract_id: PUBMED:20230352
Invited editorial presents an accurate summary of the results of two randomised placebo-controlled trials of vertebroplasty. Our recent editorial in the Journal presents an accurate summary of our two randomised trials of vertebroplasty, which found no benefit of vertebroplasty over placebo. Participants in both trials are representative of patients seen in clinical practice and who would qualify for government-subsidised funding of vertebroplasty in Australia. Clinical experience and previous published literature are likely to have overestimated the treatment benefit of vertebroplasty for many reasons. This is why randomised placebo-controlled trials are required to determine the efficacy of treatment interventions, particularly when the condition being treated is self-limiting and the primary end point is improvement of symptoms. Based on the best evidence currently available, the routine use of vertebroplasty outside of the research setting for painful osteoporotic vertebral fractures appears unjustified.
abstract_id: PUBMED:34193518
Vertebroplasty for osteoporotic vertebral fracture. Appropriate care of patients with a recent painful osteoporotic vertebral fracture (VF) requires immobilisation, analgesics and spinal orthoses. Some VFs, however, are responsible for disabling pain and prolonged bed rest. In this context, vertebroplasty techniques have been proposed, with a large benefit in case series and open-label randomised studies but a lack of efficacy in three of four double-blind randomised studies. The objectives of the treatment of a recent painful VF are to relieve pain and to preserve mechanical conditions. With this in mind, we report an experts' opinion paper on the indications for vertebroplasty and a research agenda for clinical studies.
Answer: The question of whether vertebroplasty offers benefits for osteoporotic vertebral fractures has been a subject of debate. A meta-analysis suggested that compared with nonoperative therapy, percutaneous vertebroplasty (PVP) reduced pain at all times studied and improved quality of life (QOL), without increasing the risk of new fractures (PUBMED:22729693). However, the same study found that pain scoring was similar between the PVP group and the sham injection group at 1 to 29 days and 90 days, indicating that the benefit might not be as clear when compared with a placebo procedure.
Recent literature has also been mixed. Some studies have shown that vertebroplasty does not offer significant benefits over placebo (PUBMED:20230352), while others have suggested that vertebroplasty can be beneficial, especially when performed early, within 3 weeks of fracture onset (PUBMED:36656324). It has been proposed that patient selection and timing of the procedure are critical factors in determining the success of vertebroplasty (PUBMED:20523998).
Complications associated with vertebroplasty, such as cement leakage, can occur, but the procedure is generally considered safe when performed with meticulous technique (PUBMED:23326986). The procedure has been widely accepted as a minimally invasive option for treating painful vertebral body compression fractures that are refractory to medical therapy (PUBMED:24436552).
Furthermore, a short review comparing orthosis, PVP, and balloon kyphoplasty for the management of osteoporotic vertebral fractures found that vertebroplasty may be useful when performed within 3 weeks and that it does not differ significantly in clinical efficacy from balloon kyphoplasty (PUBMED:38171282). National trends indicate that certain patient factors, such as smoking status, obesity, female sex, older age, osteoporosis, and greater comorbidities, are predictors of undergoing vertebroplasty/kyphoplasty (PUBMED:38171283).
In summary, the benefit of vertebroplasty for osteoporotic vertebral fractures is not entirely dismissed. While some studies have shown no significant difference compared to placebo, others have highlighted the importance of early intervention and appropriate patient selection for achieving pain relief and improved quality of life. The procedure is generally safe, but like any medical intervention, it carries some risk of complications. The evidence suggests that vertebroplasty may offer benefits, particularly when performed early in the course of the fracture and in carefully selected patients. It is important to consider individual patient factors and to manage expectations regarding the potential outcomes of the procedure (PUBMED:24436552, PUBMED:38171282).
Experts have called for further research to clarify the indications for vertebroplasty and to establish a more definitive understanding of its role in the treatment of recent painful osteoporotic vertebral fractures (PUBMED:34193518). Given the mixed results from various studies, the decision to use vertebroplasty should be made on a case-by-case basis, taking into account the specific clinical scenario, the timing of the intervention, and the patient's overall health status and preferences. |
Instruction: Point-of-care (POCT) prothrombin time monitors: is a periodical control of their performance useful?
Abstracts:
abstract_id: PUBMED:18814903
Point-of-care (POCT) prothrombin time monitors: is a periodical control of their performance useful? Introduction: Point-of-care testing (POCT) prothrombin time monitors are now widely used to monitor oral anticoagulant treatment. Although portable coagulometers are extremely easy to use, checking the quality of their performance presents some difficulties.
Materials And Methods: The aims of this study were to investigate on a quarterly basis the performance of 95 CoaguChek S monitors assigned to 99 anticoagulated patients at home. This was done by checking the monitors against a reference coagulometer in the laboratory at our Thrombosis Centre (TC). The other aims were to carry out an external quality assessment employing different sets of INR certified plasmas with 5 different ranges of anticoagulation and to assess the performance of the different lots of strips employed by the patients during the study.
Results: No difference between the PT INR obtained with both the systems at the first quarterly check was noted but a significant difference was found when the two systems were compared at the second and third quarterly checks. The Bland-Altman test showed increased disagreement between the first and the third controls. The percentage of INR values that showed a difference of more or less than 0.5 INR units in the PT values performed with both the systems was: 1.0% (first control), 7.5% (second control) and 11.5% (third control) (Chi-Square: 8.315, p=0.0156). Lots with differences higher than 10% in terms of +/- 0.5 INR Units at the first, second and third controls were 16%, 20.8% and 61%, respectively. Seven monitors (7.3%) failed to test one or two of the INR certified plasmas of one set but performed well using a second set of plasmas. Three monitors (3.1%) failed to test two sets of plasmas but performed well using a different lot of strips (from 279A to 483A). One monitor (1%) gave unsatisfactory results with different sets of plasmas and strips. All the other PT INR obtained with the monitors fell well within the different ranges of the INR certified plasmas.
Conclusions: Anticoagulated patients performing self-testing or self-management should periodically bring their portable coagulometer to a reference Thrombosis Centre, especially when the lot of strips has to be changed. The role of the Thrombosis Centre therefore appears crucial in this regard.
abstract_id: PUBMED:11369346
Accuracy and precision of point-of-care testing for glucose and prothrombin time at the critical care units. The use of point-of-care testing (POCT) in critical care patient units has continued to increase since the 1980s. This increase is due to the need for prompt therapeutic interventions that may impact mortality and morbidity, and reduce the overall cost of healthcare for critically ill patients. The diagnostic manufacturing industry has risen to this challenge by introducing portable and/or handheld analyzers for use at the point-of-care. In order to ensure the public safety in the USA, the Food and Drug Administration (FDA) must approve the use of each POCT analyzer. The FDA approval is based on established performance criteria that includes relative accuracy and precision documentation. This study evaluated the precision and accuracy of the POCT prothrombin time and glucose analyzers relative to the manufacturers' specifications, to the internal QC in the main laboratory, and to the results of the external proficiency-testing program. The QC for the prothrombin time had a precision that ranged from 2.84% to 3.45% (POCT) and from 1.27-1.66% (main laboratory). The precision for the glucose QC ranged from 5% to 5.2% (POCT) and 0.9-2.7% (main laboratory). Using the results of the external proficiency testing, the inter-laboratory CV% for the POCT prothrombin time ranged from 3.5% to 5.0% and the main laboratory had a range of 2.5-2.9%. The inter-laboratory CV% ranges for glucose POCT and the main laboratory were 4.9-10.6% and 1.8-3.5%, respectively. The main laboratory analyzers proved to be more accurate than the POCT analyzers as indicated by comparison to the mean prothrombin time and glucose results of all participating laboratories in the proficiency testing program.
abstract_id: PUBMED:33157410
Fully printed prothrombin time sensor for point-of-care testing. With an increasing number of patients relying on blood thinners to treat medical conditions, there is a rising need for rapid, low-cost, portable testing of blood coagulation time or prothrombin time (PT). Current methods for measuring PT require regular visits to outpatient clinics, which is cumbersome and time-consuming, decreasing patient quality of life. In this work, we developed a handheld point-of-care test (POCT) to measure PT using electrical transduction. Low-cost PT sensors were fully printed using an aerosol jet printer and conductive inks of Ag nanoparticles, Ag nanowires, and carbon nanotubes. Using benchtop control electronics to test this impedance-based biosensor, it was found that the capacitive nature of blood obscures the clotting response at frequencies below 10 kHz, leading to an optimized operating frequency of 15 kHz. When printed on polyimide, the PT sensor exhibited no variation in the measured clotting time, even when flexed to a 35 mm bend radius. In addition, consistent PT measurements for both chicken and human blood illustrate the versatility of these printed biosensors under disparate operating conditions, where chicken blood clots within 30 min and anticoagulated human blood clots within 20-100 s. Finally, a low-cost, handheld POCT was developed to measure PT for human blood, yielding 70% lower noise compared to measurement with a commercial potentiostat. This POCT with printed PT sensors has the potential to dramatically improve the quality of life for patients on blood thinners and, in the long term, could be incorporated into a fully flexible and wearable sensing platform.
abstract_id: PUBMED:32287536
Point-of-care testing (POCT): Current techniques and future perspectives. Point-of-care testing (POCT) is a laboratory-medicine discipline that is evolving rapidly in analytical scope and clinical application. In this review, we first describe the state of the art of medical-laboratory tests that can be performed near the patient. At present, POCT ranges from basic blood-glucose measurement to complex viscoelastic coagulation assays. POCT shortens the time to clinical decision-making about additional testing or therapy, as delays are no longer caused by transport and preparation of clinical samples, and biochemical-test results are rapidly available at the point of care. Improved medical outcome and lower costs may ensue. Recent, evolving technological advances enable the development of novel POCT instruments. We review the underlying analytical techniques. If new instruments are not yet in practical use, it is often hard to decide whether the underlying analytical principle has real advantage over former methods. However, future utilization of POCT also depends on health-care trends and new areas of application. But, even today, it can be assumed that, for certain applications, near-patient testing is a useful complement to conventional laboratory analyses.
abstract_id: PUBMED:22996982
Point-of-care test (POCT) INR: hope or illusion? In the last decade, point-of-care tests were developed to provide rapid generation of test results. These tests have increasingly broad applications. In the area of hemostasis, the international normalized ratio point-of-care test (POCT INR) is the main test of this new approach. This test has great potential benefit in situations where quick INR results influence clinical decision making, as in acute ischemic stroke, before surgical procedures and during cardiac surgery. The INR POCT has the potential to be used for self-monitoring of oral anticoagulation in patients under anticoagulant therapy. However, the precision and accuracy of INR POCT still need to be enhanced to increase the effectiveness and efficiency of the test. Additionally, the RDC / ANVISA Number 302 makes clear that POCT testing must be supervised by the technical manager of the Clinical Laboratory in the pre-analytical, analytical and post-analytical phases. In practice, the Clinical Laboratory does not participate in the implementation of POCT testing or the release of the results. Clinicians have high expectations for the incorporation of INR POCT in clinical practice, despite the limitations of this method. These professionals are willing to train the patient to perform the test, but are not legally responsible for its quality and are not prepared for the maintenance of equipment. Responsibility for the test must be clearly defined to ensure quality control.
abstract_id: PUBMED:37169439
Practical Challenges of Point-of-Care Testing. The practical challenges of point-of-care testing (POCT) include analytical performance and quality compared with testing performed in a central laboratory and higher cost per test compared with laboratory-based tests. These challenges can be addressed with new test technology, consensus, and practice guidelines for the use of POCT, instituting a quality management system and data connectivity in the POCT setting, and studies that demonstrate evidence of clinical and economic value of POCT.
abstract_id: PUBMED:24838333
Preparation of control blood for external quality assessment of point-of-care international normalized ratio testing in the Netherlands. Objectives: The aim of this study was to prepare control blood for an external quality assessment scheme (EQAS) for international normalized ratio (INR) point-of-care testing (POCT) in the Netherlands and to assess the performance of the participants.
Methods: Control blood was prepared from dialyzed pooled patient plasma and washed human erythrocytes. Samples of control blood were mailed to participants of the Netherlands EQAS from October 2006 through December 2012.
Results: Most participants used CoaguChek XS (Roche Diagnostics, Mannheim, Germany) devices for POCT. The median between-center coefficient of variation (CV) of the reported INR decreased from 4.5% in 2006 to 2.6% in 2012. A few participants used the ProTime Microcoagulation System (ITC, Edison, NJ) for POCT. The median CV (per year) of the INR with the latter system was 7.0% to 10.6%.
Conclusions: The control blood samples were useful for external quality assessment in the Netherlands. The participants' performance with the CoaguChek XS system improved with time, demonstrating the value of external quality assessment.
abstract_id: PUBMED:33403928
Organisation and quality monitoring for point-of-care testing (POCT) in Belgium: proposal for an expansion of the legal framework for POCT into primary health care. Background: There is a trend towards decentralisation of laboratory tests by means of Point-of-Care testing (POCT). Within hospitals, Belgian law requires a POCT policy, coordinated by the clinical laboratory. There is however no legal framework for POCT performed outside the hospital: no reimbursement, no compulsory quality monitoring and no limits nor control on the prices charged to the patient. Uncontrolled use of POCT can have negative consequences for individual and public health.
Proposal: We propose that POCT outside hospitals would only be reimbursed for tests carried out within a legal framework, requiring evidence-based testing and collaboration with a clinical laboratory, because clinical laboratories have procedures for test validation and quality monitoring, are equipped for electronic data transfer, are familiar with logistical processes, can provide support when technical issues arise and can organise and certify training. Under these conditions the government investment will be offset by health benefits, e.g. fall in antibiotic consumption with POCT for CRP in primary care, quick response to SARS-CoV2-positive cases in COVID-19 triage centres.
Priorities: 1° extension of the Belgian decree on certification of clinical laboratories to decentralised tests in primary care; 2° introduction of a separate reimbursement category for POCT; 3° introduction of reimbursement for a limited number of specified POCT; 4° setup of a Multidisciplinary POCT Advisory Council, the purpose of which is to draw up a model for reimbursement of POCT, to select tests eligible for reimbursement and to make proposals to the National Institute for Health and Disability Insurance (RIZIV/INAMI).
abstract_id: PUBMED:27501250
Validation of a point-of-care prothrombin time test after cardiopulmonary bypass in cardiac surgery. Point-of-care coagulation monitoring can be used for the guidance of haemostasis management. However, the influence of time on point-of-care prothrombin time testing following protamine administration after cardiopulmonary bypass has not been investigated. Bland-Altman and error grid analysis were used to analyse the level of agreement between prothrombin time measurements from point-of-care and laboratory tests before cardiopulmonary bypass, and then 3 min, 6 min and 10 min after protamine administration. Prothrombin times were expressed as International Normalised Ratios. While the point-of-care and laboratory prothrombin time measurements showed a high level of agreement before bypass, this agreement deteriorated following protamine administration to a mean (SD) bias of -0.22 (0.13) [limits of agreement -0.48 to 0.04]. Error grid analysis revealed that 35 (70%) of the paired values showed a clinically relevant discrepancy in international normalised ratio. At 3 min, 6 min and 10 min after cardiopulmonary bypass there is a clinically unacceptable discrepancy between the point-of-care and laboratory measurement of prothrombin time.
abstract_id: PUBMED:24703296
Level of agreement between laboratory and point-of-care prothrombin time before and after cardiopulmonary bypass in cardiac surgery. Background: Hemostasis monitoring in cardiac surgery could benefit from an easy-to-use and fast point-of-care coagulation monitor, since routine laboratory tests have a delay of 30-45 minutes. This study investigated the level of agreement between the point-of-care prothrombin time (PT) and the central laboratory PT before and after cardiopulmonary bypass.
Methods: Bland-Altman and error grid analysis were used to analyze the agreement between the point-of-care CoaguChek XS Pro device (POC-PT) and the central laboratory prothrombin time (LAB-PT) before cardiopulmonary bypass (CPB) and 3 minutes after protamine administration. Prothrombin times were expressed in international normalized ratios (INR).
Results: The average POC-PT and LAB-PT values of 73 patients were 1.06±0.14 and 1.09±0.13 (P=0.10) before CPB. POC-PT measurements before CPB showed a good agreement with the LAB-PT, with a bias of -0.02±0.07 INR and 94% of the values falling within the clinically acceptable zone of the error grid analysis. The mean POC-PT 3 minutes after protamine administration was significantly lower than the LAB-PT (1.35±0.12 vs. 1.70±0.18; P<0.001). The PT at 3 minutes after protamine administration showed a bias of 0.36±0.14, and 82% of the values were located outside of the clinically acceptable zone in the error grid analysis.
Conclusions: Point-of-care prothrombin time testing was in concordance with conventional laboratory PT prior to cardiopulmonary bypass. At 3 minutes following protamine administration, PT values of the point-of-care device were consistently lower than the laboratory PT values, leading to a disagreement between both tests at that time point.
Answer: Yes, periodic control of the performance of point-of-care (POCT) prothrombin time monitors is useful. A study that investigated the performance of 95 CoaguChek S monitors assigned to anticoagulated patients at home found significant differences between the monitors and a reference coagulometer at the second and third quarterly checks. The disagreement between the two systems increased over time, with a growing percentage of INR values showing a difference of more or less than 0.5 INR units. This suggests that anticoagulated patients in self-testing or self-management should periodically bring their portable coagulometer to a reference Thrombosis Centre, especially when the lot of strips is changed, to ensure accurate performance (PUBMED:18814903).
Additionally, another study highlighted the importance of external quality assessment in improving the performance of POCT devices over time, as demonstrated by a decrease in the median between-center coefficient of variation (CV) for INR measurements from 4.5% in 2006 to 2.6% in 2012 among participants using the CoaguChek XS system (PUBMED:24838333).
Moreover, the practical challenges of POCT, such as analytical performance and quality compared with central laboratory testing, can be addressed by instituting a quality management system and data connectivity in the POCT setting. Studies that demonstrate the clinical and economic value of POCT are also important for addressing these challenges (PUBMED:37169439).
In summary, periodic control of POCT prothrombin time monitors is beneficial for maintaining accuracy and reliability, which is crucial for patient safety and effective anticoagulation management. |
Instruction: Treatment of chemotherapy-induced anemia in ovarian cancer patients: does the use of erythropoiesis-stimulating agents worsen survival?
Abstracts:
abstract_id: PUBMED:22552832
Treatment of chemotherapy-induced anemia in ovarian cancer patients: does the use of erythropoiesis-stimulating agents worsen survival? Objective: Considering the paucity of data relating erythropoiesis-stimulating agent (ESA) use to ovarian cancer survival, our objective was to evaluate the effect of ESA as used for the treatment of chemotherapy-induced anemia (CIA) on survival in ovarian cancer patients.
Materials And Methods: A multi-institution retrospective chart review was performed on ovarian cancer patients. Data collection included patient demographic, surgicopathologic, chemotherapy, ESA, and survival data. Patients were stratified by ever-use of ESA and were compared using appropriate statistical methods.
Results: A total of 581 patients were eligible for analysis with 39% (n = 229) patients with ever-use of ESA (ESA-YES) and 61% (n = 352) never-use ESA (ESA-NO). Mean age was 60.4 years with most patients having stage IIIC (60%) of papillary serous histological diagnosis (64%) with an optimal cytoreduction (67%). Median follow-up for the cohort was 27 months. Both ESA-YES and ESA-NO groups were similar regarding age, body mass index, race, stage, histological diagnosis, and debulking status. Compared with the ESA-NO group, ESA-YES patients were significantly more likely to experience recurrence (56% vs 80%, P < 0.001) and death (46% vs 59%, P = 0.002). Kaplan-Meier curves demonstrated a significant reduction in progression-free survival for ESA-YES patients (16 vs 24 months, P < 0.001); however, overall survival was statistically similar between the 2 groups (38 vs 46 months, P = 0.10). When stratifying by ever experiencing a CIA, ESA-YES patients demonstrated a significantly worse progression-free survival (17 vs 24 months, P = 0.02) and overall survival (37 vs 146 months, P < 0.001).
Conclusions: Our data evaluating the use of ESA as a treatment of CIA in ovarian cancer patients are similar to reports in other tumor sites. Considering that patients who used ESA were more likely to experience recurrence and death and to have decreased survival, the use of ESA in ovarian cancer patients should be limited.
abstract_id: PUBMED:23266649
The effect of the APPRISE mandate on use of erythropoiesis-stimulating agents and transfusion rates in patients with ovarian cancer receiving chemotherapy. Objective: Erythropoiesis-stimulating agents (ESAs) are used to manage chemotherapy-induced anemia in patients with epithelial ovarian cancer (EOC). In response to research demonstrating that ESAs increase tumor growth and shorten survival, the Food and Drug Administration mandated the new APPRISE (Assisting Providers and Cancer Patients with Risk Information for the Safe use of ESAs) guidelines for consenting patients before ESA administration. We sought to quantify the change in ESA and red blood cell (RBC) transfusion use after the APPRISE mandate was instituted.
Methods/materials: After institutional review board approval, a retrospective chart review compared patients with EOC undergoing chemotherapy before and after the APPRISE mandate. Abstracted data included patient demographics, chemotherapy treatment status and regimen, and number of patients requiring ESAs or RBCs. A cost savings analysis was also performed.
Results: Eighty-four patients who underwent 367 cycles of chemotherapy after the APPRISE guidelines were compared with a matched set of 88 patients receiving 613 cycles of chemotherapy before the APPRISE guidelines. There were no statistically significant differences between the groups. Most patients had advanced stage disease and received primary taxane-/platinum-based chemotherapy. Of 88 patients, 45 (51%) in the pre-APPRISE group received a total of 196 ESA injections compared with 0 patients in the post-APPRISE group. Red blood cell transfusion in the post-APPRISE group was similar to that in the pre-APPRISE group (8.3% vs 14.8%, P = 0.28). Omission of ESAs in the post-APPRISE group resulted in a roughly $700,000 savings in billable charges.
Conclusions: In our institution, the APPRISE guidelines have resulted in complete cessation of the use of ESAs in patients with primary or recurrent EOC, resulting in considerable cost savings. Importantly, RBC transfusion rates did not significantly increase after the guidelines were imposed.
abstract_id: PUBMED:21381011
The use of recombinant erythropoietin for the treatment of chemotherapy-induced anemia in patients with ovarian cancer does not affect progression-free or overall survival. Background: Studies have suggested that erythropoietin-stimulating agents (ESAs) may affect progression-free survival (PFS) and overall survival (OS) in a variety of cancer types. Because this finding had not been explored previously in ovarian or primary peritoneal carcinoma, the authors of this report analyzed their ovarian cancer population to determine whether ESA treatment for chemotherapy-induced anemia affected PFS or OS.
Methods: A retrospective review was conducted of women who were treated for ovarian cancer at the corresponding author's institution over a 10-year period (from January 1994 to May 2004). Treatment groups were formed based on the use of an ESA. Two analyses of survival were conducted to determine the effect of ESA therapy on PFS and OS. Disease status was modeled as a function of treatment group using a logistic regression model. Kaplan-Meier curves were generated to compare the groups, and a Cox proportional hazards model was fit to the data.
Results: In total, 343 women were identified. The median age was 57 (interquartile range, 48-68 years). The majority of women were Caucasian (n = 255; 74%) and were diagnosed with stage III (n = 210; 61%), epithelial (n = 268; 78%) ovarian cancer. Although the disease stage at diagnosis and surgical staging significantly affected the rates of disease recurrence and OS, the receipt of an ESA had no effect on PFS (P = .9) or OS (P = .25).
Conclusions: The current results indicated that there was no difference in cancer-related PFS or OS with use of ESA in this cohort of women treated for ovarian cancer.
abstract_id: PUBMED:22123713
Does severe anemia caused by dose-dense paclitaxel-Carboplatin combination therapy have an effect on the survival of patients with epithelial ovarian cancer? Retrospective analysis of the Japanese gynecologic oncology group 3016 trial. Introduction: To evaluate the incidence of anemia in patients with epithelial ovarian cancer receiving paclitaxel-carboplatin combination therapy (TC) using data from the Japanese Gynecologic Oncology Group (JGOG) 3016 trial, and to examine the effect of severe anemia on survival during dose-dense TC.
Methods: Retrospective analysis was conducted in patients enrolled in the JGOG 3016 trial who underwent at least one cycle of the protocol therapy (n = 622). Hemoglobin values at enrollment and during each cycle of TC were collected. One-to-one matching was performed between patients with and patients without grade 3/4 anemia during TC (anemia and nonanemia groups) to adjust the baseline characteristics of the patients. The cumulative survival curve and median progression-free survival were estimated using the Kaplan-Meier method.
Results: Grades 2 to 4 anemia was observed in 19.8% of patients before first-line TC. The incidence of grade 3/4 anemia rapidly increased to 56.1% after the fourth cycle of dose-dense TC. After matching, the median progression-free survival in the anemia (hemoglobin <8.0 g/dL) and nonanemia (hemoglobin >8.0 g/dL) groups was 777 and 1100 days, respectively (P = 0.3493) for patients receiving dose-dense TC. The median progression-free survival in patients receiving conventional TC was similar between the 2 groups.
Conclusions: The difference in progression-free survival between patients with epithelial ovarian cancer with and those without severe anemia during TC was not statistically significant, but for patients receiving dose-dense TC, severe anemia seems to have prognostic relevance. Prospective trials are needed to investigate whether the optimal management of chemotherapy-induced anemia, including appropriate use of erythropoiesis-stimulating agents, would further improve the survival of patients with ovarian cancer receiving dose-dense TC.
abstract_id: PUBMED:10511589
Chemotherapy-induced anemia in adults: incidence and treatment. Anemia is a common complication of myelosuppressive chemotherapy that results in a decreased functional capacity and quality of life (QOL) for cancer patients. Severe anemia is treated with red blood cell transfusions, but mild-to-moderate anemia in patients receiving chemotherapy has traditionally been managed conservatively on the basis of the perception that it was clinically unimportant. This practice has been reflected in the relative inattention to standardized and complete reporting of all degrees of chemotherapy-induced anemia. We undertook a comprehensive review of published chemotherapy trials of the most common single agents and combination chemotherapy regimens, including the new generation of chemotherapeutic agents, used in the treatment of the major nonmyeloid malignancies in adults to characterize and to document the incidence and severity of chemotherapy-induced anemia. Despite identified limitations in the grading and reporting of treatment-related anemia, the results confirm a relatively high incidence of mild-to-moderate anemia. Recent advances in assessing the relationships of anemia, fatigue, and QOL in cancer patients are providing new insights into these closely related factors. Clinical data are emerging that suggest that mild-to-moderate chemotherapy-induced anemia results in a perceptible reduction in a patient's energy level and QOL. Future research may lead to new classifications of chemotherapy-induced anemia that can guide therapeutic interventions on the basis of outcomes and hemoglobin levels. Perceptions by oncologists and patients that lesser degrees of anemia must be endured without treatment may be overcome as greater emphasis is placed on the QOL of the oncology patient and as research provides further insights into the relationships between hemoglobin levels, patient well-being, and symptoms.
abstract_id: PUBMED:19421679
Utilisation review of epoetin alfa in cancer patients at a cancer centre in Singapore. Introduction: Recombinant erythropoietin-stimulating agents have been used to ameliorate the symptoms of anaemia in cancer patients. However, there have been concerns about an increased risk of thromboembolic events and mortality. This study reviews the usage of epoetin alfa in treating chemotherapy-induced anaemia at the National Cancer Centre Singapore (NCCS), as well as the prescribing and monitoring practices employed.
Methods: Cancer patients who have received at least one dose of epoetin alfa at the NCCS between January 1, 2005 and October 15, 2007 were included in this study.
Results: A total of 121 patients were identified and 91 patients were eligible for data collection. The most common malignancies were breast cancer (30.8 percent) and ovarian cancer (15.4 percent). Over 90 percent of the patients were receiving either chemotherapy or radiotherapy when epoetin alfa was initiated. Epoetin alfa was initiated at a median haemoglobin level of 8.7 (range 7-14.3) g/dL. Approximately 41.8 percent of the patients had a positive response after the initiation of epoetin alfa. Baseline iron studies were performed in 12.1 percent of the patients. Blood pressure was uncontrolled, according to the Singapore Ministry of Health Hypertension guideline, in a substantial number of patients (32.6 percent) prior to the initiation of epoetin alfa. There were no documented thromboembolic events.
Conclusion: This study identified a broad range of practices in the utilisation of epoetin alfa at NCCS, which may explain the variable patient response to epoetin alfa. The results of this study will be used to improve the management of chemotherapy-induced anaemia at the institution.
abstract_id: PUBMED:9785333
Recombinant human erythropoietin in the treatment of cancer-related or chemotherapy-induced anaemia in patients with solid tumours. Patients with cancer frequently develop anaemia, due either to the malignant disease itself or to its treatment. Various factors, including the type of malignancy and the type and intensity of chemotherapy, influence the prevalence of anaemia and the need for transfusions. Among patients with solid tumours, those with lung cancer and ovarian cancer are reported to have the highest frequency of anaemia (52% and 51%, respectively) and the highest rate of transfusion requirements (28% and 25%, respectively). Patients with a low level of haemoglobin (Hb) (10-12 g/dl) at the start of chemotherapy are particularly at risk of developing anaemia and requiring transfusions. Similarly, patients treated with platinum-based regimens more often develop anaemia and need transfusions. The frequency of transfusion requirements in these patients can amount to 47%-100%, depending on the cumulative dose of platinum and other risk factors, such as advanced age, loss of body weight before treatment, advanced disease stage, and particularly a low primary level of Hb (11 g/dl) and a decrease in Hb level (1-2 g/dl) after the first cycle of treatment. The causative mechanism of platinum-induced anaemia is reported to be, beside myelosuppression, a deficient production of erythropoietin (EPO) resulting from drug-induced renal tubular damage. In a number of randomised and nonrandomised studies, recombinant human (rh) EPO has been shown to be effective in the treatment of cancer-related anaemia (CRA) and in the prevention and treatment of chemotherapy-induced anaemia. An appropriate dose of rhEPO for the start of treatment is 150 U/kg given subcutaneously three times per week (t.i.w.). The response rate of anaemia ranges from 40% to 85%. rhEPO is well tolerated, but the cost of treatment requires patient selection and parameters predicting response as early as possible after the start of treatment. Appropriate groups of patients for treatment with rhEPO are those with an Hb level of < 10 g/dl and those with a higher Hb level, but symptomatic anaemia. Other groups are patients who are going to receive chemotherapy and have a low primary level of Hb (10-12 g/dl) and patients who receive platinum-based chemotherapy and have experienced a marked decrease in their Hb level (1-2 g/dl) from baseline to the second cycle of treatment. These patients have a high risk of becoming anaemic and requiring transfusions during chemotherapy. In anaemic cancer patients treated with rhEPO, an early indicator of response is an increase in Hb level of at least 0.5 g/dl in patients not receiving chemotherapy and 1 g/dl in those receiving chemotherapy, combined with an increase in reticulocyte count of at least 40,000 cells/microliter after 2 weeks of treatment in the first group of patients and after 4 weeks in the second.
abstract_id: PUBMED:14871172
Epoetin Beta: a review of its clinical use in the treatment of anaemia in patients with cancer. Unlabelled: Epoetin beta (NeoRecormon) is a recombinant form of erythropoietin. It increases reticulocyte counts, haemoglobin (Hb) levels and haematocrit. Epoetin beta administered subcutaneously once weekly corrected anaemia and had equivalent efficacy to that of epoetin beta administered three times weekly in patients with haematological malignancies. Subcutaneous epoetin beta reduced transfusion requirements and increased Hb levels versus no treatment in patients with solid tumours and chemotherapy-induced anaemia in nonblind, randomised trials. Anaemia and quality of life were also improved, and blood transfusion requirements were reduced to a significantly greater extent than placebo or no treatment (with supportive blood transfusion) in patients with haematological malignancies. Most patients were receiving chemotherapy. Subcutaneous epoetin beta was well tolerated by patients with cancer; adverse events with the drug occurred with a similar incidence to those with placebo or no treatment (with supportive blood transfusion). Hypertension was relatively uncommon with epoetin beta in clinical trials. Patients with haematological malignancies and a baseline platelet count > or =100 x 10(9)/L, Hb levels of > or =9 g/dL or lower erythropoietin levels have demonstrated better responses to epoetin beta than other patients in clinical trials. However, neither baseline erythropoietin level nor the observed to predicted ratio of erythropoietin levels correlated with the response to epoetin beta in patients with solid tumours and chemotherapy-induced anaemia. A decrease of <1 g/dL or an increase in Hb with epoetin beta during the first chemotherapy cycle indicated a low transfusion need in subsequent cycles in patients with ovarian carcinoma. In general, the efficacy of epoetin beta is not limited by tumour type. Response to the drug occurred irrespective of the nature (platinum- or nonplatinum-based) or presence of chemotherapy treatment in randomised trials.
Conclusion: Epoetin beta has shown efficacy in the management of cancer-related anaemia in patients with haematological malignancies and of chemotherapy-induced anaemia in patients with solid tumours. Once-weekly administration provides added convenience for patients and may be cost saving, although additional research into the potential pharmacoeconomic benefits of this regimen are required. The drug is well tolerated in patients with cancer and is associated with little injection-site pain when administered subcutaneously. Epoetin beta is an important option in the prevention of chemotherapy-induced anaemia, and a valid and valuable alternative to blood transfusion therapy for the treatment of cancer-related or chemotherapy-induced anaemia.
abstract_id: PUBMED:17984024
Frequency of blood transfusions for anemia caused by chemotherapy in ovarian cancer patients. Thoughts about the guidelines modified in 2006 for the treatment of anemic cancer patients by the European Organisation for Research and Treatment of Cancer. Introduction: In 2006, the European Organisation for Research and Treatment of Cancer distributed modified guidelines for blood transfusions and erythropoietic drugs in the treatment of cancer anemia. According to this document, blood transfusions are indicated at a hemoglobin level of 9 g/dL. Until then, a definitive threshold for applying blood transfusion in chemotherapy-induced anemia had not been determined in Hungary.
Aim: The authors evaluated their practice in the treatment of anemia with blood transfusions in ovarian cancer patients treated in 2005. In the absence of international or domestic guidelines, and considering the clinical status of the patients, the authors applied blood transfusions at a hemoglobin level of 10 g/dL.
Material And Methods: 190 epithelial ovarian cancer patients were given chemotherapy in the Gynecological Department at the National Institute of Oncology, Hungary. Packed red blood cell transfusion selected for the patient was administered if the hemoglobin fell below 10 g/dL, and most of these patients (51/64 = 79.6%) were also given erythropoietic drugs.
Results: Blood transfusion was given to 64 of the 190 (34%) patients receiving chemotherapy, and almost half of these patients (34/64 = 53%) were transfused more than once. In 86% of patients, blood transfusion was given for grade 2 anemia. The highest rate of blood transfusion among the different types of chemotherapy (16/16) occurred in patients receiving combined therapy with gemcitabine and carboplatin.
Conclusion: Chemotherapy for ovarian cancer causes severe anemia (hemoglobin level < 10 g/dL) in one third of patients. Besides blood transfusions, physicians must also consider the need for erythropoietic drugs.
abstract_id: PUBMED:17250880
Incidence of symptomatic deep venous thrombosis with epoetin alfa or darbepoetin alfa treatment of anemia in patients with ovarian or primary peritoneal cancer. Objectives: Recombinant epoetin alfa and darbepoetin alfa (r-HuEPO) have been shown to be safe and effective treatments for anemia, but recent reports have suggested an increased risk of thromboembolic events when these agents are used to treat chemotherapy-induced anemia among patients with breast or ovarian cancer. We examined the possible risk of such events among patients with ovarian or primary peritoneal carcinomas and chemotherapy-induced anemia.
Methods: We retrospectively analyzed data over 10 years from women at one hospital with ovarian or primary peritoneal carcinoma and chemotherapy-induced anemia. The incidence and odds ratio for development of deep venous thrombosis, unadjusted and adjusted for baseline differences and risk factors, was assessed between patients who had received r-HuEPO versus no treatment for anemia.
Results: Of the 364 women, 90 had received r-HuEPO and 253 had not. The incidence of deep venous thrombosis was 6.7% in the group that had received r-HuEPO and 5.1% in the group that had not (unadjusted odds ratio, 1.31; 95% confidence interval [CI], 0.48-3.55). After adjustment for differences in age, body-mass index, prior thromboembolic disease or cancer, and tobacco use, the odds ratio for developing deep venous thrombosis with the use of r-HuEPO was 1.35 (95% CI, 0.49-3.75).
Conclusions: The use of r-HuEPO was not associated with an increased risk of deep venous thrombosis in this population. A randomized trial is needed to further explore this issue and to detail the safety and efficacy of these agents in patients with various other cancers.
Answer: The use of erythropoiesis-stimulating agents (ESAs) for the treatment of chemotherapy-induced anemia (CIA) in ovarian cancer patients has been a subject of investigation due to concerns about their impact on survival. According to a multi-institution retrospective chart review, ESA use in ovarian cancer patients was associated with a higher likelihood of recurrence and death, as well as decreased progression-free survival (PFS) and overall survival (OS) when compared to patients who did not use ESAs (PUBMED:22552832). However, another retrospective review found no effect of ESA therapy on PFS or OS in women treated for ovarian cancer (PUBMED:21381011). This suggests that the impact of ESAs on survival may vary and is not definitively negative across all studies.
The APPRISE (Assisting Providers and Cancer Patients with Risk Information for the Safe use of ESAs) guidelines were introduced in response to research indicating that ESAs might increase tumor growth and shorten survival. Following the implementation of these guidelines, one institution reported a complete cessation of ESA use in patients with primary or recurrent epithelial ovarian cancer (EOC), without a significant increase in red blood cell (RBC) transfusion rates, and with considerable cost savings (PUBMED:23266649).
A retrospective analysis of the Japanese Gynecologic Oncology Group 3016 trial suggested that severe anemia during dose-dense paclitaxel-carboplatin combination therapy might have prognostic relevance, although the difference in PFS between patients with and without severe anemia was not statistically significant (PUBMED:22123713).
Overall, while some studies suggest that ESA use could be associated with worse survival outcomes in ovarian cancer patients, other studies do not find a significant impact on survival. The decision to use ESAs should be made cautiously, considering the potential risks and benefits, and in line with current guidelines and individual patient circumstances. |
Instruction: Are routine chest radiographs needed after fluoroscopically guided percutaneous insertion of central venous catheters in children?
Abstracts:
abstract_id: PUBMED:18280286
Are routine chest radiographs needed after fluoroscopically guided percutaneous insertion of central venous catheters in children? Unlabelled: Current guidelines for children still mandate routine postprocedural chest x-ray to confirm placement and detect complications. This is in spite of the risk of unnecessary exposure to radiation, the additional stress to children and their parents, and the cost of this practice. We studied the impact and cost-effectiveness of this practice on the management of children after percutaneous fluoroscopically guided central venous catheter (CVC) insertions.
Methods: A retrospective review of children who underwent percutaneous fluoroscopically guided CVC insertions between January 2000 and December 2005. Only patients with reported postprocedural radiographs in the electronic database were included, and we referred to the medical notes when the report indicated a complication.
Results: Two hundred eighty consecutive patients aged between 4 and 16 years were identified. Two hundred seventy-eight (99.3%) of the reports indicated absence of complications, whereas only 2 reports (0.7%) indicated any form of complications. Of the 2 complications detected, 1 was an asymptomatic pneumothorax, and the other was a slight kink in the line; on review of the medical notes, both lines were fully functional and neither required treatment.
Conclusion: After percutaneous fluoroscopically guided CVC insertions and in the absence of clinical indications, the use of routine postprocedural radiographs in children cannot be justified and is not cost-effective.
abstract_id: PUBMED:37046386
Fluoroscopically guided repositioning of peripherally inserted central catheter in patients failing ultrasound-guided-only placement: a retrospective five-year study. Background: Ultrasound (US)-guided-only insertion at the bedside is safe and improves the success rates of peripherally inserted central catheter (PICC). However, PICC insertion procedures remain challenging for special cases.
Purpose: To show that fluoroscopically guided tip repositioning, for failed US-guided PICC placement, safely led to satisfactory positioning in difficult cases and, importantly, improved success rates of PICC placements.
Material And Methods: A retrospective study of 1560 patients who underwent US-guided PICC placement were performed. Patients who failed US-guided PICC placement were transferred to the interventional radiology department for fluoroscopically guided tip repositioning. Baseline characteristics as well as insertion-related factors were collected. All data were analyzed using SPSS software.
Results: In total, 37 (2.4%) patients who failed US-guided PICC placement accepted fluoroscopically guided adjustment or re-insertion. Of these 37 patients, 32 were enrolled. We observed no significant differences between right and left arm PICC access (P > 0.05), even though a higher percentage of PICCs were inserted into left arms (56.3%). The basilic vein (65.6%) was the most common insertion site. Only four patients experienced slight angiospasm (3.1%) and venous thrombosis (9.4%). US-guided PICC insertion failures were primarily due to line tip malposition (84.4%). All patients successfully underwent fluoroscopically guided tip repositioning, which resulted in optimal catheter tip positioning. PICC lines were adjusted in most patients (n=28, 87.5%).
Conclusion: Malposition was the primary issue causing US-guided PICC insertion failure. Fluoroscopically guided tip repositioning safely and efficaciously led to satisfactory positioning in difficult cases; thus, we recommend this method for patients failing US-guided PICC placement.
abstract_id: PUBMED:9456941
Are routine chest radiographs necessary after image-guided placement of internal jugular central venous access devices? Objective: The purpose of this study was to determine the value and cost of obtaining routine chest radiographs after image-guided placement of internal jugular central venous catheters.
Materials And Methods: We reviewed the records of 424 patients in whom 572 internal jugular catheters were placed by sonographic and fluoroscopic guidance over a 2-year period. Inspiratory and expiratory chest radiographs obtained immediately after each procedure were also reviewed.
Results: Routine postprocedural chest radiographs revealed no complications and did not alter the treatment of any patient. Delayed pneumothorax was detected after placement of two catheters (0.5%) when patient symptoms prompted additional radiographs.
Conclusion: Immediate postprocedural chest radiographs are not routinely needed after image-guided insertion of internal jugular central venous catheters and unnecessarily add to the cost of patient care.
abstract_id: PUBMED:35334177
ULTRASOUND-GUIDED CENTRAL LINE INSERTION IN CHILDREN: HOW MUCH IMAGING IS REALLY NEEDED? Introduction: A recent survey revealed that most pediatric surgeons use intraoperative fluoroscopy and routine postoperative chest radiography for catheter tip location in central line placement. The aim of this study is to review all cases of ultrasound-guided central line placements and to evaluate the role of postoperative chest radiography.
Methods: Retrospective data analysis of children submitted to percutaneous central line insertion under ultrasound control over a 2-year period in a pediatric surgery department. Data collected included: age, indication for central venous access, catheter type, usage of intraoperative fluoroscopy and postoperative chest radiography, complications, and whether chest radiography dictated any catheter-related intervention.
Results: Fifty-five long-term central lines were successfully established in children aged between 1 month and 17 years. All patients had the catheter tip position confirmed either by intraoperative fluoroscopy (96%), chest radiography (85%) or both (82%). Catheter tip overlying the cardiac silhouette (right atrium) on chest radiography was reported in 4 cases; these findings led to no change in catheter positioning or other catheter-related intervention. There were no catheter-related complications.
Conclusions: Percutaneous central line insertion under US control is safe and effective even in small children. Postoperative chest radiography did not dictate any modification of catheter tip positioning after central line placement with ultrasound and fluoroscopic control or identify any other complication, and thus should not be used routinely.
abstract_id: PUBMED:10501889
Routine chest radiographs after central line insertion: mandatory postprocedural evaluation or unnecessary waste of resources? Purpose: To study the cost and impact on patient management of the routine performance of chest radiographs in patients undergoing imaged-guided central venous catheter insertion.
Methods: Six hundred and twenty-one catheters placed in 489 patients over a 42-month period formed the study group. Catheters were placed in the right internal jugular vein (425), left internal jugular vein (133), and subclavian veins (63). At the end of the procedure fluoroscopy was used to assess catheter position and check for complications. A postprocedural chest radiograph was obtained in all patients.
Results: Postprocedural chest fluoroscopy showed no evidence of pneumothorax, hemothorax, or mediastinal hematoma. Inappropriate catheter tip position or catheter kinks were noted with 90 catheters. These problems were all corrected while the patient was on the interventional table. Postprocedural chest radiographs showed no complications but proximal catheter tip migration was noted in six of 621 catheters (1%). These latter six catheters required further manipulation. The total technical and related charges for the postprocedural chest radiographs in this series were estimated at £15,525.
Conclusion: Postprocedural chest radiographs after image-guided central venous catheter insertion are not routinely required. A postprocedural chest radiograph can be performed on a case-by-case basis at the discretion of the interventional radiologist.
abstract_id: PUBMED:26637319
Radiographic signs of non-venous placement of intended central venous catheters in children. Background: Central venous catheters (CVCs) are commonly used in children, and inadvertent arterial or extravascular cannulation is rare but has potentially serious complications.
Objective: To identify the radiographic signs of arterial placement of CVCs.
Materials And Methods: We retrospectively reviewed seven cases of arterially malpositioned CVCs on chest radiograph. These cases were identified through departmental quality-assurance mechanisms and external consultation. Comparison of arterial cases was made with 127 age-matched chest radiographs with CVCs in normal, expected venous location. On each anteroposterior (AP) radiograph we measured the distance of the catheter tip from the right lateral border of the thoracic spine, and the angle of the vertical portion of the catheter relative to the midline. On each lateral radiograph we measured the angle of the vertical portion of each catheter relative to the anterior border of the thoracic spine. When bilateral subclavian catheters were present, the catheter tips were described as crossed, overlapping or uncrossed.
Results: On AP radiographs, arterially placed CVCs were more curved to the left, with catheter tip positions located farther to the left of midline than normal venous CVCs. When bilateral, properly placed venous catheters were present, all catheters crossed at the level of the superior vena cava (SVC). When one of the bilateral catheters was in arterial position, neither of the catheters crossed or the inter-catheter crossover distance was exaggerated. On lateral radiographs, there was a marked anterior angulation of the vertical portion of the catheter (mean angle 37 ± 15° standard deviation [SD] in arterial catheters versus 5.9 ± 8.3° SD in normally placed venous catheters).
Conclusion: Useful radiographic signs suggestive of unintentional arterial misplacement of vascular catheters include leftward curvature of the vertical portion of the catheter, left-side catheter tip position, lack of catheter crossover on the frontal radiograph, as well as exaggerated anterior angulation of the catheter on the lateral chest radiograph.
abstract_id: PUBMED:27957644
Tip malposition of peripherally inserted central catheters: a prospective randomized controlled trial to compare bedside insertion to fluoroscopically guided placement. Objective: Peripherally inserted central catheter (PICC) use continues to increase, leading to the development of a blind bedside technique (BST) for placement. The aim of our study was to compare the BST with the fluoroscopically guided technique (FGT), with specific regard to catheter tip position (CTP).
Materials And Methods: One hundred eighty patients were randomized to either the BST or the FGT. All procedures were done by the same interventional team and included postprocedural chest X-ray to assess CTP. Depending on the international guidelines for optimal CTP, patients were classified in three types: optimal, suboptimal not needing repositioning, and nonoptimal requiring additional repositioning procedures. Fisher's test was used for comparisons.
Results: One hundred seventy-one PICCs were successfully inserted. In the BST group, 23.3% of placements were suboptimal and 30% nonoptimal, requiring repositioning. In the FGT group, 5.6% were suboptimal and 1.1% nonoptimal. Thus, rates of suboptimal and nonoptimal CTP were significantly lower in the FGT group (p < 0.001).
Conclusion: Tip malposition rates are high when using blind BST, exposing the patient to an increased risk of deep venous thrombosis and catheter malfunction. Using the FGT or emerging technologies that could help tip positioning are recommended, especially for long-term indications.
Key Points: • Bedside and fluoroscopy guided techniques are commonly used for PICC placement. • Catheter malposition is the major technical issue with the bedside technique. • Catheter malposition occurred in 53% of patients with the bedside technique.
abstract_id: PUBMED:29706443
Tunnelled central venous access devices in small children: A comparison of open vs. ultrasound-guided percutaneous insertion in children weighing ten kilograms or less. Purpose: Ultrasound-guided (USG) percutaneous insertion of tunnelled central venous access devices (CVADs) has been shown to be safe and effective in adults. However, there have been concerns over the safety of this technique in small children. This paper analyses the safety of USG percutaneous CVAD insertion in the pediatric population weighing ten kilograms or less.
Method: All surgically inserted CVADs for children weighing ten kilograms or less, between January 2010 and December 2015 at the Children's Hospital at Westmead were retrospectively reviewed. Open and USG percutaneous techniques were compared with intraoperative complications as the primary outcome variable. Secondary outcome measures included conversion to open technique, postoperative complications, operating time and catheter longevity.
Results: 232 cases were identified: 96 (41.4%) open, 136 (58.6%) USG percutaneous. Age ranged <1-48 months; weight 0.7-10 kg. CVADs ranged 2Fr-9Fr in size. Eleven USG percutaneous cases required conversion to open. There was no significant difference in intraoperative complication rate between open (11/96, 11.5%) and USG percutaneous (19/136, 14.0%) groups (p = 0.574). There was no significant difference in overall postoperative complications, operative time or catheter longevity. Mechanical blockage was significantly higher in the open group than the USG percutaneous group (21% vs 10%, p = 0.015).
Conclusion: USG percutaneous CVAD insertion is safe in children weighing ten kilograms or less. Open catheter insertion may be associated with higher rates of post-operative catheter blockage in small children.
Level Of Evidence: Level III.
abstract_id: PUBMED:37808595
Umbilical Venous Catheter Position: The Value of Acquiring a Lateral in Addition to a Frontal Chest Radiograph. Introduction Umbilical venous catheters (UVCs) are standardly used for central venous access in acutely sick neonates. Complications associated with UVCs include thrombosis, infection, diffuse intravascular coagulopathy, arrhythmia, tamponade, and liver injury, many of which are related to misplacement of the catheters. Therefore, this study aimed to institute a policy of obtaining lateral and frontal radiographs to improve the determination of the UVC position. Methods We retrospectively reviewed UVC placement from 132 radiographs. We compared interpretations by different reviewers of frontal versus frontal and lateral chest radiographs for the most accurate determination of the UVC position. The reviewers completed questionnaires indicating their assessment of the catheter tip position, as well as the appropriate catheter manipulation required for optimal positioning. Their assessment was derived from frontal chest radiographs followed by frontal plus lateral view radiographs a week later. Results The reviewers (junior neonatology fellow, senior neonatology fellow, pediatric radiology fellow, and senior pediatric radiologist) revised their assessment with regard to the UVC positioning between frontal and frontal plus lateral radiographs in 24.6%, 22.7%, 19.6%, and 15.9% of cases, respectively, and indicated that the lateral view was helpful in 18%, 13.6%, 19.6%, and 31% of the cases, respectively. UVCs were placed appropriately at the first attempt in only 13.6% of the cases. Conclusion Correct initial placement of a UVC is uncommon. A lateral radiograph is beneficial in determining the UVC position. Hence, we suggest the inclusion of a lateral view along with the frontal chest radiograph for the evaluation of the UVC position if real-time ultrasound cannot be performed before UVC usage.
abstract_id: PUBMED:29943891
Outcome of landmark-guided percutaneously inserted tunneled central venous catheters in infants and children under 3 years with cancer. Background: There is a paucity of information on procedural and long-term outcomes of tunneled central venous catheters (TCVC) in infants and children younger than 3 years undergoing anticancer therapy. This study aims to evaluate the success, safety, and complications leading to surgical revision or premature removal of TCVC in this particular patient group.
Methods: The clinical course of pediatric patients with percutaneous inserted TCVC, including Groshong (GC) and Hickman/Broviac (HB) catheters, has been analyzed retrospectively. The data analysis includes patient and device characteristics, adverse events during insertion, and dwell period complications.
Results: A consecutive series of 238 children undergoing implantation of 273 TCVC, including 148 (54.2%) GC and 125 (45.8%) HB catheters, with a total of 38,209 catheter days at risk (cdr) were reviewed. The patient cohort consisted of 65 (23.8%) infants, 77 (28.2%) children aged 1-2 years, and 131 (48.0%) aged 2-3 years. The overall rate of adverse events during catheter insertion was 12.8% (n = 35) with no differences between age groups or devices. The overall rate of long-term complication was 28.2% (n = 77, catheter risk [CR] per 1,000 cdr = 1.75), with the highest prevalence in infants (P = 0.01). The most common complication was late dislocation (n = 24, 8.8%, CR = 0.47), followed by early dislocation (n = 20, 7.3%) and infection (n = 18, 7.4%, CR = 0.42).
Conclusion: Percutaneous landmark-guided insertion of TCVC in neonates and small children with cancer is safe. Patterns of long-term complications are different from those for older children and should be prevented through appropriate management.
Answer: Based on the provided abstracts, routine chest radiographs after fluoroscopically guided percutaneous insertion of central venous catheters (CVCs) in children may not be necessary in all cases. A retrospective review of children who underwent percutaneous fluoroscopically guided CVC insertions indicated that 99.3% of postprocedural radiographs showed no complications, and the two complications detected did not require treatment (PUBMED:18280286). Another study found that routine postprocedural chest radiographs did not reveal any complications or alter patient treatment after image-guided placement of internal jugular central venous catheters (PUBMED:9456941). Similarly, a study on ultrasound-guided central line insertions in children concluded that postoperative chest radiography did not dictate any modification of catheter tip positioning or identify any other complication, suggesting it should not be used routinely (PUBMED:35334177). Furthermore, a study examining the cost and impact of routine chest radiographs after image-guided central venous catheter insertion found that postprocedural chest radiographs are not routinely required and can be performed on a case-by-case basis at the discretion of the interventional radiologist (PUBMED:10501889).
In summary, the evidence from these studies suggests that routine postprocedural chest radiographs may not be justified and are not cost-effective after fluoroscopically guided percutaneous CVC insertions in children, especially in the absence of clinical indications of complications. However, it is important to note that clinical judgment should be used to determine the necessity of a chest radiograph on a case-by-case basis, considering the specific circumstances and potential risks for each patient.
Instruction: SARS outbreak in Germany 2003: workload of local health departments and their compliance in quarantine measures--implications for outbreak modeling and surge capacity?
Abstracts:
abstract_id: PUBMED:16614559
SARS outbreak in Germany 2003: workload of local health departments and their compliance in quarantine measures--implications for outbreak modeling and surge capacity? Objective: Public health management of severe acute respiratory syndrome epidemic must be evaluated to improve contingency planning for epidemics.
Methods: Standardized questionnaires on case management were sent to local health departments of 15 of 16 states in Germany.
Results: Of the 384 local health departments who received the questionnaire, 280 (72%) completed them. They reported 271 suspect or probable severe acute respiratory syndrome cases under investigation (average 4.7). The average duration of quarantine was 5.4 days. Contacts without professional activity were 2.78 times more likely to stay under 10-day quarantine than those with professional activity (CI: 0.80-9.86). Local health departments with at least one case under investigation had invested an average of 104.5 working hours.
Conclusions: Our contact-case ratios may serve for planning for modeling in epidemics. We found discrepancies between local and national surveillance figures; home quarantine was frequently not applied as recommended and the burden on urban health departments was disproportionally higher. Flexibility of the national surveillance system and surge capacity for the prevention of future epidemics need improvement, particularly in urban health departments.
abstract_id: PUBMED:33623806
Effective Containment of a COVID-19 Subregional Outbreak in Italy Through Strict Quarantine and Rearrangement of Local Health Care Services. Background: Since the beginning of the pandemic, the epidemiology of coronavirus disease 2019 (COVID-19) in Italy has been characterized by the occurrence of subnational outbreaks. The World Health Organization recommended building the capacity to rapidly control COVID-19 clusters of cases in order to avoid the spread of the disease. This study describes a subregional outbreak of COVID-19 that occurred in the Emilia Romagna region, Italy, and the intervention undertaken to successfully control it.
Methods: Cases of COVID-19 were defined by a positive reverse transcriptase polymerase chain reaction (RT-PCR) for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) on nasopharyngeal swab. The outbreak involved the residential area of a small town, with ~10 500 inhabitants in an area of 9 km². After the recognition of the outbreak, local health care authorities implemented strict quarantine and a rearrangement of health care services, consisting of closure of general practitioner outpatient clinics, telephone contact with all residents, activation of health care units to visit at-home patients with symptoms consistent with COVID-19, and a dedicated Infectious Diseases ambulatory unit at the nearest hospital.
Results: The outbreak lasted from February 24 to April 6, 2020, involving at least 170 people with a cumulative incidence of 160 cases/10 000 inhabitants; overall, 448 inhabitants of the municipality underwent at least 1 nasopharyngeal swab to detect SARS-CoV-2 (positivity rate, 38%). Ninety-three people presented symptoms before March 11 (pre-intervention period), and 77 presented symptoms during the postintervention period (March 11-April 6).
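For orientation, the reported cumulative incidence and swab positivity follow directly from the counts above; a brief sketch of the arithmetic, with rounding as the authors appear to have applied it:

\[ \text{cumulative incidence} = \frac{170}{10\,500} \times 10\,000 \approx 162 \approx 160 \text{ cases per } 10\,000 \text{ inhabitants}, \qquad \text{positivity} = \frac{170}{448} \approx 38\%. \]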
Conclusions: It was possible to control this COVID-19 outbreak by prompt recognition and implementation of a targeted local intervention.
abstract_id: PUBMED:16396407
Risk perception and compliance with quarantine during the SARS outbreak. Purpose: To explore the experience of being on quarantine for severe acute respiratory syndrome (SARS) with a focus on the relationship between perceived risk of contracting SARS and reported compliance with the quarantine order and protocols.
Design: Descriptive, qualitative.
Methods: Semi-structured interviews were conducted with people who had been quarantined during the SARS outbreak in Toronto in 2003. Data analysis was completed using an iterative and collaborative approach of reading and re-reading the transcribed interviews, identifying common themes, and comparing and contrasting the data.
Findings: To varying extents, participants wavered between fear and denial about their risk of contracting or spreading SARS. Reported compliance with the actual quarantine order was high. However, within households quarantine protocols were followed unevenly.
Conclusions: This research indicates the need for greater credibility in public health communications to increase compliance with quarantine protocols and to contain outbreaks of new and deadly infectious diseases.
abstract_id: PUBMED:32662390
Disease Outbreak Surge Response: How a Singapore Tertiary Hospital Converted a Multi-story Carpark Into a Flu Screening Area to Respond to the COVID-19 Pandemic. Coronavirus disease 2019 (COVID-19), first documented in December 2019, was declared a public health emergency by the World Health Organization (WHO) on January 30, 2020 (https://www.who.int/westernpacific/emergencies/covid-19). The disease, caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) virus, has affected more than 9 million people and contributed to at least 490,000 deaths globally as of June 2020, with numbers on the rise (https://www.worldometers.info/coronavirus/#countries).Increased numbers of patients seeking medical attention during disease outbreaks can overwhelm healthcare facilities, hence requiring an equivalent response from healthcare services. Surge capacity is a concept that has not only been defined as the "ability to respond to a sudden increase in patient care demands" (Hick et al., Disaster Med Public Health Prep. 2008;2:S51-S57) but also to "effectively and rapidly expand capacity" (Watson et al., Milbank Q. 2013;91(1):78-122).This narrative review discusses how Singapore's largest tertiary hospital has encapsulated the elements of surge capability and transformed a peacetime multi-story carpark into a flu screening area in response to the COVID-19 disease outbreak.
abstract_id: PUBMED:17055533
Impact of quarantine on the 2003 SARS outbreak: a retrospective modeling study. During the 2003 Severe Acute Respiratory Syndrome (SARS) outbreak, traditional intervention measures such as quarantine and border control were found to be useful in containing the outbreak. We used laboratory-verified SARS case data and the detailed quarantine data in Taiwan, where over 150,000 people were quarantined during the 2003 outbreak, to formulate a mathematical model which incorporates Level A quarantine (of potentially exposed contacts of suspected SARS patients) and Level B quarantine (of travelers arriving at borders from SARS-affected areas) implemented in Taiwan during the outbreak. We obtained the average case fatality ratio and the daily quarantine rate for the Taiwan outbreak. Model simulations are used to show that Level A quarantine prevented approximately 461 additional SARS cases and 62 additional deaths, while the effect of Level B quarantine was comparatively minor, yielding only around a 5% reduction in cases and deaths. The combined impact of the two levels of quarantine reduced the number of cases and deaths by almost half. The results demonstrate how modeling can be useful for qualitative evaluation of the impact of traditional intervention measures during a newly emerging infectious disease outbreak when there is inadequate information on the characteristics and clinical features of the new disease; such measures could become particularly important with the looming threat of a global flu pandemic possibly caused by a novel mutating flu strain, including one of avian origin.
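The abstract does not reproduce the model equations, so the sketch below is only a generic illustration of how quarantine of exposed contacts can be folded into a simple SEIR-type compartmental model. It is not a reconstruction of the Taiwan model, and the parameter values (beta, sigma, gamma, and the quarantine rate q) are arbitrary placeholders.

# Generic SEIR model with quarantine of exposed contacts (illustrative only).
# States: S susceptible, E exposed, Q quarantined exposed, I infectious, R removed.
# Quarantined individuals are assumed not to transmit and, as a simplification,
# move directly to the removed class.
def final_size(beta=0.4, sigma=1/5, gamma=1/7, q=0.2, days=200, dt=0.1, n=1_000_000):
    S, E, Q, I, R = n - 1.0, 0.0, 0.0, 1.0, 0.0
    for _ in range(int(days / dt)):
        new_exposed = beta * S * I / n        # force of infection from the I class only
        dS = -new_exposed
        dE = new_exposed - (sigma + q) * E    # exposed either progress or are quarantined
        dQ = q * E - sigma * Q
        dI = sigma * E - gamma * I
        dR = gamma * I + sigma * Q
        S, E, Q, I, R = S + dS*dt, E + dE*dt, Q + dQ*dt, I + dI*dt, R + dR*dt
    return R                                   # cumulative removed ~ final outbreak size

# Comparing q = 0 with q > 0 gives a crude sense of how quarantining exposed
# contacts shrinks the final size, mirroring the qualitative conclusion of the study.
print(final_size(q=0.0), final_size(q=0.2))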
abstract_id: PUBMED:34774108
Validation of the French ADNM-20 in the assessment of emotional difficulties resulting from COVID-19 quarantine and outbreak. Background: Multiple psychological consequences of the COVID-19 outbreak and quarantine have been described. However, there is a lack of global conceptualization. We argue that the stressful aspects of the situation, the multiple environmental consequences of the outbreak, and the diversity of symptoms observed in such a situation, suggest that Adjustment disorder (AD) is a promising way to conceptualize the psychological consequences of the outbreak and quarantine. The first aim of the study was to validate the French version of the ADNM. The second aim was to set out adjustment difficulties resulting from COVID-19 outbreak and quarantine.
Method: We recruited 1010 participants (840 women, 170 men) who consented online to participate. They filled out the French ADNM, visual analogic scales, HADS, IES, and the COPE, to evaluate coping strategies.
Results: We confirmed the factor structure of the ADNM and found good psychometric properties. We found that 61.3% of participants presented an adjustment disorder related to the COVID-19 outbreak. We identified multiple risk factors and protective factors for AD due to quarantine and the outbreak, as well as the coping strategies negatively and positively associated with AD.
Conclusion: Adjustment disorder is a relevant concept for understanding the psychological manifestations caused by quarantine and outbreak. The French ADNM has good psychometric properties for evaluating such manifestations. The association between coping strategies and AD symptoms suggests that CBT may be the best intervention to help people suffering from AD.
abstract_id: PUBMED:33186679
Lessons from managing a campus mumps outbreak using test, trace, and isolate efforts. In 2017, Penn State University's campus experienced a mumps outbreak that coincided with unrelated restrictions on social gatherings. University Health Services implemented testing, contact tracing, and quarantine and isolation protocols. Approximately half of the supplied contact tracing information was usable, ∼70% of identified contacts were reached, and <50% of those contacted complied with quarantine protocol. Students with confirmed mumps reported ∼7.4 (1-35) contacts on average. Findings from this outbreak can inform future outbreak management on college campuses, including COVID-19, by estimating average contacts per case, planning capacity for testing and quarantine/isolation, and strategically increasing compliance with suggested interventions.
abstract_id: PUBMED:35574268
Reconstruction of the origin of the first major SARS-CoV-2 outbreak in Germany. The first major COVID-19 outbreak in Germany occurred in Heinsberg in February 2020 with 388 officially reported cases. Unexpectedly, the first outbreak happened in a small town with little to no travelers. We used phylogenetic analyses to investigate the origin and spread of the virus in this outbreak. We sequenced 90 (23%) SARS-CoV-2 genomes from the 388 reported cases including the samples from the first documented cases. Phylogenetic analyses of these sequences revealed mainly two circulating strains with 74 samples assigned to lineage B.3 and 6 samples assigned to lineage B.1. Lineage B.3 was introduced first and probably caused the initial spread. Using phylogenetic analysis tools, we were able to identify closely related strains in France and hypothesized the possible introduction from France.
abstract_id: PUBMED:34592263
Differential impacts of contact tracing and lockdowns on outbreak size in COVID-19 model applied to China. The COVID-19 pandemic has led to widespread attention given to the notions of "flattening the curve" during lockdowns, and successful contact tracing programs suppressing outbreaks. However a more nuanced picture of these interventions' effects on epidemic trajectories is necessary. By mathematical modeling each as reactive quarantine measures, dependent on current infection rates, with different mechanisms of action, we analytically derive distinct nonlinear effects of these interventions on final and peak outbreak size. We simultaneously fit the model to provincial reported case and aggregated quarantined contact data from China. Lockdowns compressed the outbreak in China inversely proportional to population quarantine rates, revealing their critical dependence on timing. Contact tracing had significantly less impact on final outbreak size, but did lead to peak size reduction. Our analysis suggests that altering the cumulative cases in a rapidly spreading outbreak requires sustained interventions that decrease the reproduction number close to one, otherwise some type of swift lockdown measure may be needed.
abstract_id: PUBMED:34212347
Outbreak of COVID-19 on an industrial ship. Background: People on ships are at high risk for outbreaks of infectious diseases including coronavirus disease 2019 (COVID-19). A rapid and well-coordinated response is important to curb transmission of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). We studied an outbreak on an industrial ship to improve outbreak control for ships and coordination between participating harbour partners.
Materials And Methods: Public Health Service (PHS) Rotterdam-Rijnmond performed an epidemiological investigation during the outbreak of COVID-19 among 77 seafarers on a ship in their port. The captain was interviewed about ship details and his experiences during the outbreak. The seafarers were asked to fill in questionnaires about symptoms suspicious of COVID-19 and date of symptom onset. Information about stakeholders involved in outbreak control was registered.
Results: The captain first contacted PHS about probable cases on March 31st 2020 via a physician ashore. One crewmember was hospitalised on April 8th and another died unexpectedly aboard on April 10th. Questionnaires distributed mid-April to the 75 remaining seafarers showed that 38 of 60 responders (63%) had had suspicious symptoms between February 15th and April 13th. None of them were tested but a total of 8 other crewmembers tested positive for COVID-19 after leaving the ship, including the hospitalised crewmember and the one who died aboard. On May 5th, the last case left isolation and the quarantine ended. Many different stakeholders were involved in the outbreak response and responsibilities were not always fully clear beforehand, causing coordination issues.
Conclusions: Testing crew with COVID-19 symptoms underpins control measures and clarifies communication between stakeholders. Building a network beforehand to develop outbreak guidelines tailored to ships and local circumstances is essential to control future outbreaks on ships.
Answer: The SARS outbreak in Germany in 2003 placed a significant workload on local health departments, with those departments that had at least one case under investigation investing an average of 104.5 working hours. The responding departments reported a total of 271 suspect or probable severe acute respiratory syndrome cases under investigation, with an average quarantine duration of 5.4 days. However, compliance with quarantine measures was not consistent: contacts without professional activity were 2.78 times more likely to remain under the 10-day quarantine than those with professional activity, although this estimate was imprecise (CI: 0.80-9.86) (PUBMED:16614559).
The experience from the SARS outbreak highlighted the need for improvements in the flexibility of the national surveillance system and surge capacity, especially in urban health departments. The discrepancies between local and national surveillance figures, as well as the frequent non-adherence to recommended home quarantine, suggest that better planning and resources are necessary to prevent future epidemics. The contact-case ratios reported in the study may serve as a reference for planning and modeling in future epidemics (PUBMED:16614559).
In the context of outbreak modeling and surge capacity, the SARS outbreak in Germany demonstrated the importance of having a responsive and well-resourced public health infrastructure that can quickly adapt to the demands of an epidemic. The ability to effectively quarantine individuals and ensure compliance is crucial for containing the spread of infectious diseases. The lessons learned from the SARS outbreak can inform the development of strategies to enhance surge capacity and improve the effectiveness of quarantine measures during future outbreaks.
Instruction: Are warm ischemia and ischemia time still predictive factors of poor renal function after partial nephrectomy in the setting of elective indication?
Abstracts:
abstract_id: PUBMED:24700310
Are warm ischemia and ischemia time still predictive factors of poor renal function after partial nephrectomy in the setting of elective indication? Purpose: To evaluate renal function and to identify factors associated with renal dysfunction in the elective indications setting of nephron-sparing surgery (NSS).
Methods: We retrospectively reviewed operative data and glomerular filtration rate (GFR) of 519 patients treated by NSS in an elective indications setting between 1984 and 2006 in eight academic institutions. A GFR decrease under the thresholds of 60 or 45 ml/min at last follow-up was considered a significant renal dysfunction. Univariate and multivariate regression models were used to assess multiple factors of renal function.
Results: Median age, tumor size, preoperative, and final GFR were 59.5 years (27-84), 2.7 cm (0.9-11), 79 (45-137), and 69 ml/min (p < 0.0001), respectively, with a median follow-up of 23 months (1-416). Hilar clamping was performed in 375 procedures (72.3 %). Significant GFR decrease was observed in 89 patients (17.1 %). Median operating time, hilar clamping duration, and blood loss were 137 min (55-350), 22 min (0-90), and 150 ml (0-4150), respectively. At univariate analysis, age (p = 0.002), preoperative GFR (p = 0.001), pedicular clamping (p = 0.01), and ischemia time (p = 0.0001) were associated with renal dysfunction. Age (p = 0.004; HR 1.2), pedicular clamping (p = 0.04; HR 1.3), and ischemia time (p = 0.0001; HR 1.8) remained independent risk factors for renal function deterioration in multivariate analysis.
Conclusion: Non- or time-limited clamping techniques are associated with preservation of renal function in the elective indications setting of NSS.
abstract_id: PUBMED:27409986
Impact of ischaemia time on renal function after partial nephrectomy: a systematic review. Objective: To assess the impact of ischaemia on renal function after partial nephrectomy (PN).
Materials And Methods: A literature review was performed according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) criteria. In January 2015, the Medline and Embase databases were systematically searched using the protocol ('warm ischemia'[mesh] OR 'warm ischemia'[ti]) AND ('nephrectomy'[mesh] OR 'partial nephrectomy'[ti]). An updated search was performed in December 2015. Only studies based on a solitary kidney model or on a two-kidney model but with assessment of split renal function were included in this review.
Results: Of the 1119 studies identified, 969 abstracts were screened after duplicates were removed: 29 articles were finally included in this review, including nine studies that focused on patients with a solitary kidney. None of the nine studies adjusting for the amount of preserved parenchyma found a negative impact of warm ischaemia time on postoperative renal function, unless this was extended beyond a 25-min threshold. The quality and the quantity of preserved parenchyma appeared to be the main contributors to postoperative renal function.
Conclusion: Currently, no evidence supports that limited ischaemia time (i.e. ≤25 min) has a higher risk of reducing renal function after PN compared to a 'zero ischaemia' technique. Several recent studies have suggested that prolonged warm ischaemia (>25-30 min) could cause an irreversible ischaemic insult to the surgically treated kidney.
abstract_id: PUBMED:30929839
Development and Internal Validation of a Nomogram for Predicting Renal Function after Partial Nephrectomy. Loss of renal function can be a clinically impactful event after partial nephrectomy (PN). We aimed to create a model to predict loss of renal function in patients undergoing PN. Data for 1897 consecutive patients who underwent PN with warm ischemia between 2008 and 2017 were extracted from our institutional database. Loss of renal function was defined as upstaging of chronic kidney disease in terms of the estimated glomerular filtration rate (eGFR) at 3 mo after PN. A nomogram was built based on a multivariable model comprising age, sex, body mass index, baseline eGFR, RENAL score, and ischemia time. Internal validation and calibration were performed using data from 676 patients for whom complete data were available. Receiver operator characteristic (ROC) curves with 1000 bootstrap replications were plotted, as well as the observed incidence versus the nomogram-predicted probability. We also applied the extreme training versus test procedure known as leave-one-out cross-validation. After internal validation, the area under the ROC curve was 76%. The model demonstrated excellent calibration. At an upstaging cutoff of 27% probability, upstaging was predicted with a positive predictive value of 86%. PATIENT SUMMARY: In this report, we created a model to predict postoperative loss of renal function after partial nephrectomy for renal tumors. Inputting baseline characteristics and ischemia time into our model allows early identification of patients at higher risk of renal function decline after partial nephrectomy with good predictive power.
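The abstract describes the modelling workflow (a multivariable model on six predictors, ROC analysis, calibration, and leave-one-out cross-validation) without code; the sketch below shows one conventional way such an internal validation could be assembled in scikit-learn. The synthetic data, column names, and choice of logistic regression are assumptions made purely for illustration and are not taken from the study.

# Illustrative internal validation of a risk model for CKD upstaging after partial
# nephrectomy; synthetic placeholder data, not the study's dataset or code.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "age": rng.normal(60, 10, n),
    "sex": rng.integers(0, 2, n),
    "bmi": rng.normal(27, 4, n),
    "baseline_egfr": rng.normal(80, 15, n),
    "renal_score": rng.integers(4, 13, n),        # RENAL nephrometry score, hypothetical range
    "ischemia_time": rng.normal(20, 7, n),
})
df["ckd_upstaged"] = (rng.random(n) < 0.25).astype(int)   # placeholder outcome label

X, y = df.drop(columns="ckd_upstaged"), df["ckd_upstaged"]
model = LogisticRegression(max_iter=1000)

# Leave-one-out cross-validated probabilities, analogous to the internal validation described.
proba = cross_val_predict(model, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
print("cross-validated AUC:", roc_auc_score(y, proba))

# A probability cut-off (the abstract cites 27%) can then be screened for its
# positive predictive value on the held-out predictions.
flagged = proba >= 0.27
ppv = (y[flagged] == 1).mean() if flagged.any() else float("nan")
print("PPV at 27% threshold:", ppv)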
abstract_id: PUBMED:24917728
Comparison of the loss of renal function after cold ischemia open partial nephrectomy, warm ischemia laparoscopic partial nephrectomy and laparoscopic partial nephrectomy using microwave coagulation. Purpose: Nephron sparing surgery is an effective surgical option in patients with renal cell carcinoma. Laparoscopic partial nephrectomy involves clamping and unclamping techniques of the renal vasculature. This study compared the postoperative renal function of partial nephrectomy using an estimation of the glomerular filtration rate (eGFR) for a Japanese population in 3 procedures: open partial nephrectomy in cold ischemia (OPN), laparoscopic partial nephrectomy in warm ischemia (LPN), and microwave coagulation using laparoscopic partial nephrectomy without ischemia (MLPN).
Materials And Methods: A total of 57 patients underwent partial nephrectomy in Yokohama City University Hospital from July 2002 to July 2008. 18 of these patients underwent OPN, 17 patients received MLPN, and 22 patients had LPN. The renal function evaluation included eGFR, as recommended by The Japanese Society of Nephrology.
Results: There was no significant difference between the 3 groups in the reduction of eGFR. eGFR loss in the OPN group was significantly higher in patients that experienced over 20 minutes of ischemia time. eGFR loss in LPN group was significantly higher in patients that experienced over 30 minutes of ischemia time.
Conclusion: This study showed that all 3 procedures for small renal tumor resection were safe and effective for preserving postoperative renal function.
abstract_id: PUBMED:30662055
FACTORS AFFECTING SHORT-TERM RENAL FUNCTION AFTER PARTIAL NEPHRECTOMY (Objectives) Recently, partial nephrectomy has been recommended for patients with T1 renal cell carcinoma to preserve renal function. In this study, we retrospectively investigated the factors that affect renal function after laparoscopic or robotic partial nephrectomy using cold or warm ischemia. (Patients and methods) We reviewed 105 patients who underwent laparoscopic or robotic partial nephrectomy between March 2006 and July 2016. Patients who had a single kidney were excluded. Thirty-nine patients were managed with cold ischemia, and 66 were managed with warm ischemia. Renal function was assessed using the estimated glomerular filtration rate (eGFR) and glomerular filtration rate (GFR) categories of the stage of chronic kidney disease (CKD). (Results) In the cold and warm ischemia groups, the duration of ischemia was significantly correlated with deterioration of the eGFR at 12 months postoperatively, but the duration of ischemia was not significantly correlated with exacerbation of the GFR categories for the stage of CKD in multivariate analyses. (Conclusions) These results suggest that the ischemia time may not have an impact on prognosis. However, due to the lack of deaths from renal carcinoma or cardiovascular events postoperatively in this study, the influence of each factor on overall survival or cardiovascular events could not be evaluated. More investigations are necessary to discern the acceptable level of deterioration and the corresponding clinical implications for postoperative eGFR.
abstract_id: PUBMED:28723490
Renal Preservation and Partial Nephrectomy: Patient and Surgical Factors. Context: Optimization of the partial nephrectomy (PN) procedure in terms of preservation of functional outcomes is of special importance.
Objective: To review the most important patient and surgical factors that may influence the three elements that ultimately define the preservation of renal function (RF) after PN: preoperative RF, quantity of parenchyma preserved, and nephron recovery from ischemic insult.
Evidence Acquisition: A nonsystematic review of the literature was conducted. Relevant databases were searched for studies providing data on surgical, patient, and tumour factors predictive of RF preservation after PN.
Evidence Synthesis: Many renal cell carcinoma patients have low RF at baseline or are at risk of rapid progression of chronic kidney disease. A glomerular filtration rate (GFR) of ≤45ml/min/1.73m2 after PN is associated with higher risk of a 50% drop in GFR or dialysis. Greater tumor size and complexity are nonmodifiable factors that predict worse postoperative RF, longer warm ischemia time (IT), and greater healthy parenchymal volume loss (HPVL). Global renal ischemic injury can be minimized using off-clamp or selective minimal renal ischemia techniques that vary from simple regional ischemia to more complex techniques such as tertiary or higher-order renal arterial branch clamping. However, the quality and quantity of parenchymal mass preserved are the main predictors of RF after PN, and IT seems to have a secondary role, as long as warm IT is limited or ischemia is hypothermic. HPVL is minimized using enucleation techniques (oncologically equivalent to traditional PN for low-grade tumors in retrospective studies) and reduction of the parenchyma incorporated in renorrhaphy. Evidence on the comparative effectiveness of the various PN surgical approaches (open, laparoscopic, robotic, and thermoablation) in terms of functional outcomes is characterized by low overall quality.
Conclusions: Efforts should be made to optimize the modifiable surgical factors identified for maximum RF preservation after PN. The low quality of evidence regarding the various surgical strategies for preserving RF prevents definitive conclusions.
Patient Summary: We reviewed the literature to determine the most important modifiable and non-modifiable factors that ultimately influence renal function after partial nephrectomy. The most important factors are the preoperative renal function and the volume of healthy renal parenchyma that the surgeon can spare during tumor resection, as long as the time of renal ischemia is limited. We discuss the strategies that allow optimization of the modifiable factors, ultimately leading to maximization of renal function after partial nephrectomy.
abstract_id: PUBMED:36708175
Impact of ischemia time during partial nephrectomy on short- and long-term renal function. Objective: Partial nephrectomy is the gold standard treatment for small renal tumours. During partial nephrectomy, the renal artery is clamped, which creates transient ischemia. This can damage nephrons and may affect kidney function both immediately postoperatively and in the long term. In the present study, we investigated the effect of ischemia time during partial nephrectomy on renal function immediately postoperatively and one year after surgery.
Materials And Method: A retrospective cohort study including 124 patients who underwent partial nephrectomy at a single regional hospital in the period from 2018 to 2020 was conducted.
Results: We divided patients into subgroups based on ischemia time: [0-8], [9-13] and [14-29] minutes. The mean eGFR (mL/min) was 73.9 before surgery and 66.8 at 12 months post-surgery. We found no significant correlation between ischemia time and renal function. Notably, none of the patients had an ischemia time greater than 30 min.
Conclusion: In this cohort, the duration of ischemia was not associated with differences in renal function on either short-term or long-term parameters when ischemia time was kept below 30 min.
abstract_id: PUBMED:26994776
Early impact of robot-assisted partial nephrectomy on renal function as assessed by renal scintigraphy. To measure the early impact of robot-assisted partial nephrectomy (RAPN) on renal function as assessed by renal scan (Tc 99m-DTPA), addressing the issue of risk factors for ischemic damage to the kidney. All patients undergoing RAPN for cT1 renal masses between June 2013 and May 2014 were included in this prospective study. Renal function as expressed by glomerular filtration rate (GFR) was assessed by Technetium 99m-diethylenetriaminepentaacetic acid (Tc 99m-DTPA) renal scan preoperatively and postoperatively at 1 month in every patient. A multivariable analysis was used for the determination of independent factors predictive of GFR decrease of the operated kidney. Overall, 32 patients underwent RAPN in the time interval. Median tumor size, blood loss, and ischemia time were 4 cm, 200 mL, and 24 min, respectively. Two grade III complications occurred (postoperative bleeding in the renal fossa, urinoma). The GFR of the operated kidney decreased significantly from 51.7 ± 15.1 mL/min per 1.73 m(2) preoperatively to 40.12 ± 12.4 mL/min per 1.73 m(2) 1 month postoperatively (p = 0.001) with a decrease of 22.4%. On multivariable analysis, only tumor size (p = 0.05) was a predictor of GFR decrease of the operated kidney. Robotic-assisted partial nephrectomy had a detectable impact on early renal function in a series of relatively large tumors and prevailing intermediate nephrometric risk. A mean decrease of 22% of GFR as assessed by renal scan in the operated kidney was found at 1 month postoperatively. In multivariable analysis, tumor size only was a significant predictor of renal function loss.
abstract_id: PUBMED:34448346
Outcomes in robot-assisted partial nephrectomy for imperative vs elective indications. Objectives: To assess and compare peri-operative outcomes of patients undergoing robot-assisted partial nephrectomy (RAPN) for imperative vs elective indications.
Patient And Methods: We retrospectively reviewed a multinational database of 3802 adults who underwent RAPN for elective and imperative indications. Laparoscopic or open partial nephrectomy (PN) were excluded. Baseline data for age, gender, body mass index, American Society of Anaesthesiologists score and PADUA score were examined. Patients undergoing RAPN for an imperative indication were matched to those having surgery for an elective indication using propensity scores in a 1:3 ratio. Primary outcomes included organ ischaemic time, operating time, estimated blood loss (EBL), rate of blood transfusions, Clavien-Dindo complications, conversion to radical nephrectomy (RN) and positive surgical margin (PSM) status.
Results: After propensity-score matching for baseline variables, a total of 304 patients (76 imperative vs 228 elective indications) were included in the final analysis. No significant differences were found between groups for ischaemia time (19.9 vs 19.8 min; P = 0.94), operating time (186 vs 180 min; P = 0.55), EBL (217 vs 190 mL; P = 0.43), rate of blood transfusions (2.7% vs 3.7%; P = 0.51), or Clavien-Dindo complications (P = 0.31). A 38.6% (SD 47.9) decrease in Day-1 postoperative estimated glomerular filtration rate was observed in the imperative indication group and an 11.3% (SD 45.1) decrease was observed in the elective indication group (P < 0.005). There were no recorded cases of permanent or temporary dialysis. There were no conversions to RN in the imperative group, and seven conversions (5.6%) in the elective group (P = 0.69). PSMs were seen in 1.4% (1/76) of the imperative group and in 3.3% of the elective group (7/228; P = 0.69).
Conclusion: We conclude that RAPN is feasible and safe for imperative indications and demonstrates similar outcomes to those achieved for elective indications.
abstract_id: PUBMED:24033600
Different methods of hilar clamping during partial nephrectomy: Impact on renal function. Objectives: To evaluate the impact of different hilar clamping methods on changes in renal function after partial nephrectomy.
Methods: We analyzed the clinical data of 369 patients who underwent partial nephrectomy for a single renal tumor of size ≤4.0 cm and a normal contralateral kidney. Patients were separated into three groups depending on hilar clamping method: non-clamping, cold ischemia and warm ischemia. Estimated glomerular filtration rate was examined at preoperative, nadir and 1 year postoperatively. Percent change in estimated glomerular filtration rate was used as the parameter to assess the renal functional outcome.
Results: Percent change in nadir estimated glomerular filtration rate in the non-clamping group was significantly less compared with the cold ischemia and warm ischemia groups (P < 0.001). However, no significant differences among the groups were noted in percent change of estimated glomerular filtration rate at 1 year (P = 0.348). The cold ischemia group had a similar serial change of postoperative renal function compared with the warm ischemia group. Percent change in 1-year estimated glomerular filtration rate increased with increasing ischemia time in the cold ischemia (P for trend = 0.073) and warm ischemia groups (P for trend = 0.010). On multivariate analysis, hilar clamping (both warm ischemia and cold ischemia) were significantly associated with percent change in nadir estimated glomerular filtration rate, but not in 1-year estimated glomerular filtration rate.
Conclusions: Non-clamping partial nephrectomy results in a lower percent change in nadir estimated glomerular filtration rate, whereas it carries an estimated glomerular filtration rate change at 1 year that is similar to partial nephrectomy with cold ischemia and warm ischemia. Cold ischemia and warm ischemia provide a similar effect on renal function. Therefore, when hilar clamping is required, minimization of ischemia time is necessary.
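Percent change in eGFR, the functional endpoint used in this study, and GFR-category (CKD-stage) thresholds, used in several of the other series above, are simple derived quantities. A minimal sketch follows, using the standard KDIGO GFR categories and, as a worked example, the median pre- and postoperative values (79 and 69 ml/min) from the elective-indication series earlier in this entry; the helper functions are hypothetical and purely illustrative.

# Percent change in eGFR and KDIGO GFR category, two endpoints used across these studies.
def percent_change(pre_egfr: float, post_egfr: float) -> float:
    return (post_egfr - pre_egfr) / pre_egfr * 100.0

def gfr_category(egfr: float) -> str:
    # Standard KDIGO GFR categories (mL/min/1.73 m^2).
    if egfr >= 90:
        return "G1"
    if egfr >= 60:
        return "G2"
    if egfr >= 45:
        return "G3a"
    if egfr >= 30:
        return "G3b"
    if egfr >= 15:
        return "G4"
    return "G5"

# Median values from the elective-indication series above: 79 ml/min pre- and 69 ml/min postoperatively.
print(percent_change(79, 69))                     # about -12.7%
print(gfr_category(79), "->", gfr_category(69))   # G2 -> G2, i.e. no category upstaging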
Answer: Yes, warm ischemia and ischemia time are still considered predictive factors of poor renal function after partial nephrectomy in the setting of elective indication. Studies have shown that warm ischemia time, particularly when extended beyond a 25-minute threshold, can negatively impact postoperative renal function (PUBMED:27409986). Additionally, ischemia time has been identified as an independent risk factor for renal function deterioration in multivariate analysis, with non- or time-limited clamping techniques being associated with better preservation of renal function (PUBMED:24700310).
A nomogram developed for predicting loss of renal function after partial nephrectomy also includes ischemia time as one of the variables, indicating its significance in the assessment of postoperative renal function decline (PUBMED:30929839). Furthermore, the duration of ischemia has been correlated with deterioration of estimated glomerular filtration rate (eGFR) postoperatively, although the impact on prognosis may require further investigation (PUBMED:30662055).
It is also noted that while the quality and quantity of preserved parenchyma are the main contributors to postoperative renal function, ischemia time seems to have a secondary role, provided that warm ischemia time is limited or ischemia is hypothermic (PUBMED:28723490). Moreover, a study found no significant correlation between ischemia time and renal function when the ischemia time was kept below 30 minutes (PUBMED:36708175).
In summary, while the preservation of renal parenchyma is crucial, warm ischemia and ischemia time are still important factors to consider for renal function outcomes after partial nephrectomy, and efforts should be made to minimize ischemia time where possible (PUBMED:24033600).
Instruction: Do statin medications impact renal functional or oncologic outcomes for robot-assisted partial nephrectomy?
Abstracts:
abstract_id: PUBMED:24934083
Do statin medications impact renal functional or oncologic outcomes for robot-assisted partial nephrectomy? Objective: To evaluate if statin medications (3-hydroxyl-3-methylglutaryl coenzyme A [HMG-CoA] reductase inhibitors) improve either oncologic or renal functional outcomes for patients undergoing robot-assisted partial nephrectomy (RPN).
Patients And Methods: Patients undergoing RPN between March 2008 and October 2013 were evaluated from a prospectively maintained database for statin usage. The rate of perioperative acute kidney injury (AKI), as defined according to the RIFLE criteria, and the progression of chronic kidney disease (CKD) were compared between users and nonusers. Oncologic outcomes and rate of progression were compared between users and nonusers.
Results: One hundred four (31%) of 339 patients were on statin therapy preoperatively and continued this medication peri- and postoperatively. Statin patients were older and had higher rates of comorbidities, including coronary artery disease, diabetes, and hypertension (p<0.0001 for all). The rate of AKI in the statin (16%) and nonstatin patients (14%) (p=0.60) and CKD progression based on Kaplan-Meier estimates (p=0.57) were similar between the two groups. Subgroup analysis of the 271 (80%) patients with hilar clamping also showed similar rates of AKI: 10% in statin users vs 12% in nonusers (p=0.50). Multivariate analysis of factors affecting CKD progression confirmed these findings. Oncologic progression was not affected by statin therapy (p=0.90).
Conclusion: Statin medications do not appear to influence perioperative renal function following RPN, in either clamped or unclamped procedures. These medications may be continued perioperatively, but any effect on renal functional or oncologic outcomes was not elucidated in this study.
abstract_id: PUBMED:32420203
Long-term oncologic outcomes of positive surgical margins following robot-assisted partial nephrectomy. Background: Previous reports on positive surgical margin (PSM) after robot-assisted partial nephrectomy (RAPN) have reached inconsistent conclusions as to the impact of a PSM on oncologic outcomes. We sought to determine the effect of PSM on long-term cancer recurrence and survival outcomes.
Methods: We queried our renal oncology database for patients having undergone RAPN and compared recurrence-free survival (RFS) and overall survival (OS) between patients with PSM and negative surgical margin (NSM). Kaplan-Meier analysis was also performed for RFS and OS for PSM versus NSM.
Results: Of the 432 patients who underwent RAPN we identified 29 (6.7%) patients with PSM and 403 (93.3%) patients with NSM. Median follow-up for the overall cohort was 45.1 months. Three of the 29 patients with PSM and fourteen of the 403 patients with NSM had disease recurrence (P=0.09). RFS at 24, 48, and 72 months was 95.8%, 90%, and 85.5% for patients with NSM and 96.6%, 86.6%, and 80.4% for patients with PSM, respectively (log-rank P value =0.382). OS at 24, 48, and 72 months was 98%, 93.1%, and 89.7% for patients with NSM and 96.3%, 91.2%, and 85.2% for patients with PSM, respectively (log-rank P value =0.584).
Conclusions: While PSM are relatively uncommon, their presence still serves as a potential risk factor for worse oncologic outcomes. In instances of PSM, immediate secondary intervention is most likely unnecessary and more attentive long-term clinical follow-up, especially in patients with high-risk features, may be more advisable.
abstract_id: PUBMED:31100229
Is Robot-assisted Surgery Contraindicated in the Case of Partial Nephrectomy for Complex Tumours or Relevant Comorbidities? A Comparative Analysis of Morbidity, Renal Function, and Oncologic Outcomes. Background: Available comparisons between open partial nephrectomy (OPN) and robot-assisted partial nephrectomy (RAPN) are scarce, incomplete, and affected by non-negligible risk of bias.
Objective: To compare RAPN and OPN.
Design, Setting, And Participants: This was an observational study of 472 patients diagnosed with a cT1-2cN0cM0 renal mass and treated with RAPN or OPN assessed in two prospective institutional databases.
Outcome Measurements And Statistical Analysis: The study outcomes were morbidity, complications, warm ischaemia time, renal function, positive surgical margins, and oncologic outcomes. Propensity score matching for age at diagnosis, gender, Charlson comorbidity index, preoperative estimated glomerular filtration rate (eGFR), single kidney status, tumour size and side, total PADUA score, any individual PADUA score item, and year of surgery was used to account for baseline confounders. The effect of surgical approach was estimated using linear and logistic regressions for continuous and categorical outcomes. An interaction test was used for subgroup analyses.
Results And Limitations: Relative to OPN, RAPN was associated with lower rates for overall (21% vs 36%; p<0.0001) and major (3% vs 9%; p=0.03) complications. This benefit was consistent in patients with high PADUA scores, high CCI, large tumours, and low preoperative eGFR (all p>0.05, interaction test). No difference between the groups was observed for warm ischaemia time, postoperative and 1-yr eGFR, and positive surgical margins (all p>0.05). After median follow-up of 41 mo, there was no difference between the groups for the 5-yr rates of local recurrence-free, systemic progression-free, and disease-free survival (all p>0.05).
Conclusions: RAPN is associated with overall better perioperative morbidity and lower rates of complications, regardless of characteristics such as tumour complexity and patient comorbidity status. Functional and oncologic outcomes are equal after RAPN and OPN.
Patient Summary: Robot-assisted partial nephrectomy is associated with a better morbidity profile than open partial nephrectomy (OPN) and provides the same cancer control and renal function preservation observed after OPN.
abstract_id: PUBMED:23336101
Robot-assisted partial nephrectomy in contemporary practice. Laparoscopic renal surgery is associated with reduced blood loss, shorter hospital stay, enhanced cosmesis, and more rapid convalescence relative to open renal surgery. Laparoscopic partial nephrectomy (LPN) is a minimally invasive, nephron-sparing alternative to laparoscopic radical nephrectomy (RN) for the management of small renal masses. While offering similar oncological outcomes to laparoscopic RN, the technical challenges and prolonged learning curve associated with LPN limit its wider dissemination. Robot-assisted partial nephrectomy (RAPN), although still an evolving procedure with no long-term data, has emerged as a viable alternative to LPN, with favorable preliminary outcomes. This article provides an overview of the role of RAPN in the management of renal cell carcinoma. The clinical indications and principles of surgical technique for this procedure are discussed. The oncological, renal functional, and perioperative outcomes of RAPN are also evaluated, as are complication rates.
abstract_id: PUBMED:36283875
Perioperative outcomes of open and robot-assisted partial nephrectomy in patients with renal tumors of moderate to high complexity. Objective: To compare the perioperative outcomes of patients with complex renal tumors treated with open versus robot-assisted partial nephrectomy.
Methods: This retrospective study included 273 patients diagnosed with localized renal tumors at our institution between January 2007 and October 2020. Patients with moderate to high complexity tumors based on the RENAL nephrometry score were included. Perioperative outcomes were compared between open and robot-assisted partial nephrectomy patients. Remnant renal function was defined as the estimated glomerular filtration rate at 12 months after surgery.
Results: Open and robot-assisted partial nephrectomy were performed in 43 and 77 patients, respectively. There was no significant difference in overall, cancer-specific, recurrence-free, and metastasis-free survival between the two groups. Remnant renal function was significantly better preserved in the open group, and body mass index was identified as an independent predictive factor (odds ratio 3.05, P = 0.017). Neither ischemia nor the type of surgery was related to remnant renal function. The trifecta achievement rate was 51.2% in the open group and 71.4% in the robot-assisted group (P = 0.031), and the incidence of complications was significantly higher in the open partial nephrectomy group (P = 0.0030). Multivariate analysis revealed that open partial nephrectomy was an independent predictive factor for the incidence of complications (odds ratio 3.92, P = 0.0020).
Conclusion: Robot-assisted partial nephrectomy can provide good and acceptable oncological and functional outcomes with fewer complications in patients with more complex renal tumors. Further research is needed to establish appropriate treatment strategies and guidelines in current clinical practice.
abstract_id: PUBMED:26373545
Robot-assisted Partial Nephrectomy for Endophytic Tumors. Robot-assisted partial nephrectomy (RAPN) has gained increasing popularity in the management of renal masses due to its technical feasibility and shorter learning curve with superior perioperative outcomes compared to laparoscopic partial nephrectomy (LPN). Given the accumulation of surgical experience with RAPN, the indication for RAPN has been extended to more challenging, complex cases, such as hilar or endophytic tumors. Renal masses that are completely endophytic can be very challenging to surgeons. These cases are associated with poor recognition of mass extension and a higher risk of inadvertent vascular or pelvicalyceal system injury. As a result, this can lead to potential positive surgical margins, difficulty in performing renorrhaphy, as well as higher perioperative complication rates. There is little evidence on the oncologic and functional outcomes of RAPN in treating endophytic masses. Therefore, the objective of this review is to critically analyze the current evidence and to provide a summary of the outcomes of RAPN for endophytic renal masses.
abstract_id: PUBMED:36294486
Robot-Assisted Partial Nephrectomy Mid-Term Oncologic Outcomes: A Systematic Review. Background: Robot-assisted partial nephrectomy (RAPN) is increasingly used as a therapy option for surgical treatment of cT1 renal masses. Current guidelines equally recommend open (OPN), laparoscopic (LPN), or robotic partial nephrectomy (PN). The aim of this review was to analyze the most representative RAPN series in terms of reported oncological outcomes. (2) Methods: A systematic search of Web of Science, PubMed, and ClinicalTrials.gov was performed on 1 August 2022. Studies were considered eligible if they: included patients with renal cell carcinoma (RCC) stage T1, were prospective studies, randomized clinical trials (RCTs), or retrospective studies, and had patients undergo RAPN with a minimum follow-up of 48 months. (3) Results: Reported positive surgical margin rates were from 0 to 10.5%. Local recurrence occurred in up to 3.6% of patients. Distant metastases were reported in up to 6.4% of patients. 5-year cancer free survival (CFS) estimates rates ranged from 86.4% to 98.4%. 5-year cancer specific survival (CSS) estimates rates ranged from 90.1% to 100%, and 5-year overall survival (OS) estimates rates ranged from 82.6% to 97.9%. (4) Conclusions: Data coming from retrospective and prospective series shows very good oncologic outcomes after RAPN. Up to now, 10-year survival outcomes were not reported. Taken together, RAPN delivers similar oncologic performance to OPN and LPN.
abstract_id: PUBMED:31931607
A Multi-Institutional Analysis of the Effect of Positive Surgical Margins Following Robot-Assisted Partial Nephrectomy on Oncologic Outcomes. Objective: To determine the effect of positive surgical margins (PSMs) on oncologic outcomes following robot-assisted partial nephrectomy (RAPN) and to identify factors that increase the likelihood of adverse oncologic outcomes. Methods: A multi-institutional database of patients who underwent RAPN with complete follow-up data was used to compare recurrence-free survival (RFS) and overall survival (OS) between 42 (5.1%) patients with a PSM and 797 (94.9%) patients with a negative surgical margin. Analysis was performed with univariable and multivariable Cox proportional hazard regression models adjusting for confounding variables. A Kaplan-Meier method was used to evaluate the relationship between PSM and oncologic outcomes (RFS and OS), and the equality of the curves was assessed using a log-rank test. Results: The rate of PSM was 5.1%. RFS at 12, 24, and 36 months was 97.8%, 95.2%, and 92.9%. OS at 12, 24, and 36 months was 98.6%, 97.7%, and 93.3%. PSM was not associated with worse RFS in both univariable and multivariable analyses (hazard ratio [HR] = 1.43; 95% confidence interval [CI] = 0.37, 5.55; p = 0.607). Factors associated with worse RFS include pT3a upstaging (HR = 4.97; 95% CI = 1.63, 15.12; p = 0.005), a higher Charlson comorbidity index (HR = 1.68; 95% CI = 1.20, 2.34; p = 0.002); and advanced clinical stage (cT1a vs cT1b, HR = 4.22; 95% CI = 1.84, 9.68; p = 0.001 vs cT2a, HR = 14.09; 95% CI = 3.85, 51.53; p < 0.001). PSM was not associated with worse OS in both univariable and multivariable analyses (HR = 0.87; 95% CI = 0.26, 2.94; p = 0.821). Higher R.E.N.A.L. nephrometry score was found to be associated with worse OS (HR = 1.26; 95% CI = 1.01, 1.57; p = 0.041). Conclusions: Given the absence of association between PSM and worse oncologic outcomes, patients with PSM following RAPN should be carefully monitored for recurrence rather than undergo immediate secondary intervention. As advanced clinical stage (cT1b, cT2a) and pathologic upstaging (pT3a) were independently associated with disease recurrence, their presence may warrant more attentive postoperative surveillance.
abstract_id: PUBMED:20407566
Robot-assisted laparoscopic partial nephrectomy: Current review of the technique and literature. Aim: To revisit the operative technique and to review the current published English literature on the technique and outcomes following robot-assisted laparoscopic partial nephrectomy (RPN).
Materials And Methods: We searched the published English literature and PubMed for published series of 'robotic partial nephrectomy' (RPN) using the keywords: robot, robot-assisted laparoscopic partial nephrectomy, laparoscopic partial nephrectomy, partial nephrectomy and laparoscopic surgery.
Results: The search yielded 15 major selected series of 'robotic partial nephrectomy'; these were reviewed, tracked and analysed in order to determine the current status and role of RPN in the management of early renal neoplasm(s), as a minimally invasive surgical alternative to open partial nephrectomy. A review of the initial peri-operative outcome of the 350 cases of select series of RPN reported in published English literature revealed a mean operating time, warm ischemia time, estimated blood loss and hospital stay, of 191 minutes, 25 minutes, 162 ml and 2.95 days, respectively. The overall computed mean complication rate of RPN in the present select series was about 7.4%.
Conclusions: RPN is a safe, feasible and effective minimally invasive surgical alternative to laparoscopic partial nephrectomy for early stage (T1) renal neoplasm(s). It has acceptable initial renal functional outcomes without the increased risk of major complications in experienced hands. Prospective randomised, controlled, comparative clinical trials with laparoscopic partial nephrectomy (LPN) are the need of the day. While the initial oncological outcomes of RPN appear to be favourable, long-term data is awaited.
abstract_id: PUBMED:37152053
Perioperative, oncologic, and functional outcomes of robot-assisted partial nephrectomy for special types of renal tumors (hilar, endophytic, or cystic): an evidence-based analysis of comparative outcomes. Purpose: This study aims to perform a pooled analysis to compare the outcomes of robot-assisted partial nephrectomy (RAPN) between complex tumors (hilar, endophytic, or cystic) and non-complex tumors (nonhilar, exophytic, or solid) and evaluate the effects of renal tumor complexity on outcomes in patients undergoing RAPN.
Methods: Four databases were systematically searched, including Science, PubMed, Web of Science, and Cochrane Library, to identify relevant studies published in English up to December 2022. Review Manager 5.4 was used for statistical analyses and calculations. The study was registered with PROSPERO (Registration number: CRD42023394792).
Results: In total, 14 comparative trials, including 3758 patients were enrolled. Compared to non-complex tumors, complex tumors were associated with a significantly longer warm ischemia time (WMD 3.67 min, 95% CI 1.78, 5.57; p = 0.0001), more blood loss (WMD 22.84 mL, 95% CI 2.31, 43.37; p = 0.03), and a higher rate of major complications (OR 2.35, 95% CI 1.50, 3.67; p = 0.0002). However, no statistically significant differences were found between the two groups in operative time, length of stay, transfusion rates, conversion to open nephrectomy and radical nephrectomy rates, estimated glomerular filtration rate (eGFR) decline, intraoperative complication, overall complication, positive surgical margins (PSM), local recurrence, and trifecta achievement.
Conclusions: RAPN can be a safe and effective procedure for complex tumors (hilar, endophytic, or cystic) and provides comparable functional and oncologic outcomes to non-complex tumors.
Systematic Review Registration: https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=394792, identifier CRD42023394792.
Answer: Statin medications do not appear to influence perioperative renal function following robot-assisted partial nephrectomy (RPN), in either clamped or unclamped procedures. The study by PUBMED:24934083 found that the rate of acute kidney injury (AKI) and the progression of chronic kidney disease (CKD) were similar between statin users and nonusers. Additionally, oncologic progression was not affected by statin therapy. Therefore, statin medications may be continued perioperatively, but any effect on renal functional or oncologic outcomes was not elucidated in this study.
Instruction: Do angulated implants increase the amount of bone loss around implants in the anterior maxilla?
Abstracts:
abstract_id: PUBMED:23351760
Do angulated implants increase the amount of bone loss around implants in the anterior maxilla? Purpose: The aim of this study was to evaluate the relation between angulated implants and the bone loss around implants in the anterior maxilla.
Materials And Methods: The subjects studied had a missing tooth in the anterior maxilla and a bone deficiency that required restoration with an angulated dental implant. After mounting the casts on the articulator, the amount of direction was measured with a facebow by calculating the difference between the mean buccopalatal angulation of the 2 adjacent natural teeth and the buccopalatal angulation of the implant abutment to the occlusal plane. Radiography was performed in each patient immediately after loading and repeated a minimum of 36 months after loading.
Results: Fifty-eight subjects who received delayed-loading angulated implants were studied. The results showed that the mean implant angulation was 15.2° and the mean bone resorption was 0.87 mm. Analysis of the data showed a significant correlation between implant follow-up time and bone loss. No correlation was seen between the implant angulation and bone loss. An assessment of predictive factors showed a relation between the implant type and bone loss. The follow-up time had a significant effect on bone loss. The implant angulation did not change bone resorption on the mesial and distal surfaces of the implants.
Conclusions: The angulation of implants was not associated with an increased risk for bone loss, and angulated implants may be a satisfactory alternative to vertical implants to avoid grafting procedures. The type of implant may be an important factor that affects bone resorption, although follow-up time was the strongest predictive factor.
abstract_id: PUBMED:33595178
A 5 to 7-year case series on single angulated implants installed following papilla-sparing flap elevation. Background: Bony concavities at the buccal aspect may cause a distortion between the implant axis and ideal prosthetic axis. Angulated implants can overcome this problem, yet long-term data are lacking. In addition, papilla-sparing incisions have been proposed to reduce tissue loss, yet aesthetic outcomes have not been published.
Purpose: To evaluate the 5 to 7-year outcome of single angulated implants installed following papilla-sparing flap elevation.
Materials And Methods: Patients who had been consecutively treated with a single angulated implant (Co-axis®, Southern Implants, Irene, South Africa) in the anterior maxilla were re-examined after 5 to 7 years. Available data at 1 year (T1) were compared to those obtained at 5 to 7 years (T2).
Results: Twenty out of 22 treated patients (11 females, 9 males, mean age of 52) with 22 implants attended the 5 to 7-year reassessment. All implants survived and stable clinical conditions could be reached with mean marginal bone loss of 1.28 mm at T2. Papilla-sparing flap elevation resulted in Pink Esthetic Score of 9.83 at T1 and 8.23 at T2 (p = 0.072). Mucosal Scarring Index was 4.61 at T1 and 3.50 at T2 (p = 0.165). The overall appearance of scarring significantly improved over time (p = 0.032), yet 59% of the cases still demonstrated scarring at T2. Conclusions: Within the limitations of the study, angulated implants (Co-axis®, Southern Implants) reached stable clinical conditions. Papilla-sparing incisions may not be recommended in aesthetically demanding patients due to high risk of scarring.
abstract_id: PUBMED:29034578
A prospective, split-mouth study comparing tilted implants with angulated connection versus conventional implants with angulated abutment. Background: An angulation of the implant connection could overcome the problems related to angulated abutments.
Purpose: This study compares conventional implants with angulated abutment to tilted implants with an angulated connection.
Materials And Methods: Twenty patients were treated in the edentulous mandible. In the posterior jaw locations, one conventional tilted implant with angulated abutment and one angulated implant without abutment were placed. In the anterior jaw, two conventional implants were placed, one with and one without abutment. Implants were immediately loaded and 3 months later, the final bridge (PFM or monolithic zirconia) was placed.
Results: After a follow-up of 48 months, 17 patients were available for clinical examination. The mean overall marginal bone loss (MBL) was 1.26 mm. No significant differences in implant survival, MBL, periodontal indices, patients' satisfaction, or complications were found between implants restored on abutment or implant level, between the posteriorly located angulated implant and angulated abutment, or between both anterior implants with or without abutment. The posterior implants demonstrated less MBL compared to the anterior implants (P < .001). There was no significant difference in MBL between the implants restored with zirconia or PFM bridges (P = .294). Overall mean pocket depth was 2.83 mm. More plaque was found in the PFM group compared to the full-zirconia group, at the bridge (P = .042) and the implants (P = .029). There was no difference between both materials in pocket depth (P = .635) or bleeding (P = .821). One zirconia bridge fractured, two angulated abutments were replaced, and four loose bridge screws connected to the angulated abutments had to be tightened. Patients were overall satisfied (4.74/5).
Conclusion: An implant with angulated connection may result in a stronger connection but does not affect the marginal bone loss. No difference in MBL was seen between implants restored on abutment or implant level. Zirconia seems to reduce the amount of plaque.
abstract_id: PUBMED:27688395
Radiographic Evaluation of Crestal Bone Loss Around Dental Implants in Maxilla and Mandible: One Year Prospective Clinical Study. Purpose: The aim of the study was to analyze the amount of maxillary and mandibular crestal bone loss around Bredent Sky Blue type of implants of different dimensions one year after implantation.
Materials And Methods: 36 implants of diameter 3.5 x 10 mm were inserted in the maxilla and 12 in the mandible. 52 implants of diameter 4.0 x 8 mm were inserted in the maxilla, and 61 in the mandible (two-stage implant surgery).
Results: No statistically significant differences were found between the right and left side of the maxilla and between the right and left side of the mandible at the implant sites regarding distal and mesial bone losses as shown by analysis of variance (ANOVA).
Conclusion: Statistically significant differences were found between anterior maxilla, posterior maxilla and anterior mandible and posterior mandible at implant sites regarding distal and mesial bone losses as shown by analysis of variance (ANOVA).
abstract_id: PUBMED:37938208
Association between Gingival Biotype and Crestal Bone Loss in Implants Placed in Anterior Maxilla. Background: When bone loss occurs around an implant, it can cause esthetic compromise, which might affect the tissue level design. Thus, bone level design implants are usually preferred if a natural emergence profile is important. The gingival biotype has been identified as a significant factor in the stability of crestal bone.
Aim: The aim of the current study is to analyze the gingival biotype and crestal bone in implants placed in anterior maxilla.
Materials And Methods: A retrospective study was conducted using the case records of patients in University Hospital. Data on the gingival biotype and crestal bone loss in implants placed in anterior maxilla were collected (sample size = 96 patients) and analyzed for association with age and gender by descriptive statistics and chi-square association.
Results: In the thick gingival biotype, 59.3% of the cases showed no crestal bone loss and 5.2% of the patients showed only 1 mm of bone loss, but in the case of the thin gingival biotype, 16.6% of patients had 1 mm of bone loss, 5.2% of them had 2 mm of bone loss, and 1% of them had bone loss of 3 mm and above, with a significant p value of 0.02 (less than 0.05) showing a strong association between gingival biotype and crestal bone loss around implants.
Conclusion: It can be concluded that there exists a significant association between gingival biotype and crestal bone loss around implants placed in anterior maxilla.
abstract_id: PUBMED:28462189
Short dental implants in the posterior maxilla: a review of the literature. The purpose of this study was to perform a literature review of short implants in the posterior maxilla and to assess the influence of different factors on implant success rate. A comprehensive search was conducted to retrieve articles published from 2004 to 2015 using short dental implants with lengths less than 10 mm in the posterior maxilla with at least one year of follow-up. Twenty-four of 253 papers were selected, reviewed, and produced the following results. (1) The initial survival rate of short implants in the posterior maxilla was not related to implant width, surface, or design; however, the cumulative success rate of rough-surface short implants was higher than that of machined-surface implants especially in performance of edentulous dental implants of length <7 mm. (2) While bone augmentation can be used for rehabilitation of the atrophic posterior maxilla, short dental implants may be an alternative approach with fewer biological complications. (3) The increased crown-to-implant (C/I) ratio and occlusal table (OT) values in short dental implants with favorable occlusal loading do not seem to cause peri-implant bone loss. Higher C/I ratio does not produce any negative influence on implant success. (4) Some approaches that decrease the stress in posterior short implants use an implant designed to increase bone-implant contact surface area, providing the patient with a mutually protected or canine guidance occlusion and splinting implants together with no cantilever load. The survival rate of short implants in the posterior edentulous maxilla is high, and applying short implants under strict clinical protocols seems to be a safe and predictable technique.
abstract_id: PUBMED:32597779
Bone fenestration repair with leukocyte-platelet-rich fibrin after placement of implants in the anterior maxilla: a case report. Few reports have been published to date on the management of bone fenestration in the anterior maxilla using leukocyte-platelet-rich fibrin (L-PRF) with deproteinized bovine bone mineral allograft (DBBMA). This case report demonstrates the use of L-PRF associated with DBBMA to repair a bone fenestration after the placement of 2 implants in the anterior maxilla. Placement of 2 osseointegrated implants was planned to replace the missing maxillary central incisors of a patient with bone loss in the buccal region. Reverse treatment planning predicted the fenestration of the buccal cortical plate and exposure of the implants. The implants were placed, and fenestration of the buccal cortical bone around the body of the implants occurred as expected. A mixture of L-PRF and DBBMA, mediated by injectable platelet-rich fibrin (a combination sometimes referred to as sticky bone), was positioned to cover the defect. Cone beam computed tomography 6 months after the intervention showed complete coverage of the fenestration with newly formed bone tissue. The use of L-PRF associated with DBBMA efficiently covered the fenestration and promoted new bone formation.
abstract_id: PUBMED:33795161
Evaluation of hard tissue 3-dimensional stability around single implants placed with guided bone regeneration in the anterior maxilla: A 3-year retrospective study. Statement Of Problem: Guided bone regeneration (GBR) is widely used to reconstruct peri-implant bone defects in the esthetic zone. However, the dimensional stability of this bone-biomaterial composite is not fully understood.
Purpose: The primary aim was to evaluate the hard tissue 3-dimensional (3D) stability around single implants placed with simultaneous GBR by using deproteinized bovine bone mineral (DBBM) in the anterior maxilla and explore possible influencing factors.
Material And Methods: The records of patients who had received implants in the anterior maxilla from January 2015 to March 2016 were reviewed retrospectively. The change in volume and thickness of the facial hard tissue were analyzed. To explore possible influencing factors, the thickness and surface area of facial graft were measured, and the time point at which implants were placed and the healing protocol were recorded. Secondary outcome measures were peri-implant marginal bone loss, bleeding on probing (BOP), and pink esthetic score (PES). Statistical analysis was conducted by using the Student t test, Mann-Whitney U test, Kruskal-Wallis test, or generalized estimating equation analysis (α=.05).
Results: Fifty-five participants were included in this study, and no implants had been lost after 3 years. BOP was present in 10 (18.2%) participants. The mean ±standard deviation PES of all implants for this study was 11.0 ±2.1. The mean ±standard deviation percentage of residual hard tissue volume was 36.9 ±23.5%, with a significant difference found between time points before 9 months (P<.05). Type 3 implant placement (OR=1.449, P=.031) was found to have a higher percentage of residual hard tissue volume. A greater reduction of the facial hard tissue thickness was observed in participants with thicker postoperative facial grafting (OR=1.463, P=.001). No statistically significant difference was found between the facial, palatal, mesial, and distal peri-implant sites in terms of marginal bone loss (P>.05).
Conclusions: Although single-tooth implant placement combined with GBR using DBBM in the anterior maxilla offered satisfactory esthetic and functional outcomes after a 3-year follow-up, significant hard tissue volume and thickness reduction in grafted sites was detected, especially during the initial 9-month postoperative period. This phenomenon may be correlated with the timing of implant placement and the thickness of the facial graft.
abstract_id: PUBMED:34041877
Clinical assessment of pterygoid and anterior implants in the atrophic edentulous maxilla: a retrospective study. Objectives: This study aims to evaluate the short-term clinical outcomes and patient satisfaction of anterior and pterygoid implants in the rehabilitation of edentulous maxilla with posterior atrophy.
Methods: Given a minimum follow-up of 1 year, 25 patients with fixed maxillary rehabilitation over anterior and pterygoid implants were enrolled in this retrospective study. The implant survival rates, peri-implant soft tissue status (including probing depth, modified sulcus bleeding index, and plaque index), marginal bone loss, and patient satisfaction were measured.
Results: The survival rates for anterior and pterygoid implants at 1-year follow-up were 96.5% and 97.8%, respectively (P>0.05). No statistically significant difference in probing depth, modified sulcus bleeding index, and plaque index was observed between the two types of implants (P>0.05). The marginal bone losses of anterior implants were 0.62 ± 0.44 mm (mesial) and 0.61 ± 0.40 mm (distal), and those of pterygoid implants were 0.64 ± 0.46 mm (mesial) and 0.68 ± 0.41 mm (distal). These results showed no statistical difference in mesial and distal sites (P>0.05). Patients indicated a high degree of satisfaction with the full-arch prostheses supported by anterior and pterygoid implants.
Conclusions: For the edentulous maxilla with posterior atrophy, full-arch fixed prostheses supported by anterior and pterygoid implants has an acceptable short-term clinical outcome and excellent patient satisfaction. It may be considered as a predictable and feasible method for maxillary rehabilitation.
abstract_id: PUBMED:21431255
Clinical retrospective study on membrane guided bone tissue regeneration technique in dental implants in the anterior maxilla. Purpose: The purpose of this study was to investigate the effect of Bio-Gide in MGBR (membrane guided bone regeneration) in dental implant therapy in the anterior maxilla.
Methods: Fifty-five cases underwent dental implant therapy in the anterior maxilla with insufficient maxillary anterior bone, including 29 patients with 40 implants embedded using MGBR and 26 patients with 40 implants embedded without using MGBR. The condition of the bone around the implants before implantation and before final restoration was observed and recorded. The thickness of the bone was measured with a vernier caliper, and the density and the bone loss were determined by X-ray examination. The results were analyzed with an X-ray image analysis system (Sidexis) and the SPSS 16.0 software package.
Results: Univariate analysis showed that there was a significant difference in the thickness and density of the bone between the time before implant operation and before restoration (P<0.05). There was a significant difference in the thickness and density of the bone between the groups with and without MGBR (P<0.05).
Conclusions: These results suggest that the thickness and density of the bone in the group with MGBR were better than in the group without MGBR. It is recommended to use MGBR in dental implant therapy in the anterior maxilla with insufficient bone.
Answer: The available evidence from the provided abstracts suggests that angulated implants do not necessarily increase the amount of bone loss around implants in the anterior maxilla. Specifically, one study found no correlation between implant angulation and bone loss, indicating that angulated implants may be a satisfactory alternative to vertical implants to avoid grafting procedures (PUBMED:23351760). Another study reported stable clinical conditions with angulated implants over a period of 5 to 7 years, with mean marginal bone loss of 1.28 mm, although it noted that papilla-sparing incisions may not be recommended in aesthetically demanding patients due to the high risk of scarring (PUBMED:33595178). Additionally, a comparison between conventional implants with angulated abutment and tilted implants with an angulated connection found no significant differences in marginal bone loss (MBL) between the two types (PUBMED:29034578).
However, other factors may influence bone loss around implants. For instance, the gingival biotype has been identified as a significant factor in the stability of crestal bone, with a significant association between gingival biotype and crestal bone loss around implants placed in the anterior maxilla (PUBMED:37938208). Moreover, the use of guided bone regeneration (GBR) techniques and the timing of implant placement can affect the stability of the bone around implants (PUBMED:33795161).
In summary, the studies do not support the notion that angulated implants inherently lead to increased bone loss in the anterior maxilla. Instead, other factors such as implant type, gingival biotype, and surgical techniques may play more critical roles in the stability of crestal bone around implants. |
Instruction: Orchidopexy for undescended testis in England: is it evidence based?
Abstracts:
abstract_id: PUBMED:36763163
Outcome of redo orchidopexy after previous laparoscopic orchidopexy. Purpose: Testicular reascent is a recognised complication of orchidopexy, and redo surgery may be required. In this report, we present our experience of redo orchidopexy after initial laparoscopic surgery.
Methods: Patients who had undergone redo orchidopexy following an initial vessel-sparing (VS) or non-vessel sparing (NVS) laparoscopic orchidopexy between 2005 and 2019 were identified. Outcome data, including complications and testicular size, were recorded.
Results: The series comprised 23 patients (5: initial bilateral surgery with reascent on one side only; 18: unilateral surgery) with a mean age at original surgery of 3.5 years (range 8 months-6 years) and at redo surgery, 4 years (range 1.5-7 years). VS surgery had been undertaken in 15 and NVS in 8. A tension-free scrotal position was achieved in all cases. There were no complications and no patient required orchidectomy. At a minimum of 6-month follow-up after redo surgery, there were no cases of reascent and there was no change in testicular size/volume (based on clinical examination).
Conclusion: Redo orchidopexy is an effective treatment following failed laparoscopic orchidopexy and a scrotal testis can be achieved in all cases. Complete testicular atrophy did not occur, but the risk of partial atrophy could not be accurately quantified.
abstract_id: PUBMED:30386090
Utilization of scrotal orchidopexy for palpable undescended testes among surgeons. Introduction: Scrotal orchidopexy for palpable undescended testicle (UDT) has received attention in the last decade due to its lower morbidity. This study was conducted to determine the frequency and factors related to the use of the scrotal approach in the surgical treatment of palpable UDT among surgeons.
Methods: An observational cross-sectional study was carried out using an online survey, which was sent to different groups of pediatric urologists, pediatric surgeons, and urologists. The survey consisted of questions on demographics as well as surgeons' opinions and experience toward scrotal orchidopexy.
Results: Of 163 respondents, 57 (35.0%) were pediatric surgeons, 98 (60.1%) were pediatric urologists, and 8 (4.9%) were urologists. There were 86 respondents (52.8%) who used the scrotal orchidopexy approach for UDT at any time in their practice. Pediatric urologists tended to use the scrotal orchidopexy approach for UDT more significantly than others (P < 0.001). There were significantly more scrotal orchidopexies for UDT performed by the pediatric urologists throughout their practice and per year compared to others, respectively (P < 0.001). Fifty-two respondents (31.9%) claimed that scrotal orchidopexy is not a good option for their patients, while seven respondents (4.3%) claimed that the procedure was hard to perform.
Discussion: Based on the results of this study, we believe that there is a discrepancy between the advantages and success rate of scrotal orchidopexy reported in the published literature and the utilization of such an approach among surgeons managing palpable UDT in children.
Conclusion: Scrotal orchidopexy is an underutilized approach in the management of palpable UDT in children. Only 52.8% of our respondents used it for UDT. One of the main reasons why scrotal orchidopexy is underutilized is the surgeons' perception that scrotal orchidopexy is not the procedure of choice for their patients, together with their unfamiliarity with the procedure.
abstract_id: PUBMED:35637827
Comparison of Single-Incision Scrotal Orchidopexy Versus Standard Two-Incision Inguinal Orchidopexy in Children With Palpable Undescended Testis. Objective This study compares the operative time and complications including scrotal hematoma and secondary ascent of single-incision scrotal orchidopexy with standard two-incision inguinal orchidopexy in children for the treatment of palpable undescended testis. Methodology An open-label, randomized clinical trial was conducted at the Department of Pediatric Surgery Services Hospital, Lahore, Pakistan, for six months from August 16, 2021, to March 15, 2022. A total number of 266 patients with palpable undescended testis aged 1-10 years were included in this study. Patients were randomized into two groups with an equal number of candidates (n = 133) using the lottery method. In group A, a single-incision scrotal orchidopexy was done while in group B, two-incision inguinal orchidopexy was done. Groups were compared for operative times and frequency of scrotal hematoma formation and secondary ascent one day and one week after the procedure, respectively. Results The mean age of our study children was 2.27+1.36 years; 159 (59.77%) children presented with right-sided palpable undescended testis and 107 (40.23%) children presented with left-sided undescended testis. Mean operative time in groups A and B was 25.35 ± 3.50 min and 45.45 ± 4.55 min, respectively (p < 0.0001). Scrotal hematoma occurred in three (2.3%) patients in group A and in 10 (7.5%) patients in group B. This difference was statistically significant (p = 0.047). In addition, secondary ascent occurred in four (3.0%) patients in group A and two (1.5%) patients in group B. This difference was not statistically significant (p = 0.4). Conclusion Single-Incision scrotal orchidopexy is simple, effective, less time consuming, and has fewer complications in terms of scrotal hematoma and secondary ascent as compared to the two-incision inguinal approach in children of palpable undescended testis.
abstract_id: PUBMED:26816793
Does early orchidopexy improve fertility? Cryptorchidism or undescended testis (UDT) is a common problem in the pediatric male population. While spontaneous testicular descent occurs in the majority of cases, orchidopexy is the definitive treatment in those with remaining cryptorchid testis. A long established sequela to cryptorchidism is reduced fertility in the adult male and recent guidelines have advocated for earlier orchidopexy as studies have shown improvement in fertility rates when surgery is performed before one year of age. Further studies continue to validate these recommendations as recent research demonstrates crucial developmental steps even in very young boys. These steps are critical to complete testicular maturation and a loss of these milestones has increasingly been shown to decrease fertility later in life. This review examines the histological findings, hormonal data, and paternity rates from those who have undergone orchidopexy at varying ages and summarizes current recommendations aimed at preserving fertility as much as possible in this population.
abstract_id: PUBMED:32082634
Unsatisfactory testicular position after inguinal orchidopexy: Is there a role for upfront laparoscopy? Objectives: To examine the role of laparoscopy in managing unsatisfactory testicular position after an open inguinal orchidopexy. We hypothesised that testes that were originally peeping, where short vessels represented a difficulty and testes that only reached a high scrotal position under tension, especially after an initial surgery performed with the appropriate expertise, are candidates for initial laparoscopic dissection. Patients and methods: Nineteen boys with an initial open inguinal orchidopexy, with a mean age of 31 months, were considered. Twelve were then treated by a laparoscopic-assisted orchidopexy technique. Standard laparoscopy was established and utilised to mobilise the spermatic cord from above, then completed by an open inguinal mobilisation. Results: The mean age at surgery was 26 months. The laparoscopic redo surgery took place at a mean interval of 11.9 months after the initial operation. The mean operative time was 72 min. A good position and size of the testis were achieved in all cases, evidenced by ultrasonography at 6 months postoperatively and clinically thereafter. Conclusion: An upfront combined laparoscopic and inguinal approach to redo orchidopexy for recurrent palpable undescended testes is suitable in selected patients. This study identifies the selection criteria and outlines the operative considerations. This laparoscopic-assisted approach is a safe and feasible way to correct unsatisfactory position of the testis, with diminished risk of injury to the vas and vessels, while gaining the maximum possible length by high retroperitoneal dissection. Abbreviation: UDT: undescended testis/testes.
abstract_id: PUBMED:28964408
Frequency of revision orchidopexy in Australia 1995-2014. Background/aim: International criteria currently suggest orchidopexy at 6-12months for congenital undescended testis (UDT). Some children require repeat orchidopexy for recurrent UDT. This study aimed to assess practice in Australia over a 20-year period.
Methods: We examined 20years of Australian orchidopexy data (1995-2014) from the Department of Human Services to explore the national revision orchidopexy rates over time.
Results: The total number of orchidopexy revisions was 890 over 20 years compared with 25,984 primary operations. More than 50% of all primary and revision orchidopexies in 0-14-year-old boys were performed in the major population centers of NSW and Victoria (which hold 52% of the male population of the same age), with a small number of revisions on 15-24-year-old males. The incidence of revision orchidopexy significantly decreased over the 20-year period in boys aged 0-14 years, from 276 operations between 1995 and 1999 to 165 operations between 2010 and 2014 (-53%), compared to a population increase of +15% (p<0.05).
Conclusion: These data demonstrate a decrease in revision orchidopexy since 1995, which may be related to change in referral practice with more children undergoing orchidopexy (primary and revision) by pediatric surgeons over the 20-year period.
Level Of Evidence: Level IV.
Type Of Study: Therapeutic Case Series with no Comparison Group.
abstract_id: PUBMED:22968042
Orchidopexy patterns in Austria from 1993 to 2009. Objective: To evaluate orchidopexy patterns in Austria.
Material And Methods: All boys with cryptorchidism who underwent orchidopexy (n = 19,998) in Austria between 1993 and 2009 were analyzed using the database Austrian Health Information System at the Austrian Federal Research and Planning Institute for Health Care. Regression models were constructed to examine associations between the probability of orchidopexy before 24 months of life and the following parameters: year of birth, federal state of residence, character of area of living (rural/urban) and hospital type.
Results: Average age at operation dropped from 6 to 4.3 years (mean 5.2 years, SD 3.8 years). Total incidence of orchidopexy was continuously rising throughout the study period (p < 0.0001), with an OR of 1.007 (95% C.I.: 1.004; 1.0100) per year. The rate of operations between 0 and 2 years (p < 0.001) and 3-7 years (p < 0.001) increased, while the rate in boys older than 7 years decreased (p < 0.001). Year of birth (p < 0.0001) and place of residence (p < 0.0001 and p < 0.024) are significant predictors for having early orchidopexy.
Conclusion: In Austria the total incidence of orchidopexy is significantly rising. Moreover, the incidence of orchidopexies performed before 24 months of life is constantly rising with significant geographic differences.
abstract_id: PUBMED:28233437
Laparoscopic orchidopexy with transabdominal preperitoneal hernia repair in an adult. We report an adult who underwent laparoscopic orchidopexy and transabdominal preperitoneal hernia repair. The patient was a 53-year-old man who was referred to our hospital for a bulge and pain in his left inguinal area. An abdominal CT scan revealed that the greater omentum was incarcerated in a left inguinal hernia. The patient underwent emergency laparoscopic surgery immediately. After reduction, he was diagnosed with bilateral cryptorchidism and inguinal hernia. After adequate mobilization, pneumoperitoneum was discontinued, and orchidopexy was performed with the Lichtenstein tension-free hernioplasty. One month later, the patient underwent elective laparoscopic orchidopexy with transabdominal preperitoneal hernia repair on his right side. The patient's postoperative course has been uneventful, with no evidence of hernia recurrence to date. This procedure is safe and may be an option for adult patients who desire testis preservation. This may be the first report of laparoscopic hernia repair with orchidopexy.
abstract_id: PUBMED:38362278
Bilateral versus unilateral orchidopexy: IVF/ICSI-ET outcomes. Introduction: Cryptorchidism is a common genital disorder. Approximately 20% of azoospermic or infertile men reported having histories of cryptorchidism. Bilateral cryptorchidism may have been more condemned than unilateral cryptorchidism. Early treatment by orchidopexy is the definitive procedure for cryptorchid patients with cryptorchidism. However, fertility potency after orchidopexy may be adversely affected and assisted reproduction techniques will be required for infertile patients.
Objective: To compare the reproductive outcomes between unilateral and bilateral orchidopexy groups.
Methods: A retrospective cohort study was conducted at a tertiary hospital, including a total of 99 infertile men who underwent orchidopexy to treat cryptorchidism and subsequently underwent their first IVF/ICSI-ET cycle. Men were grouped according to the laterality of their cryptorchidism and the orchidopexy surgeries they received. Fertilization rate and live birth rate were chosen as parameters for evaluating outcomes.
Results: The sperm concentration and viability were significantly higher in the unilateral orchidopexy group than in the bilateral orchidopexy group (28.09 ± 27.99 vs 7.99 ± 14.68, P=0.001; 33.34 ± 22.52 vs 11.95 ± 17.85, P=0.001). The unilateral orchidopexy group showed a lower demand for ICSI (66.07% vs 95.35%, P<0.001). Interestingly, both groups exhibited similar rates of fertilization, clinical pregnancy, live birth and birth defect. The boy birth ratio was lower in the bilateral orchidopexy group as compared to the unilateral orchidopexy group (27.27% vs 58.62%, P=0.026).
Conclusion: A history of bilateral orchidopexy surgery correlates with worse sperm parameters and a higher demand for ICSI as compared to patients with a history of unilateral orchidopexy. However, this does not influence the final live birth rate.
abstract_id: PUBMED:23032098
The risk of failure after primary orchidopexy: an 18 year review. Objective: To review the primary orchidopexy failure rate and outcome of repeat orchidopexy in a tertiary paediatric surgical centre and identify risk factors.
Methods: A prospectively collected and validated audit system was used to identify all boys having a repeat orchidopexy from August 1990 to December 2008 (18 years).
Results: In total, 1538 boys underwent orchidopexy with 1886 testicles operated on. Of these, 348 (22.6%) patients had bilateral cryptorchidism. A need for repeat orchidopexy was identified in 31 boys, resulting in a primary failure rate of 1.6% over the 18 years. Unilateral orchidopexy as the primary operation had a 1.5% failure rate. The failure rate for bilateral cryptorchidism was 1.87% per testicle, rising to 1.93% per testicle when the primary operation was synchronous bilateral orchidopexy. Orchidopexy failure occurred in 9 patients (1.97%) who were under 24 months, 15 (2.67%) who were between 24 and 72 months and 7 (0.8%) over 72 months at the time of first operation.
Conclusion: Possible risk factors for primary orchidopexy failure are bilateral operation and older age at time of operation. Failure in achieving a satisfactory scrotal position (and testicular loss) following orchidopexy has been postulated as a potential surgical standard for revalidation of paediatric surgeons. This study adds important contemporary data to inform that process.
Answer: The provided abstracts do not directly address the question of whether orchidopexy for undescended testis in England is evidence-based. However, they do provide insights into various aspects of orchidopexy practices, outcomes, and trends that can be used to infer whether current practices align with evidence-based guidelines.
For instance, the abstract from PUBMED:36763163 discusses the outcomes of redo orchidopexy after initial laparoscopic surgery, indicating that redo orchidopexy can be effective and that a scrotal testis can be achieved in all cases without complete testicular atrophy. This suggests that redo orchidopexy is a viable option based on the evidence of its effectiveness.
The abstract from PUBMED:30386090 highlights that scrotal orchidopexy for palpable undescended testis is underutilized despite its lower morbidity, as reported by surgeons in an observational study. This could imply a gap between evidence-based recommendations and actual surgical practice.
PUBMED:35637827 presents a comparison between single-incision scrotal orchidopexy and standard two-incision inguinal orchidopexy, showing that the single-incision approach is less time-consuming and has fewer complications. This suggests that evidence supports the single-incision approach as a better option in certain cases.
The abstract from PUBMED:32082634 discusses the role of laparoscopy in managing unsatisfactory testicular position after open inguinal orchidopexy, indicating that laparoscopy can be a safe and feasible option, which is evidence-based.
PUBMED:28964408 and PUBMED:22968042 provide data on the frequency and patterns of orchidopexy in Australia and Austria, respectively, showing trends over time and suggesting that practices may be evolving in response to new evidence.
Lastly, PUBMED:23032098 reviews the primary orchidopexy failure rate and identifies potential risk factors for failure, which can inform evidence-based practice by highlighting areas for improvement.
In summary, while the abstracts do not directly answer the question about England, they collectively suggest that there is an ongoing effort to align orchidopexy practices with evidence-based guidelines, although there may be variations and gaps in the application of this evidence in clinical practice. |
Instruction: Can human mannequin-based simulation provide a feasible and clinically acceptable method for training tracheostomy management skills for speech-language pathologists?
Abstracts:
abstract_id: PUBMED:24686737
Can human mannequin-based simulation provide a feasible and clinically acceptable method for training tracheostomy management skills for speech-language pathologists? Purpose: Workplace training for tracheostomy management is currently recognized to be inconsistent and insufficient. A novel approach, using technology-enhanced simulation, may provide a solution to training tracheostomy management skills by providing a consistent, time-efficient, and risk-free learning environment. The current research evaluated clinicians' tracheostomy skills acquisition after training in a simulated learning environment and explored changes in clinicians' confidence and perceptions after the experience.
Method: Forty-two clinicians with no or low levels of tracheostomy skill attended one of six 1-day simulation courses. The training involved both part-task skill learning and immersive simulated scenarios. To evaluate clinicians' acquisition of manual skills, performance of core tasks during the scenarios was assessed by independent observers. Questionnaires were used to examine perceived outcomes, benefits, and perceptions of the learning environment at pre-, post-, and 4 months post-training.
Results: Only 1 clinician failed to successfully execute all core practical tasks. Clinicians' confidence increased significantly (p < .05) from pre- to post-workshop and was maintained to 4 months post-workshop across most parameters. All clinicians reported positive perceptions regarding their learning outcomes and learning in a simulated environment.
Conclusion: These findings validate the use of simulation as a clinical training medium and support its future use in tracheostomy competency-training pathways.
abstract_id: PUBMED:33356717
Tracheostomy management by speech-language pathologists in Sweden. Purpose: Speech-language pathologists' (SLPs) role in tracheostomy management is well described internationally. Surveys from Australia and the United Kingdom show high clinical consistency in SLP tracheostomy management, and that practice follows guidelines, research evidence and protocols. Swedish SLPs work with tracheostomised patients; however, the content and extent of this practice, and how it compares to international research, is unknown. This study reports how SLPs in Sweden work with tracheostomised patients, investigating (a) the differences and similarities in SLPs' tracheostomy management and (b) the facilitators and barriers to tracheostomy management, as reported by SLPs. Methods: A study-specific, online questionnaire was completed by 28 SLPs who had managed tracheostomised patients during the previous year. The study was conducted in 2018, before the Covid-19 pandemic. The answers were analysed for exploratory descriptive comparison of data. Content analyses were made on answers from open-ended questions. Results: Swedish SLPs manage tracheostomised patients, both for dysphagia and communication. At the time of the study, the use of protocols and guidelines was limited and SLPs were often not part of a tracheostomy team. Speech-language pathologists reported that the biggest challenges in tracheostomy management were (a) collaboration with other professionals, (b) unclear roles and (c) self-perceived inexperience. Improved collaboration with other professionals and clearer roles were suggested as ways to facilitate team tracheostomy management. Conclusions: This study provides insight into SLP tracheostomy management in Sweden, previously uncharted. Results suggest that improved collaboration, further education and clinical training would be beneficial for a clearer and more involved SLP role in tracheostomy management.
abstract_id: PUBMED:35241384
Simulation-based training in ear, nose and throat skills and emergencies. Objectives: The aim of the study was to compare lecture-based teaching and simulation-based hybrid training for ENT induction and objectively assess the performance of trainees in a simulated environment.
Methods: This is a prospective interventional study that included 60 interns in their rotatory internship with no prior exposure to ENT emergencies. The interns came in batches of 5-6 for their 15-day ENT postings. On the first day, a pre-test questionnaire was administered, lecture-based teaching on three scenarios was delivered, and each intern was allocated to one of three simulation groups: Group A (Tracheostomy group), Group B (Nasogastric tube group), and Group C (Epistaxis group). Hands-on simulation training was given only to the assigned group. At the end of the 15 days, a post-test questionnaire and an objective assessment of the three scenarios in a simulated environment were conducted. The same training was repeated for each batch of participants who attended the posting.
Results: The participants showed significant improvement in post-test scores in all three scenarios (p < 0.05), and these improvements were marked in those who had received simulated training. On comparing simulation scores, the participants who received hands-on training on a particular scenario outperformed the others (p < 0.05).
Conclusion: Simulation-based training improves cognition and overall confidence in managing ENT skills and emergencies. In simulation training, objective and standardized assessment is the key to achieving specific learning objectives and improving psychomotor and cognitive skills.
Level Of Evidence: II.
abstract_id: PUBMED:33884139
Speech-Language Pathologists' Role in the Multi-Disciplinary Management and Rehabilitation of Patients with Covid-19. Respiratory and neurological complications in patients in various stages of COVID-19 emphasize the role of speech-language pathologists in the assessment and management of swallowing and communication deficits in these patients. The speech-language pathologist works within a multidisciplinary team to identify these deficits, and aims to improve swallowing, nutrition, hydration, speech, and quality of life in the medical settings. This paper describes the unique symptoms and complications associated with COVID-19 that require speech-language pathologist services in medical (acute care, inpatient, and outpatient rehabilitation) facilities. The speech-language pathologist is primarily responsible for dysphagia screening and diagnosis in the acute care units, dysphagia and tracheostomy management in the inpatient units, and swallowing, speech and voice rehabilitation and neurocognitive management in the outpatient units. This paper also discusses the current therapeutic services and the precautions that speech-language pathologists must take to reduce transmission of the virus.
abstract_id: PUBMED:25833074
Developing clinical skills in paediatric dysphagia management using human patient simulation (HPS). Purpose: The use of simulated learning environments to develop clinical skills is gaining momentum in speech-language pathology training programs. The aim of the current study was to examine the benefits of adding Human Patient Simulation (HPS) into the university curriculum in the area of paediatric dysphagia.
Method: University students enrolled in a mandatory dysphagia course (n = 29) completed two, 2-hour HPS scenarios: (a) performing a clinical feeding assessment with a medically complex infant; and (b) conducting a clinical swallow examination (CSE) with a child with a tracheostomy. Scenarios covered technical and non-technical skills in paediatric dysphagia management. Surveys relating to students' perceived knowledge, skills, confidence and levels of anxiety were conducted: (a) pre-lectures; (b) post-lectures, but pre-HPS; and (c) post-HPS. A fourth survey was completed following clinical placements with real clients.
Result: Results demonstrate significant additive value in knowledge, skills and confidence obtained through HPS. Anxiety about working clinically was reduced following HPS. Students rated simulation as very useful in preparing for clinical practice. Post-clinic, students indicated that HPS was an important component in their preparation to work as a clinician.
Conclusion: This trial supports the benefits of incorporating HPS as part of clinical preparation for paediatric dysphagia management.
abstract_id: PUBMED:31566861
Evaluation of a tracheostomy education programme for speech-language therapists. Background: Tracheostomy management is considered an area of advanced practice for speech-language therapists (SLTs) internationally. Infrequent exposure and limited access to specialist SLTs are barriers to competency development.
Aims: To evaluate the benefits of postgraduate tracheostomy education programme for SLTs working with children and adults.
Methods & Procedures: A total of 35 SLTs participated in the programme, which included a 1-day tracheostomy simulation-based workshop. Before the workshop, SLTs took an online knowledge quiz and then completed a theory package. The workshop consisted of part-task skill learning and simulated scenarios. Scenarios were video recorded for delayed independent appraisal of participant performance. Manual skills were judged as (1) completed successfully, (2) completed inadequately/needed assistance or (3) lost opportunity. Core non-medical skills required when managing a crisis situation and overall performance were scored using an adapted Ottawa Global Rating Scale (GRS). Feedback from participants was collected and self-perceived confidence rated prior, immediately post and 4 months post-workshop.
Outcomes & Results: SLTs successfully performed 94% of manual tasks. Most SLTs (29 of 35) scored > 5 of 7 on all elements of the adapted Ottawa GRS. Workshop feedback was positive, with significant increases in confidence ratings post-workshop that were maintained at 4 months.
Conclusions & Implications: Postgraduate tracheostomy education, using a flipped-classroom approach and low- and high-fidelity simulation, is an effective way to increase knowledge, confidence and manual skill performance in SLTs across patient populations. Simulation is a well-received method of learning.
abstract_id: PUBMED:30818130
Simulation-based education to improve emergency management skills in caregivers of tracheostomy patients. Introduction: Children with tracheostomies are medically complex and may be discharged with limited and variably trained home nursing support. When faced with emergencies at home, caregivers must often take the lead role in management, and many lack experience with troubleshooting these emergencies prior to initial discharge.
Methods: A high-fidelity simulation-based tracheostomy education program was designed using a programmable mannequin (Gaumard HAL S3004 one-year-old pediatric simulator). At the conclusion of our standard education program, caregivers completed three simulation scenarios: desaturation, mucus plugging, and dislodgement. A trained simulation facilitator graded performance. A self-assessment tool was used to analyze comfort with emergency management at the beginning of training, before and after simulation. Caregivers rated confidence using a 10 cm visual analog scale. All participants completed a post-simulation debriefing session.
Results: 39 caregivers completed all three scenarios and returned pre- and post-simulation self-assessments. Mean scores from the caregiver self-assessments increased for all three scenarios, with mean increases of 9 mm for desaturation, 16 mm for mucus plugging, and 10 mm for decannulation. Two patterns of responses emerged: caregivers whose confidence increased progressively through training, and caregivers who initially rated their confidence highly but whose confidence decreased as the complexity of true emergency management became apparent. All participants found the simulations to be realistic and helpful.
Discussion: High-fidelity simulation training allows for realistic exposure to trach-related emergencies. Many caregivers overestimate their ability to handle emergencies and gain important insight through simulation.
Implications For Practice: Identification of skills and knowledge gaps prior to discharge allows for targeted re-education in emergency management.
abstract_id: PUBMED:18663110
Preparation, clinical support, and confidence of speech-language pathologists managing clients with a tracheostomy in Australia. Purpose: To describe the preparation and training, clinical support, and confidence of speech-language pathologists (SLPs) in relation to tracheostomy client care in Australia.
Method: A survey was sent to 90 SLPs involved in tracheostomy management across Australia. The survey contained questions relating to preparation and training, clinical support, and confidence.
Results: The response rate was high (76%). The majority of SLPs were pursuing a range of professional development activities, had clinical support available, and felt confident providing care to clients with tracheostomies. Despite these findings, 45% of SLPs were not up-to-date with evidence-based practice, less than 30% were knowledgeable about advances in tracheostomy tube technology, and only 16% felt they worked as part of an optimal team. Only half were confident and had clinical support for managing clients who were ventilated. Most (88%) believed additional training opportunities would be beneficial.
Conclusions: The current data highlight issues for health care facilities and education providers to address regarding the training and support needs of SLPs providing tracheostomy client care.
abstract_id: PUBMED:35167432
Perspectives on speech and language pathology practices and service provision in adult critical care settings in Ireland and international settings: A cross-sectional survey. Purpose: Patients admitted to critical care (CC) are at risk of impaired swallowing and communication function. Speech-language pathologists (SLPs) play an important role in this context. In Ireland and internationally, speech-language pathology CC guidelines are lacking, with possible variations in practice. The aim was to compare clinical practices in dysphagia, communication and tracheostomy management among SLPs working in adult CC units in Ireland and internationally, and to explore their perspectives on training, skills and resources.
Method: Participants were SLPs working in CC. An international online survey sought information on (i) SLP workforce demographics and staffing levels, (ii) current dysphagia and communication assessment and management practices, (iii) practices and perspectives on training, skills and resources.
Result: 366 responses were received across 29 countries. 18.03% (66/366) of these respondents worked in Ireland. Findings showed similarities and differences in practices. Total CC SLP whole-time equivalent (WTE) at each staff grade was lower (mean difference: -0.21 to -0.65 WTE, p < .001) than desired for optimal service delivery. Negative effects of under-staffing were reported. The recommendation that all tracheostomised patients receive SLP input was unmet in 66% (220/334) of services.
Conclusion: SLP input in CC is limited in terms of dedicated posts, multidisciplinary team (MDT) involvement, consistent management approaches and training opportunities internationally. Implications of findings are discussed.
abstract_id: PUBMED:36258878
Tracheotomy care simulation training program for inpatient providers. Objectives: Tracheotomy complications can be life-threatening. Many of these complications may be avoided with proper education of health care providers. Unfortunately, access to high-quality tracheotomy care curricula is limited. We developed a program to address this gap in tracheotomy care education for inpatient providers. This study aimed to assess the efficacy of this training program in improving trainee knowledge and comfort with tracheotomy care.
Methods: The curriculum includes asynchronous online modules coupled with a self-directed hands-on simulation activity using a low-cost tracheotomy care task trainer. The program was offered to inpatient providers including medical students, residents, medical assistants, nurses, and respiratory therapists. Efficacy of the training was assessed using pre-training and post-training surveys of learner comfort, knowledge, and qualitative feedback.
Results: Data were collected on 41 participants. After completing the program, participants exhibited significantly improved comfort in performing tracheotomy care activities and a 15% improvement in knowledge scores, with large effect sizes for both measures and greater gains among those with little prior tracheotomy care experience.
Conclusion: This study has demonstrated that completion of this integrated online and hands-on tracheotomy simulation curriculum training increases comfort and knowledge, especially for less-experienced learners. This training addresses an important gap in tracheotomy care education among health care professionals with low levels of tracheotomy care experience and ultimately aims to improve patient safety and quality of care. This curriculum is easily transferrable as it requires only access to the online modules and low-cost simulation materials and could be used in other hospitals, long-term care facilities, outpatient clinics, and home settings.
Level Of Evidence: 4.
Answer: Yes, human mannequin-based simulation can provide a feasible and clinically acceptable method for training tracheostomy management skills for speech-language pathologists (SLPs). Research has shown that simulation training can significantly increase clinicians' confidence in tracheostomy skills from pre- to post-workshop, with the increased confidence maintained up to four months post-workshop. Clinicians reported positive perceptions regarding their learning outcomes and the simulated learning environment (PUBMED:24686737). Additionally, simulation-based training has been found to improve cognition and overall confidence in managing ear, nose, and throat skills and emergencies, which can be relevant to tracheostomy management (PUBMED:35241384).
Furthermore, simulation-based education has been effective in improving emergency management skills in caregivers of tracheostomy patients, which suggests its applicability to training healthcare professionals as well (PUBMED:30818130). A study on the use of Human Patient Simulation (HPS) in the university curriculum for paediatric dysphagia management, which included scenarios with a child with a tracheostomy, demonstrated significant additive value in knowledge, skills, and confidence obtained through HPS (PUBMED:25833074).
Moreover, a postgraduate tracheostomy education program for SLPs that included simulation-based workshops resulted in significant increases in confidence ratings and successful performance of manual tasks, indicating the effectiveness of simulation in tracheostomy education (PUBMED:31566861). Lastly, a tracheotomy care simulation training program for inpatient providers showed that participants exhibited significantly improved comfort in performing tracheotomy care activities and improved knowledge scores after completing the program (PUBMED:36258878).
In summary, the evidence supports the use of simulation as a valuable training tool for SLPs in tracheostomy management, providing a consistent, time-efficient, and risk-free learning environment that enhances skill acquisition, confidence, and clinical performance. |
Instruction: Were less disabled patients the most affected by 2003 heat wave in nursing homes in Paris, France?
Abstracts:
abstract_id: PUBMED:16234262
Were less disabled patients the most affected by 2003 heat wave in nursing homes in Paris, France? Objective: To analyse the change of mortality rates (MRs) and their contributing medical factors among nursing home patients during the 2003 heat wave in France.
Methods: A retrospective observational study was conducted in all nursing homes of the Assistance-Publique-Hôpitaux de Paris (AP-HP), France's largest public hospital group. All AP-HP nursing home patients (4,403) who were institutionalized in May 2003 were included. Patients' MRs across three periods (before, during and after the August 2003 heat wave) were compared according to their demographic characteristics, level of dependence and medical condition.
Results: The MR increased from 2.2 per cent person-months (ppm) (1.9-2.4) before the heat wave up to 9.2 ppm (8.0-10.4) during the heat wave and back to 2.4 ppm (2.2-2.7) after the heat wave. MRs before the heat wave were higher among highly dependent patients compared to those less dependent [mortality rate ratio (MRR) = 2.66 (1.69-4.21)]. This difference disappeared during the heat wave [MRR = 1.28 (0.91-1.81)] and appeared again after the heat wave [MRR = 2.21 (1.52-3.23)]. The same pattern was observed for several medical conditions, such as severe malnutrition or swallowing disorders.
Conclusion: These results suggest that medical care during heat wave has been directed towards more fragile patients, helping to limit deaths in this group. Less frail patients made the largest contribution to excess mortality during the heat wave. During extreme weather conditions, specific attention should be paid not only to frail persons, but to all the elderly community.
abstract_id: PUBMED:14975035
Unprecedented heat-related deaths during the 2003 heat wave in Paris: consequences on emergency departments. In August 2003, France sustained an unprecedented heat wave that resulted in 14,800 excess deaths. The consequences were maximal in the Paris area. The Assistance Publique-Hôpitaux de Paris reported more than 2600 excess emergency department visits, 1900 excess hospital admissions, and 475 excess deaths despite a rapid organizational response. Indeed, only simple preventive measures taken before hospital admission can reduce the mortality that mostly occurred at home and in nursing homes.
abstract_id: PUBMED:16830967
Excess deaths during the August 2003 heat wave in Paris, France. Background: During the August 2003 heat wave in France, almost 15,000 excess deaths were recorded. Paris was severely affected, with an excess death rate of 141%. This study had two aims: to identify individual factors associated with excess deaths during a heat wave in an urban environment and to describe the spatial distribution of deaths within the French capital.
Methods: The study population included all people who died at home between August 1st and 20th, 2003 (N=961). We identified factors associated with excess deaths by comparing the sociodemographic characteristics of the study population with those of people who died at home during the same period in reference years (2000, 2001, 2002) (N=530). Spatial differences were analysed by calculating comparative mortality rates within Paris during August 2003. Mortality ratio was determined to demonstrate temporal variations in mortality between the heat wave period and reference years.
Results: The major factors associated with excess death were: age over 75 years (adjusted OR = 1.44 (1.10-1.90)), being female (adjusted OR = 1.43 (1.11-1.83)) and not being married (adjusted OR = 1.63 (1.23-2.15)), particularly for men. Being a foreigner appeared to be a protective factor for women. Comparative mortality rates by neighbourhood showed a gradient in excess deaths from North-West to South-East. The mortality ratio was 5.44 (5.10-5.79), with very high rates of excess death in the South (12th, 13th, 14th and 15th arrondissements).
Conclusion: The August 2003 heat wave in Paris was associated with an exceptional increase in mortality rates and with changes in both the characteristics of those dying and the spatial distribution of mortality. Understanding the effects of a heat wave on mortality can probably be improved by an analysis of risk at two levels: individual and contextual.
abstract_id: PUBMED:15217773
Effect of the August 2003 heat wave in France on a hospital biochemistry laboratory's activity in Paris. In August 2003, France sustained an exceptional heat wave. Heat-related pathologies (dehydration, heat stroke, cardiovascular diseases) were responsible for additional biological analysis orders at the Saint-Antoine Hospital biochemistry laboratory in Paris from 4 to 18 August, compared with the same period in 2002. Variations were: +17.6% for analysis orders, +30.1% for ionograms, +28.9% for plasma troponin I and +58.6% for blood gas analyses. The proportions of women and of patients older than 75 years were higher in August 2003. Analysis of the biochemistry results showed a higher frequency of elevated plasma sodium, creatinine and troponin in 2003, confirming that most patients admitted during the heat wave were affected by heat-related diseases. Finally, the excess laboratory activity was handled and quality was maintained, in spite of reduced staff and unusual climatic conditions.
abstract_id: PUBMED:14663397
Descriptive study of the patients admitted to an intensive care unit during the heat wave of August 2003 in France. Introduction: Several thousand deaths were attributed to heat stroke during August 2003 in France. To date, only a few studies have analyzed the prognosis in the intensive care unit (ICU) of the most severely hyperthermic patients.
Method: Descriptive observational study of the patients admitted to the intensive care unit at the Lariboisière hospital in Paris, for heat stroke defined by an elevated core body temperature above 40 degrees C with central nervous system dysfunction, in the absence of other etiologies explaining the hyperthermia.
Results: In the Lariboisière hospital, elevations in the ICU (+143%) and hospital (+191%) mortality rates were recorded during August 2003, in comparison with August 2002. Fifteen patients (10 men, 5 women, median age: 57 years) were admitted to the ICU for heat stroke between the 4th and 14th of August 2003. Seven of them (47%) died. On admission, the occurrence of a pre-hospital cardiac arrest, the presence of coagulation abnormalities (reduction in prothrombin time and in platelet count) or of an elevation in plasma lactate concentration were significantly associated with the risk of death in the ICU. Conversely, age, body temperature, coma depth on admission and convulsions were not predictive of death. Neurological after-effects (cerebellar syndrome, polyneuropathy and residual brain damage) were noted in 50% of the survivors.
Discussion: Although it is possible that heat alone precipitated the death of very sick people, our study clearly showed that young and previously able-bodied patients died of heat stroke, and suggests that the 2003 death rate may have increased secondary to the heat wave. Moreover, it is still difficult at present to fully appreciate the long-term consequences for survivors who presented with serious neurological after-effects.
Conclusion: The August 2003 heat wave resulted in an elevation of the hospital and ICU death rates in the Lariboisière hospital in Paris. Despite adequate cooling and supportive therapies, the mortality of patients admitted to the ICU for heat stroke remained elevated and the neurological after effects severe. These preliminary results should be confirmed by larger cohort studies.
abstract_id: PUBMED:15461047
The heat wave of August 2003: what happened? A heat wave of exceptional intensity occurred in France in August 2003; 2003 was the warmest of the last 53 years in terms of minimal, maximal and average temperatures, and in terms of duration. In addition, high temperatures and sunshine, causing the emission of pollutants, significantly increased the atmospheric ozone level. Some epidemiological studies were rapidly implemented during the month of August in order to assess the health impact of this heat wave. Excess mortality was estimated at about 14,800 additional deaths. This is equivalent to a total mortality increase of 60% between August 1st and 20th, 2003 (Inserm survey). Almost the whole country was affected by this excess mortality, even in locations where the number of very hot days remained low. Excess mortality clearly increased with the duration of extreme temperatures. These studies also described the features of heat-related deaths. They showed that the death toll was at its highest among seniors and suggested that less autonomous, disabled or mentally ill people were more vulnerable. They therefore provided essential information for the setting up of an early warning system in conjunction with emergency departments. The public health impact of the summer 2003 heat wave in various European countries was also assessed. Heat waves of differing intensity had occurred at different times in many countries, each time with excess deaths, but it does seem that France was the most affected country. However, implementation of standardized methods of data collection across all countries is necessary to allow further comparisons; collaborative studies will be conducted to this end. After these first descriptive studies, further etiologic studies on risk factors and heat-related deaths were launched and are now in progress. Considering the health impact of the heat wave, national health authorities decided to launch a Heat Wave National Plan including a provisional Heat Watch Warning System (HWWS) for 2004. Developed in collaboration with Météo France, this HWWS is based upon an analysis of historical daily mortality data and meteorological indicators in 14 French cities in order to define the best indicators and triggers. The public health impact of the heat wave of August 2003 was major. This exceptional event raises questions about anticipating phenomena which are difficult to predict. The collaborative efforts and the set of actions and studies implemented in a context of emergency are now useful for the setting up of early warning strategies and thus efficient prevention.
abstract_id: PUBMED:15656355
Epidemiology and heat waves: analysis of the 2003 episode in France. The heat wave that struck France in 2003 was accompanied by an estimated 15,000 excess deaths. This paper stresses the difficulties of the epidemiology of such an event. The relevant clinical and biological information is incomplete or even inaccessible, and many of the deaths are due to multiple factors. The data presently available indicate that the deaths occurred in persons who were already vulnerable, and that the heat wave caused a five- to eight-month loss of lifetime for the affected individuals. There is a noteworthy similarity between the profiles of this exceptional summer mortality surge and those of many past winters, when similar or larger excess mortalities have occurred without as yet eliciting much public attention.
abstract_id: PUBMED:35670391
The medical and social characteristics of disabled children residing in nursing homes of the Russian Federation. The purpose of the study is to identify and systematize urgent problems and proposals for implementing innovations in the medical and social care of disabled children in nursing homes. The analysis of official data and the results of an epidemiological study of disabled children aged 0-17 years residing in nursing homes made it possible to identify the dynamics of changes in the age structure of the observed contingent, the causes of disability, and the accessibility and quality of medical care and psychological-pedagogical assistance. It was established that the total number of disabled children living in nursing homes, and their share among all disabled children, is stable. The most common pathologies and causes of disability are mental and behavioral disorders, diseases of the nervous system, congenital anomalies, deformities and chromosomal abnormalities, and diseases of the eye and its adnexa. Insufficient accessibility and low quality of various types of medical care (preventive examination, specialized, hospital, rehabilitation and palliative care) were established. In organizing the education of these children, difficulties arose from the lack of special conditions and technical means and from the limited availability and qualifications of psychological and pedagogical specialists. In most cases, residents live in buildings with a high degree of technical wear, making comprehensive implementation of state program requirements impossible. The article presents a set of proposals, including consolidating the subordination of medical support to regional medical centers, introducing modern educational technologies for disabled children, improving the competence of psychological and pedagogical personnel, conducting an external audit of the technical conditions and facilities of nursing homes, and making wider use of the potential of non-governmental organizations.
abstract_id: PUBMED:20093248
Heat-related mortality in residents of nursing homes. Background: in population-based studies, age and morbidity were associated with heat-related mortality. The nursing home population reveals both factors and may represent a highly vulnerable subgroup. Therefore, temperature-mortality relationship was examined in residents of nursing homes.
Methods: the association between daily ambient maximum temperature and mortality was analysed in 95,808 nursing home residents in southwest Germany between 2001 and 2005. Time series analyses were applied across age groups, sex and functional abilities. In addition, excess mortality was determined for the 2003 heat wave.
Results: mortality risk was lowest at maximum temperatures between 16 and 25.9 degrees Celsius. Risk increased by 26% and 62% on days with maximum temperatures of 32.0-33.9 and of 34 or more degrees Celsius, respectively. In August 2003, heat caused >400 additional deaths in the observed population and was followed by only a moderate mortality displacement in the following months. The excess number of deaths during the heat wave was particularly high in residents aged ≥90 years and in residents with higher care needs.
Conclusion: high ambient temperature was associated with an increased mortality risk in all analysed subgroups of the nursing home population. Medical competence and supervision are available in nursing homes and should, therefore, be favourable preconditions for the implementation of preventive measures.
abstract_id: PUBMED:19633540
Increase in out-of-hospital cardiac arrest attended by the medical mobile intensive care units, but not myocardial infarction, during the 2003 heat wave in Paris, France. Objectives: To address the association between the 2003 heat wave in Paris (France) and the occurrence of out-of-hospital cardiac arrest.
Design: An analysis of the interventions of the medical mobile intensive care units of the City of Paris for out-of-hospital cardiac arrest and prehospital myocardial infarctions, which were routinely and prospectively computerized from January 1, 2000, to December 31, 2005.
Setting: City of Paris, France.
Patients: Participants were consecutive victims of witnessed out-of-hospital cardiac arrest due to heart disease and of ST-segment elevation myocardial infarction (STEMI) aged ≥18 years, who were attended by the medical mobile intensive care units (MICUs) of the City of Paris from January 1, 2000, to December 31, 2005.
Interventions: None.
Measurements And Main Results: The numbers of out-of-hospital cardiac arrests and of STEMIs during the 2003 heat wave period (August 1 to August 14) were compared (Poisson regression analysis) with the respective average numbers during the same period in reference years 2000-2002 and 2004-2005, when there was no heat wave. Mean ages of the 3049 patients experiencing out-of-hospital cardiac arrest and the 2767 patients experiencing STEMI attended by the MICUs during the study period were 64.3 ± 18.0 and 65.2 ± 15.4 years, respectively, and two thirds were males. During the heat wave period, the number of out-of-hospital cardiac arrests (n = 40) increased 2.5-fold compared with the reference periods (n = 81 for 5 yrs; p < .001); this corresponded to an estimated relative rate of out-of-hospital cardiac arrests of 2.34 (95% confidence interval, 1.60-3.41), after adjustment for age and for gender. This increase was observed in both genders (p for interaction with gender = .48) but only in those who were aged ≥60 years (p for interaction with age = .005). No variation was found for myocardial infarctions during the heat wave.
Conclusions: These data suggest that a heat wave may be associated with an increased risk of sudden cardiac death in the population.
Answer: Yes, less disabled patients were the most affected by the 2003 heat wave in nursing homes in Paris, France. A retrospective observational study conducted in all nursing homes of the Assistance-Publique-Hôpitaux de Paris (AP-HP) found that mortality rates (MRs) increased significantly during the heat wave, with less frail patients contributing the most to excess mortality. Before the heat wave, MRs were higher among highly dependent patients compared to those less dependent. However, this difference disappeared during the heat wave, suggesting that medical care during the heat wave was directed towards more fragile patients, helping to limit deaths in this group. As a result, less frail patients made the largest contribution to excess mortality during the heat wave (PUBMED:16234262). |
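As a purely illustrative check of the magnitude reported in PUBMED:16234262, the point estimates quoted in that abstract can be restated as simple rate ratios. The short LaTeX sketch below uses only the published figures (2.2 and 9.2 deaths per cent person-months, and the dependence-specific MRRs), not the original patient-level data, so it is an arithmetic restatement rather than a re-analysis.
\[
\text{rate ratio}_{\text{during vs. before}} = \frac{\text{MR}_{\text{during}}}{\text{MR}_{\text{before}}} = \frac{9.2\ \text{ppm}}{2.2\ \text{ppm}} \approx 4.2
\]
\[
\text{MRR}_{\text{high vs. low dependence}}:\quad 2.66\ \text{(before)} \;\rightarrow\; 1.28\ \text{(during)} \;\rightarrow\; 2.21\ \text{(after)}
\]
The roughly four-fold overall rise in mortality, together with the collapse of the dependence-related MRR towards 1 during the heat wave, is the quantitative pattern behind the conclusion that less dependent residents accounted for most of the excess deaths.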
Instruction: Does childhood attention-deficit/hyperactivity disorder predict risk-taking and medical illnesses in adulthood?
Abstracts:
abstract_id: PUBMED:23357442
Does childhood attention-deficit/hyperactivity disorder predict risk-taking and medical illnesses in adulthood? Objective: To test whether children with attention-deficit/hyperactivity disorder (ADHD), free of conduct disorder (CD) in childhood (mean = 8 years), have elevated risk-taking, accidents, and medical illnesses in adulthood (mean = 41 years); whether development of CD influences risk-taking during adulthood; and whether exposure to psychostimulants in childhood predicts cardiovascular disease. We hypothesized positive relationships between childhood ADHD and risky driving (in the past 5 years), risky sex (in the past year), and between risk-taking and medical conditions in adulthood; and that development of CD/antisocial personality (APD) would account for the link between ADHD and risk-taking. We report causes of death.
Method: Prospective 33-year follow-up of 135 boys of white ethnicity with ADHD in childhood and without CD (probands), and 136 matched male comparison subjects without ADHD (comparison subjects; mean = 41 years), blindly interviewed by clinicians.
Results: In adulthood, probands had relatively more risky driving, sexually transmitted disease, head injury, and emergency department admissions (p< .05-.01). Groups did not differ on other medical outcomes. Lifetime risk-taking was associated with negative health outcomes (p = .01-.001). Development of CD/APD accounted for the relationship between ADHD and risk-taking. Probands without CD/APD did not differ from comparison subjects in lifetime risky behaviors. Psychostimulant treatment did not predict cardiac illness (p = .55). Probands had more deaths not related to specific medical conditions (p = .01).
Conclusions: Overall, among children with ADHD, it is those who develop CD/APD who have elevated risky behaviors as adults. Over their lifetime, those who did not develop CD/APD did not differ from comparison subjects in risk-taking behaviors. Findings also provide support for long-term safety of early psychostimulant treatment.
abstract_id: PUBMED:25176616
Who are those "risk-taking adolescents"? Individual differences in developmental neuroimaging research. Functional magnetic resonance imaging (fMRI) has illuminated the development of human brain function. Some of this work in typically-developing youth has ostensibly captured neural underpinnings of adolescent behavior which is characterized by risk-seeking propensity, according to psychometric questionnaires and a wealth of anecdote. Notably, cross-sectional comparisons have revealed age-dependent differences between adolescents and other age groups in regional brain responsiveness to prospective or experienced rewards (usually greater in adolescents) or penalties (usually diminished in adolescents). These differences have been interpreted as reflecting an imbalance between motivational drive and behavioral control mechanisms, especially in mid-adolescence, thus promoting greater risk-taking. While intriguing, we caution here that researchers should be more circumspect in attributing clinically significant adolescent risky behavior to age-group differences in task-elicited fMRI responses from neurotypical subjects. This is because actual mortality and morbidity from behavioral causes (e.g. substance abuse, violence) by mid-adolescence is heavily concentrated in individuals who are not neurotypical, who rather have shown a lifelong history of behavioral disinhibition that frequently meets criteria for a disruptive behavior disorder, such as conduct disorder, oppositional-defiant disorder, or attention-deficit hyperactivity disorder. These young people are at extreme risk of poor psychosocial outcomes, and should be a focus of future neurodevelopmental research.
abstract_id: PUBMED:23212056
The nature of the association between childhood ADHD and the development of bipolar disorder: a review of prospective high-risk studies. Objective: The author reviewed prospective longitudinal studies of the offspring of parents with bipolar disorder to inform our understanding of the nature of the association between childhood ADHD and the risk of developing bipolar disorder in adolescence and young adulthood.
Method: A literature review of published prospective cohort studies of the offspring of bipolar parents since 1985 was undertaken using a comprehensive search strategy in several electronic databases. The author provides a qualitative synthesis of results focusing on ADHD and the association with bipolar disorder in prospectively assessed high-risk offspring. These results are discussed in light of findings from other prospective epidemiological and clinical cohort studies.
Results: From the reviewed high-risk studies, evidence suggests that the clinical diagnosis of childhood ADHD is not a reliable predictor of the development of bipolar disorder. However, the author found evidence that symptoms of inattention may be part of a mixed clinical presentation during the early stages of evolving bipolar disorder in high-risk offspring, appearing alongside anxiety and depressive symptoms. The author also found preliminary evidence that childhood ADHD may form part of a neurodevelopmental phenotype in offspring at risk for developing a subtype of bipolar disorder unresponsive to lithium stabilization.
Conclusions: While childhood ADHD does not appear to be part of the typical developmental illness trajectory of bipolar disorder, subjective problems with attention can form part of the early course, while neurodevelopmental abnormalities may be antecedents in a subgroup of high-risk children.
abstract_id: PUBMED:18606036
Prevention of bipolar disorder in at-risk children: theoretical assumptions and empirical foundations. This article examines how bipolar symptoms emerge during development, and the potential role of psychosocial and pharmacological interventions in the prevention of the onset of the disorder. Early signs of bipolarity can be observed among children of bipolar parents and often take the form of subsyndromal presentations (e.g., mood lability, episodic elation or irritability, depression, inattention, and psychosocial impairment). However, many of these early presentations are diagnostically nonspecific. The few studies that have followed at-risk youth into adulthood find developmental discontinuities from childhood to adulthood. Biological markers (e.g., amygdalar volume) may ultimately increase our accuracy in identifying children who later develop bipolar I disorder, but few such markers have been identified. Stress, in the form of childhood adversity or highly conflictual families, is not a diagnostically specific causal agent but does place genetically and biologically vulnerable individuals at risk for a more pernicious course of illness. A preventative family-focused treatment for children with (a) at least one first-degree relative with bipolar disorder and (b) subsyndromal signs of bipolar disorder is described. This model attempts to address the multiple interactions of psychosocial and biological risk factors in the onset and course of bipolar disorder.
abstract_id: PUBMED:22583562
Comparison of the burden of illness for adults with ADHD across seven countries: a qualitative study. Background: The purpose of this study was to expand the understanding of the burden of illness experienced by adults with Attention Deficit-Hyperactivity Disorder (ADHD) living in different countries and treated through different health care systems.
Methods: Fourteen focus groups and five telephone interviews were conducted in seven countries in North America and Europe, comprised of adults who had received a diagnosis of ADHD. The countries included Canada, France, Germany, Italy, The Netherlands, United Kingdom, and United States (two focus groups in each country). There were 108 participants. The focus groups were designed to elicit narratives of the experience of ADHD in key domains of symptoms, daily life, and social relationships. Consonant with grounded theory, the transcripts were analyzed using descriptive coding and then themed into larger domains.
Results: Participants' statements regarding the presentation of symptoms, childhood experience, impact of ADHD across the life course, addictive and risk-taking behavior, work and productivity, finances, relationships and psychological health impacts were similarly themed across all seven countries. These similarities were expressed through the domains of symptom presentation, childhood experience, medication treatment issues, impacts in adult life and across the life cycle, addictive and risk-taking behavior, work and productivity, finances, psychological and social impacts.
Conclusions: These data suggest that symptoms associated with adult ADHD affect individuals similarly in different countries and that the relevance of the diagnostic category for adults is not necessarily limited to certain countries and sociocultural milieus.
abstract_id: PUBMED:19208007
Aetiology and risk factors related to traumatic dental injuries--a review of the literature. Background/aim: During the past 30 years, the number of aetiologies of traumatic dental injuries (TDIs) has increased dramatically in the literature and now includes a broad spectrum of variables, including oral and environmental factors and human behaviour. The aim of this study is to present an international review of well-known as well as less well-known unintentional and intentional causes of TDIs. Moreover, some models that are useful in investigating contact sport injuries are presented.
Materials And Methods: The databases of Medline, Cochrane, Social Citation Index, Science Citation Index and CINAHL from 1995 to the present were used.
Result: Oral factors (increased overjet with protrusion), environmental determinants (material deprivation) and human behaviour (risk-taking children, children being bullied, emotionally stressful conditions, obesity and attention-deficit hyperactivity disorder) were found to increase the risk for TDIs. Other factors increasing the risk for TDIs are the presence of illness, learning difficulties, physical limitations and inappropriate use of teeth. A new cause of TDIs that is of particular interest is oral piercing. In traffic accidents, facial injury was similar in unrestrained occupants (no seat belts) and occupants restrained only with an air bag. Amateur athletes have been found to suffer from TDIs more often than professional athletes. Falls and collisions mask intentional TDIs, such as physical abuse, assaults and torture. Violence has increased in severity during the past few decades and its role has been underestimated when looking at intentional vs unintentional TDIs. There are useful models to prevent TDIs from occurring in sports. WHO Healthy Cities and WHO Health Promoting Schools Programmes offer a broad solution for dental trauma as a public health problem.
Conclusion: The number of known causes of TDIs has grown to alarming levels, probably because of increased interest in the causes and the underlying complexity of a TDI. Accepted oral, environmental and human aetiological factors must therefore be included in the registration of TDIs.
abstract_id: PUBMED:28493605
Adverse effects of obesity on cognitive functions in individuals at ultra high risk for bipolar disorder: Results from the global mood and brain science initiative. Background: The burden of illness associated with bipolar disorder (BD) warrants early pre-emption/prevention. Prediction models limited to psychiatric phenomenology have insufficient predictive power. Herein, we aimed to evaluate whether the presence of overweight/obesity is associated with greater cognitive decline in individuals at high risk (HR) or ultra high risk (UHR) for BD.
Methods: We conducted a retrospective analysis to investigate the moderational role of body mass index (BMI) on measures of cognitive function. Subjects between the ages of 8 and 28 years with a positive family history of BD were compared to age-matched controls with a negative family history of BD. Subjects with at least one biological parent with bipolar I/II disorder were further stratified into UHR or HR status by the presence or absence, respectively, of subthreshold hypomanic, major depressive, attenuated psychotic, and/or attention-deficit/hyperactivity disorder symptoms.
Results: A total of 36 individuals at HR for BD, 33 individuals at UHR for BD, and 48 age-matched controls were included in the analysis. Higher BMI was significantly associated with lower performance on measures of processing speed (i.e. Brief Assessment of Cognition in Schizophrenia-symbol coding: r=-.186, P=.047) and attention/vigilance (i.e. Continuous Performance Test-Identical Pairs: r=-.257, P=.006). There were trends for negative correlations between BMI and measures of working memory (i.e. Wechsler Memory Scale-III Spatial Span: r=-0.177, P=.059) and overall cognitive function (i.e. Measurement and Treatment Research to Improve Cognition in Schizophrenia composite score: r=-.157, P=.097). Negative associations between BMI and cognitive performance were significantly stronger in the UHR group than in the HR group, when compared to controls.
Conclusions: Individuals at varying degrees of risk for BD exhibit greater cognitive impairment as a function of co-existing overweight/obesity. Prediction models for BD may be substantively informed by including information related to overweight/obesity and, perhaps, other general medical conditions that share pathology with BD. Our findings herein, as well as the salutary effects of bariatric surgery on measures of cognitive function in obese populations, provide the rationale for hypothesizing that mitigating excess weight in individuals at elevated risk for BD may forestall or prevent declaration of illness.
abstract_id: PUBMED:24620816
Psychosis in adulthood is associated with high rates of ADHD and CD problems during childhood. Background: Patients diagnosed with schizophrenia display poor premorbid adjustment (PPA) in half of the cases. Attention deficit/hyperactivity disorder (ADHD) and conduct disorder (CD) are common child psychiatric disorders. These two facts have not previously been linked in the literature.
Aims: To determine the prevalence of ADHD/CD problems retrospectively among patients with psychoses, and whether and to what extent the high frequency of substance abuse problems among such patients may be linked to ADHD/CD problems.
Method: ADHD and CD problems/diagnoses were retrospectively recorded in one forensic (n = 149) and two non-forensic samples (n = 98 and n = 231) of patients with a psychotic illness: schizophrenia, bipolar or other, excluding drug-induced psychoses.
Results: ADHD and CD were much more common among the patients than in the general population; the odds ratio was estimated to be greater than 5. There was no significant difference in this respect between forensic and non-forensic patients. Substance abuse was common, but substantially more common among patients with premorbid ADHD/CD problems.
Conclusions: Previous views regarding PPA among patients with a psychotic illness may reflect an association between childhood ADHD/CD and later psychosis. The nature of this association remains uncertain: two disorders sharing some generative mechanisms or one disorder with two main clinical manifestations. Childhood ADHD and particularly CD problems contribute to the high frequency of substance abuse in such groups.
abstract_id: PUBMED:3629198
Review of the NIMH Israeli Kibbutz-City Study and the Jerusalem Infant Development Study. The National Institute of Mental Health (NIMH) Israeli Kibbutz-City Study has followed the development of offspring of schizophrenic parents from middle childhood through early adulthood. During childhood, a subgroup of offspring of schizophrenic patients showed clear neurobehavioral deficits often accompanied by poor social competence. Early follow-up data suggest that this subgroup of high-risk children is at greatest risk for adult schizophrenia spectrum illness. The Jerusalem Infant Development Study has followed a similar population of children at risk for schizophrenia from before birth through middle childhood. A subgroup of the high-risk children showed sensorimotor dysfunctioning in the first year of life, which was followed by perceptual, motor, and attentional dysfunctioning in childhood, identical to that found in the NIMH cohort. Results from both studies support the hypothesis that schizophrenic illness involves constitutional factors whose expression can be observed as early as infancy. Results also illustrate the importance of using data-analytic approaches that (1) look for subgroups within high-risk groups rather than only group differences between high- and low-risk groups, and (2) examine profiles of behavior rather than only single variables.
abstract_id: PUBMED:15723419
Diagnostic issues for adolescents and adults with ADHD. Attention deficit hyperactivity disorder (ADHD) is a common childhood neuropsychiatric syndrome once thought to disappear with maturation. Current data indicate that ADHD remains "hidden" in many of the grown-ups who had it as children. Adult prevalence rates range from 1% to 6% of the population. Research suggests the core childhood symptoms of hyperactivity, inattention, and impulsivity shift with development, perhaps transforming into more overt difficulties in executive functions and affect regulation. ADHD is also usually nestled with other comorbid psychiatric conditions, especially in adolescents and adults, further complicating diagnosis and treatment. This article discusses how to recognize and diagnose ADHD in older patients. Key points include core symptoms present during childhood, appropriate family history in this strongly genetic condition, management of comorbidity, and the evolving role of diagnostic testing. Other medical causes for similar symptoms are considered.
Answer: Yes, childhood attention-deficit/hyperactivity disorder (ADHD) can predict elevated risk-taking and medical illnesses in adulthood. A study with a 33-year follow-up of boys diagnosed with ADHD in childhood found that in adulthood, these individuals had more risky driving, sexually transmitted diseases, head injuries, and emergency department admissions compared to matched male comparison subjects without ADHD. The development of conduct disorder (CD) or antisocial personality (APD) was found to account for the relationship between ADHD and risk-taking behaviors. However, those with ADHD who did not develop CD/APD did not differ from comparison subjects in lifetime risky behaviors. The study also found that psychostimulant treatment in childhood did not predict cardiac illness, and ADHD probands had more deaths not related to specific medical conditions (PUBMED:23357442).
Furthermore, individuals with a history of behavioral disinhibition, including ADHD, are at extreme risk of poor psychosocial outcomes and are more likely to engage in risk-taking behaviors (PUBMED:25176616). While childhood ADHD does not appear to be part of the typical developmental illness trajectory of bipolar disorder, symptoms of inattention may be part of a mixed clinical presentation in the early stages of evolving bipolar disorder in high-risk offspring (PUBMED:23212056). Additionally, ADHD symptoms affect individuals similarly in different countries, suggesting that the diagnostic category for adults is not limited to certain countries and sociocultural milieus (PUBMED:22583562).
In summary, childhood ADHD is associated with an increased risk of engaging in risky behaviors and experiencing medical illnesses in adulthood, particularly among those who develop CD/APD. However, the presence of ADHD alone, without the development of additional conduct-related disorders, may not be a predictor of increased risk-taking behaviors later in life. |
Instruction: Incisional hernia after laparoscopic colorectal surgery. Is there any factor associated?
Abstracts:
abstract_id: PUBMED:36219253
Choice of specimen's extraction site affects wound morbidity in laparoscopic colorectal cancer surgery. Background: The choice for an ideal site of specimen extraction following laparoscopic colorectal surgery remains debatable. However, midline incision (MI) is usually employed for right and left-sided colonic resections while left iliac fossa or suprapubic transverse incision (STI) were reserved for sigmoid and rectal cancer resections.
Objective: To compare the incidence of surgical site infection (SSI) and incisional hernia (IH) in elective laparoscopic colorectal surgery for cancer and specimen extraction via MI or STI.
Method: Prospectively collected data of elective laparoscopic colorectal cancer resections between January 2017 and December 2019 were retrospectively reviewed. MI was employed for right and left-sided colonic resections while STI was used for sigmoid and rectal resections. SSI is defined according to the US CDC criteria. IH was diagnosed clinically and confirmed by CT scan at 1 year.
Results: A total of 168 patients underwent elective laparoscopic colorectal resections. MI was used in 90 patients while 78 patients had STI as the extraction site. Demographic and preoperative data were similar for the two groups. The rate of IH was 13.3% for MI and 0% for STI (p = 0.001). SSI was seen in 16.7% of MI vs 11.5% of STI (p = 0.34). Univariate and multivariate analyses showed that the choice of extraction site was associated with a statistically significantly higher incisional hernia rate.
Conclusion: MI for specimen extraction is associated with a higher incidence of both SSI and IH. The choice of incision for the extraction site is an independent predictive factor for significantly higher IH and increased SSI rates.
abstract_id: PUBMED:35989760
Comparison of Non-Oncological Postoperative Outcomes Following Robotic and Laparoscopic Colorectal Resection for Colorectal Malignancy: A Systematic Review and Meta-Analysis. The objective of this systematic review and meta-analysis is to compare the postoperative outcomes of robotic and laparoscopic colorectal resection for colorectal malignancy. We performed a systematic review using a comprehensive search strategy on several electronic databases (PubMed, PubMed Central, Medline, and Google Scholar) in April 2022. Postoperative outcomes of robotic versus laparoscopic surgery for colorectal cancer were compared using 12 end points. Observational studies, randomized controlled trials, and nonrandomized clinical trials comparing robotic and laparoscopic resection for colorectal cancer were included. The statistical analysis was performed using the risk ratio (RR) for categorical variables and the standardized mean differences (SMD) for continuous variables. Sixteen studies involving 2,318 patients were included. The length of hospital stay was significantly shorter with robotic access (SMD = -0.10, 95% CI = -0.19, -0.01, P = 0.04, I2 = 0%). Regarding intra-abdominal abscesses, the analysis showed an advantage in favor of the robotic group, but the result was not statistically significant (RR = 0.54, 95% CI = 0.28, 1.05, P = 0.07, I2 = 0%). Mechanical obstruction was found to be higher in the robotic group, favoring laparoscopic access, but was not significant (RR = 1.91, 95% CI = 0.95, 3.83, P = 0.07, I2 = 0%). There was no difference in time to pass flatus and consume a soft diet. The rates of anastomotic leakage, ileus, wound infection, readmission, mortality, and incisional hernias were similar with both approaches. Robotic surgery for colorectal cancer is associated with a shorter hospital stay, with no differences in mortality and postoperative morbidity.
abstract_id: PUBMED:37165256
The impact of sarcobesity on incisional hernia after laparoscopic colorectal cancer surgery. Purpose: Incisional hernia is a common complication after abdominal surgery, especially in obese patients. The aim of the present study was to evaluate the relationship between sarcobesity and incisional hernia development after laparoscopic colorectal cancer surgery.
Methods: In total, 262 patients who underwent laparoscopic colorectal cancer surgery were included in the present study. Univariate and multivariate analyses were performed to evaluate the independent risk factors for the development of incisional hernia. We then performed subgroup analyses to assess the impact of visceral obesity according to clinical variables on the development of incisional hernia in laparoscopic surgery for colorectal cancer surgery.
Results: Forty-four patients (16.8%) developed incisional hernias after laparoscopic colorectal cancer surgery. In the univariate analysis, the development of incisional hernia was significantly associated with female sex (P = 0.046), subcutaneous obesity (P = 0.002), visceral obesity (P = 0.002), sarcobesity (P < 0.001), and wound infection (P < 0.001). In the multivariate analysis, sarcobesity (P < 0.001) and wound infection (P < 0.001) were independent predictors of incisional hernia. In subgroup analysis, the odds ratio of visceral obesity was the highest (13.1; 95% confidence interval [CI], 4.51-37.8, P < 0.001) in the subgroup of sarcopenia.
Conclusion: Sarcobesity may be a strong predictor of the development of incisional hernia after laparoscopic surgery for colorectal cancer, suggesting the importance of body composition in the development of incisional hernia.
abstract_id: PUBMED:25307082
Hernia incidence following single-site vs standard laparoscopic colorectal surgery. Aim: Compared with standard laparoscopic (SDL) approaches, less is known about the incidence of hernias after single-site laparoscopic (SSL) colorectal surgery. This study hypothesized that SSL colorectal surgery was associated with an increased risk of hernia development.
Method: Institutional retrospective chart review (September 2008-June 2013) identified 276 evaluable patients who underwent laparoscopic colorectal procedures. The following data were collected: demographic data, risk factors for the development of a hernia, operative details and postoperative course including the development of a hernia. Patients were stratified by laparoscopic technique to compare the characteristics of those undergoing SDL and SSL. Patients were subsequently stratified by the presence or absence of a hernia to identify associated factors.
Results: One hundred and nineteen patients (43.1%) underwent SDL and 157 patients (56.9%) underwent SSL surgery. The development of an incisional hernia was observed in 7.6% (9/119) of SDL patients compared with 17.0% (18/106) of SSL patients (P = 0.03) over a median 18-month follow-up. Similar proportions of patients developed parastomal hernias in both groups [SDL 16.7% (10/60) vs SSL 15.9% (13/80)]. Hernias were diagnosed at a median of 8.1 (SDL) and 6.5 (SSL) months following the index operation and were less likely to be incarcerated in the SSL group [SDL 38.9% (7/18) vs SSL 6.5% (2/31), P = 0.01].
Conclusion: SSL colorectal surgery is associated with an increase in the incidence of incisional hernias but not parastomal hernias. Site of specimen extraction in SSL may contribute to the development of an incisional hernia.
abstract_id: PUBMED:35778241
Comparison of robotic reduced-port and laparoscopic approaches for left-sided colorectal cancer surgery. Background/objective: The reduced-port approach can overcome the limitations of single-incision laparoscopic surgery while maintaining its advantages. Here, we compared the effects of robotic reduced-port surgery and conventional laparoscopic approaches for left-sided colorectal cancer.
Methods: Between January 2015 and December 2016, the clinicopathological characteristics and treatment outcomes of 17 patients undergoing robotic reduced-port surgery and 49 patients undergoing laparoscopic surgery for left-sided colorectal cancer were compared.
Results: The two groups were comparable in almost all outcome measures except for the distal resection margin, which was significantly longer in the laparoscopic group (P < 0.001). The between-group differences in reoperation, incisional hernia development, and overall and progression-free survival were nonsignificant; however, the total hospital cost was significantly higher in the robotic group than in the laparoscopic group (US$13779.6 ± US$3114.8 vs. US$8556.3 ± US$2056.7, P < 0.001).
Conclusion: Robotic reduced-port surgery for left-sided colorectal cancer is safe and effective but more expensive with no additional benefit compared with the conventional laparoscopic approach. This observation warrants further evaluation.
abstract_id: PUBMED:33537500
Influence of Suture Materials on Incisional Hernia Rate after Laparoscopic Colorectal Cancer Surgery: A Propensity Score Analysis. Objectives: Incisional hernia is a common problem after colorectal surgery, and a laparoscopic approach does not reduce the incisional hernia rate. Previous reports have described the risk factors for incisional hernia; however, the impact of suture materials remains unclear. As such, this study compared the incisional hernia rate using different suture materials for abdominal wall closure after laparoscopic colorectal cancer surgery.
Methods: Patients undergoing laparoscopic colorectal cancer surgery between January 2014 and December 2016 were included in this study. We separated patients into the following two groups based on the suture materials used for abdominal wall closure: (1.) fast-absorbable group and (2.) non-absorbable group. The primary outcome was incisional hernia rate that was diagnosed using computed tomography. We compared outcomes between these two groups using propensity score matching.
Results: Before matching, 394 patients were included (168 in the fast-absorbable group and 226 in the non-absorbable group). After one-to-one matching, patients were stratified into the fast-absorbable group (n = 158) and the non-absorbable group (n = 158). The incisional hernia rate was higher in the fast-absorbable group than in the non-absorbable group (13.9% vs. 6.3%; P = 0.04). The median time to develop an incisional hernia was significantly shorter in the fast-absorbable group (6.7 months vs. 12.3 months; P < 0.01). The incidence of surgical site infection was not different between the two groups, but the incidence of suture sinus was lower in the fast-absorbable group (0% vs. 5.1%; P < 0.01).
Conclusions: The use of fast-absorbable sutures may increase the risk of incisional hernia after laparoscopic colorectal cancer surgery.
abstract_id: PUBMED:35165026
Incidence of incisional hernia after major colorectal cancer surgery & analysis of associated risk factors in Asian population: Is laparoscopy any better? Background: Incisional hernia is one of the common morbidities after major colorectal cancer surgery. We aim to compare the incidence of incisional hernias between laparoscopic and open surgery. We also aim to identify associated risk factors of incisional hernia among Asian population who has undergone major resection for colorectal cancer.
Methods: Data of patients who had undergone major colorectal cancer surgery in year 2015 from a single institution was collected. Data were extracted from electronic clinical records from our institution's database. Incisional hernias were identified by clinical examination and computed tomography (CT) scan performed during post-operative follow up as part of colorectal cancer surveillance. Follow up data of up to 3 years were extracted. Univariate and multivariable logistic regression analysis were performed to identify associated risk factors for development of incisional hernia. Propensity score matching analysis was performed for laparoscopic and open resection.
Results: 502 patients were included in the study. With a minimum follow up of 3 years, overall incisional hernia incidence rate of 13% was identified. Incisional hernias after laparoscopic and open surgery were 12.3% and 13.8% (p = 0.688) respectively. Univariate logistic regression analysis showed that body mass index (BMI) of >23kg/m2, ASA of III/IV and post-operative anastomotic leak were associated with development of incisional hernias. On multivariable analysis, female gender (OR 2.102, 95%CI: 1.155, 3.826), BMI of ≥23 kg/m2 (OR 2.862 95%CI: 1.582, 5.181), ASA III/IV (OR 2.052, 95%CI: 1.169, 3.602), were significantly associated with development of incisional hernia. Propensity scores matched analysis showed laparoscopic surgery did not significantly reduce the incidence of incisional hernia.
Conclusion: The overall incidence of incisional hernia seems lower in Asian population. Our study demonstrated no significant difference in incisional hernia rates between patients undergoing laparoscopic versus open colorectal cancer surgery. Female gender, higher BMI, and higher ASA are associated with increased risk of developing incisional hernia after major colorectal cancer resection.
abstract_id: PUBMED:37845691
Retrospective study of an incisional hernia after laparoscopic colectomy for colorectal cancer. Purpose: This study aimed to examine the incidence of incisional hernia (IH) in elective laparoscopic colorectal surgery (LC) using regulated computed tomography (CT) images at intervals every 6 months.
Methods: We retrospectively examined the diagnosis of IH in patients who underwent LC for colorectal cancer at Kansai Medical University Hospital from January 2014 to August 2018. The diagnosis of IH was defined as loss of continuity of the fascia in the axial CT images.
Results: 470 patients were included in the analysis. IH was diagnosed in 47 cases at 1 year after LC. The IH size was 7.8 cm2 [1.3-55.6]. In total, 38 patients with IH underwent CT examination 6 months after LC, and 37 were already diagnosed with IH. The IH size was 4.1 cm2 [0-58.9]. The IH size increased in 17 cases between 6 months and 1 year postoperatively, and in 1 case, a new IH occurred. 47% (18/38) of them continued to grow until 1 year after LC. A multivariate analysis was performed on the risk of IH occurrence. SSI was most significantly associated with IH occurrence (OR: 5.28 [2.14-13.05], p = 0.0003).
Conclusion: IH occurred in 10% and 7.9% at 1 year and 6 months after LC. By examining CT images taken for the postoperative surveillance of colorectal cancer, we were able to investigate the occurrence of IH in detail.
abstract_id: PUBMED:32556772
Trocar-site incisional hernia after laparoscopic colorectal surgery: a significant problem? Incidence and risk factors from a single-center cohort. Background: Trocar-site incisional hernia (TSIH) after laparoscopic surgery has been scarcely studied. TSIH incidence and risk factors have never been properly studied for laparoscopic colorectal surgery.
Methods: A retrospective analytic study in a tertiary hospital was performed including patients who underwent elective laparoscopic colorectal surgery between 2014 and 2016. Clinical and radiological TSIH were analyzed.
Results: 272 patients with a mean age of 70.7 years were included. 205 (75.4%) underwent surgery for a malignant disease. The most common procedure was right colectomy (108 patients, 39.7%). After a mean follow-up of 30.8 months 64 (23.5%) patients developed a TSIH. However, only 7 out of 64 (10.9%) patients with a TSIH underwent incisional hernia repair. That means that 2.6% of all the patients underwent TSIH repair. 44 (68.8%) patients had TSIH in the umbilical Hasson trocar. In the multivariate analysis, the existence of an umbilical Hasson trocar orifice was the only statistically significant risk factor for TSIH development.
Conclusions: Incidence of TSIH was high, although few patients underwent incisional hernia repair. Most TSIH were observed in the umbilical Hasson trocar, which was the only risk factor for TSIH development in the multivariate analysis. Efforts should be addressed to avoid TSIH in the umbilical Hasson trocar.
abstract_id: PUBMED:36074636
Risk factors for umbilical incisional hernia after laparoscopic colorectal surgery. Background: Laparoscopic colorectal surgery (LCRS) requires a small laparotomy at the umbilicus. The wound is small and inconspicuous, but if the patient develops an umbilical incisional hernia (UIH), the wound is visible and the patient suffers from symptoms of discomfort. However, the incidence of UIH after LCRS and its risk factors are not well understood. The purpose of this study was to investigate the risk factors for UIH after LCRS for colorectal cancer.
Methods: This was a single-centre retrospective study of 135 patients with colorectal cancer, conducted at our hospital from April 2013 to March 2019. The diagnosis of UIH was based on computed tomography and physical examination findings. Preoperative patient data such as enlargement of the umbilical orifice (EUO), subcutaneous fat thickness (SFT) and intraperitoneal thickness (IPT) were collected and analysed using univariate and multivariate analyses for the presence of risk factors for UIH.
Results: A total of 135 patients who underwent LCRS were analysed. The incidence of UIH was 20.7%. Univariate analysis revealed significantly high body mass index (BMI) ≥ 25 (P = 0.032), EUO (P < 0.001), SFT ≥18 mm (P = 0.011), and IPT ≥61 mm (P < 0.01) in the UIH group. Multivariate analysis revealed significant differences in EUO (P < 0.001), SFT ≥18 mm (P = 0.046) and IPT ≥61 mm (P = 0.022).
Conclusion: EUO was the most important risk factor for UIH, followed by IPT and SFT. These findings are predictive indicators of the development of UIH after LCRS and can be assessed objectively and easily with preoperative computed tomography.
Answer: Yes, there are several factors associated with the development of incisional hernia (IH) after laparoscopic colorectal surgery. The choice of specimen extraction site is one such factor, with midline incisions (MI) being associated with a higher incidence of both surgical site infection (SSI) and IH compared to suprapubic transverse incision (STI). MI was found to have an IH rate of 13.3% versus 0% for STI (PUBMED:36219253). Another study identified sarcobesity (the combination of sarcopenia and obesity) and wound infection as independent predictors of IH (PUBMED:37165256). Additionally, the use of fast-absorbable sutures for abdominal wall closure was associated with a higher IH rate compared to non-absorbable sutures (PUBMED:33537500).
Other factors include the surgical approach, with single-site laparoscopic (SSL) colorectal surgery being associated with an increased incidence of IH compared to standard laparoscopic (SDL) approaches (PUBMED:25307082). However, robotic reduced-port surgery for left-sided colorectal cancer did not show a significant difference in IH development compared to the conventional laparoscopic approach (PUBMED:35778241). Furthermore, a study found no significant difference in IH rates between patients undergoing laparoscopic versus open colorectal cancer surgery, but female gender, higher body mass index (BMI), and higher American Society of Anesthesiologists (ASA) classification were associated with an increased risk of developing IH (PUBMED:35165026).
The incidence of trocar-site incisional hernia (TSIH) was also found to be high, with most hernias occurring at the umbilical Hasson trocar site, which was identified as the only significant risk factor for TSIH development (PUBMED:32556772). Additionally, enlargement of the umbilical orifice (EUO), subcutaneous fat thickness (SFT), and intraperitoneal thickness (IPT) were significant risk factors for umbilical incisional hernia (UIH) after laparoscopic colorectal surgery (PUBMED:36074636).
In summary, factors such as the choice of specimen extraction site, sarcobesity, wound infection, suture material, surgical approach, gender, BMI, ASA classification, and specific anatomical features like EUO, SFT, and IPT are associated with the development of IH after laparoscopic colorectal surgery. These findings suggest that careful consideration of these factors during preoperative planning and surgical technique may help to reduce the risk of IH in patients undergoing laparoscopic colorectal procedures.
Instruction: Does postembolization fever after chemoembolization have prognostic significance for survival in patients with unresectable hepatocellular carcinoma?
Abstracts:
abstract_id: PUBMED:19084432
Does postembolization fever after chemoembolization have prognostic significance for survival in patients with unresectable hepatocellular carcinoma? Purpose: To investigate risk factors and prognostic significance of postembolization fever (PEF)--a temperature of more than 38.0 degrees C--after chemoembolization in patients with hepatocellular carcinoma (HCC).
Materials And Methods: The authors retrospectively analyzed data from 442 patients with unresectable HCC who underwent their first session of chemoembolization without other procedure-related complications except postembolization syndrome between January 2005 and December 2006. Of the 442 patients, 362 (81.9%) were men and 80 (18.1%) were women; patients ranged in age from 28 to 86 years (median, 61 years).
Results: PEF after chemoembolization developed in 91 patients (20.6%). Occurrence of PEF was closely associated with several clinical-laboratorial variables, although not with response to chemoembolization. With use of logistic regression analysis, however, a tumor size larger than 5 cm was the only independent factor related to PEF development (odds ratio, 8.192; 95% confidence interval [CI]: 3.641, 18.435; P < .001). Although PEF was not an independent predictor of progression-free survival, it significantly increased the risk of death by about 1.4-fold, in correlation with overall survival (hazard ratio, 1.378; 95% CI: 1.003, 1.893; P = .048).
Conclusions: PEF after chemoembolization in patients with HCC was strongly correlated with large tumor size and was a significant independent predictor of overall survival.
abstract_id: PUBMED:22654264
Hepatic transcatheter arterial chemoembolization complicated by postembolization syndrome. Postembolization syndrome (PES) is a common complication after embolic procedures, and it is a frequent cause of extended inpatient hospital admissions. PES is a self-limited constellation of symptoms consisting of fevers, unremitting nausea, general malaise, loss of appetite, and variable abdominal pain following the procedure. Although a definite cause is unknown, this syndrome is thought to be a result of therapeutic cytotoxicity, tumor ischemia, and resulting intrahepatic and extrahepatic inflammation. The authors report a case of PES precipitated by transcatheter intra-arterial chemoembolization of hepatic metastases.
abstract_id: PUBMED:31989907
Risk Factors for Postembolization Syndrome After Transcatheter Arterial Chemoembolization. Background: Transarterial Chemoembolization (TACE) is a minimally invasive treatment for managing unresectable primary liver neoplasms or liver metastases. Postembolization Syndrome (PES) is the most common adverse effect after TACE procedures.
Objective: We investigate the risk factors for the development of PES after TACE therapy in patients with primary or metastatic liver tumors.
Methods: In a retrospective analysis of 163 patients who underwent TACE between 01/01/2012 and 31/01/2018, patients that were given medication due to pain, fever, nausea or vomiting were evaluated and noted with PES. Analyses were made to evaluate factors such as age, gender, chemotherapy agent and dose, tumor size, tumor type, a particle used for embolization, multiple tumor treatments and selective application of the procedure, which may lead to PES after TACE.
Results: In a total of 316 patients, PES was observed at a rate of 55 percent after TACE. Tumor size, number of tumors treated and adopting super selective fashion in the procedure were found to be related to the development of PES. No relationship was found between age, gender, presence of ascites, tumor type, size of embolic agent and drug type and the development of PES.
Conclusion: A treated tumor measuring >5 cm, treating more than one tumor, and the failure to perform the procedure in a super selective fashion increase the risk of PES development after TACE.
abstract_id: PUBMED:29573765
Risk Factors for the Development of Postembolization Syndrome after Transarterial Chemoembolization for Hepatocellular Carcinoma Treatment. Introduction: Hepatic transarterial chemoembolization is a widely used technique for the treatment of hepatocellular carcinoma. The most common complication of this procedure is postembolization syndrome. The main objective of this study was to assess risk factors for the development of postembolization syndrome.
Material And Methods: Single-centre retrospective analysis of 563 hepatic transarterial chemoembolization procedures from January 1st, 2014 - December 31st, 2015. Hepatic transarterial chemoembolization was performed with ½ - 2 vials of 100 - 300 μm microspheres loaded with doxorubicin. Patients who experienced postembolization syndrome were identified based on prolongation of hospitalization due to pain, fever, nausea and/or vomiting. A control group with the patients who did not have postembolization syndrome was randomly created (three controls for one case). Descriptive analysis and multivariate logistic regression were performed.
Results: The overall prevalence of postembolization syndrome was 6.2%. Hepatic transarterial chemoembolization with doxorubicin dosage above 75 mg (more than one vial), the size of the largest nodule and female gender had statistically significant relation with development of postembolization syndrome (p = 0.030, p = 0.046 and p = 0.037, respectively).
Discussion: Doxorubicin dosage above 75 mg is associated with a higher risk of postembolization syndrome. This result can be helpful for decision-making in clinical practice, whenever it is possible to avoid a higher dose without compromising the efficacy of the treatment. The size of the largest nodule and female gender also constitute risk factors for postembolization syndrome. The other variables studied were not related to the development of postembolization syndrome.
Conclusion: The dose of doxorubicin, the size of the largest nodule treated and female gender are potential risk factors for the development of postembolization syndrome after hepatic transarterial chemoembolization for hepatocellular carcinoma.
abstract_id: PUBMED:34269313
Factors influencing postembolization syndrome in patients with hepatocellular carcinoma undergoing first transcatheter arterial chemoembolization. Context: Postembolization syndrome (PES) is the most common complication in patients with hepatocellular carcinoma (HCC) who have undergone transcatheter arterial chemoembolization (TACE). PES was defined as fever, nausea and/or vomiting, and abdominal pain, with these symptoms developing within 1-3 days after TACE. However, few studies have explored the factors influencing PES in patients undergoing TACE for the first time.
Aims: We explored the factors influencing PES in patients with HCC undergoing TACE for the first time.
Settings And Design: The present study was a hospital-based study conducted in the tertiary care hospital of Guangzhou with a retrospective study design.
Subjects And Methods: In this single-center retrospective study, a total of 242 patients with HCC were included in the first TACE program between November 1, 2018 and November 31, 2019.
Statistical Analysis Used: T-test and Chi-square test revealed the factors affecting the occurrence of PES. Correlation analysis (Spearman) explored the relationship between these factors and PES. Binary logistics analyzed the predictive factors of PES.
Results: The probability of PES in patients with HCC undergoing TACE for the first time was 55.45%. Types of embolic agents (r = 0.296), types of microspheres (r = 0.510), number of microspheres (r = 0.130), maximum diameter of microspheres used (r = 0.429), type of drug (r = 0.406), and drug loading (r = 0.433) were positively correlated with PES (P < 0.05). Serum albumin was negatively correlated with PES (P = 0.008, r = -0.170). Binary logistic regression analysis revealed that drug loading microspheres (odds ratio [OR] = 0.075, 95% confidence interval [CI] = 0.031-0.180) and serum albumin (OR = 0.182, 95% CI = 0.068-0.487) were the protective factors influencing PES, while drug loading was the risk factor of PES (OR = 1.407, 95% CI = 1.144-1.173).
Conclusions: Drug loading microspheres, serum albumin, and drug loading were the predictors of PES after the first TACE.
abstract_id: PUBMED:37756031
Prediction of postembolization syndrome after transarterial chemoembolization of hepatocellular carcinoma and its impact on prognosis. Background: Postembolization syndrome (PES) represents the most frequent complication after transarterial chemoembolization (TACE) in patients with HCC. Given the vague definition as a symptom complex comprising abdominal pain, fever, and nausea, PES is diagnosed in heterogeneous patient cohorts with symptoms ranging from mild pain to severe deterioration of their general condition. This study aimed to evaluate predictive factors and the prognostic impact of PES with regard to different severity grades.
Methods: A total of 954 patients treated with TACE for HCC at the University Medical Centres Mainz and Freiburg were included in this study. PES disease severity was graded as mild, moderate, or severe according to a predefined combination of symptoms. Logistic regression models were used to identify independent predictors of PES. The prognostic impact of PES was evaluated by competing risk analyses considering liver transplantation as a competing risk.
Results: PES occurred in 616 patients (64.5%), but only 56 patients (5.9%) had severe PES, defined as moderate to severe abdominal pain requiring opioids in combination with fever and nausea. The largest tumor diameter was the strongest independent predictor of PES (OR = 1.21, 95% CI = 1.13-1.28), and severe PES (OR = 1.23, 95% CI = 1.14-1.33, p < 0.0001). Presence of liver cirrhosis was protective against PES (OR = 0.48, 95% CI = 0.27-0.84, p = 0.01). Furthermore, PES was independently associated with an impaired disease control rate (OR = 0.33, 95% CI = 0.16-0.69, p = 0.003) and severe PES with poor overall survival (subdistribution HR = 1.53, 95% CI = 0.99-2.36, p = 0.04).
Conclusions: Tumor size and absence of liver cirrhosis are predictors of severe PES and associated with impaired prognosis in HCC patients after TACE.
abstract_id: PUBMED:31732854
Effect of palonosetron and dexamethasone administration on the prevention of gastrointestinal symptoms in hepatic arterial chemoembolization with epirubicin. Purpose: There are several studies on premedication to prevent postembolization syndromes which occurs after transcatheter arterial chemoembolization (TACE), but the medication to be used is still not established. This study aimed to examine the effect of palonosetron and dexamethasone on the prevention of gastrointestinal symptoms induced by TACE.
Methods: Patients with hepatocellular carcinoma who were treated with TACE with epirubicin were retrospectively evaluated. The complete response rate of antiemetic drugs and incidence and severity of gastrointestinal symptoms were compared between the antiemetic group (AE group), which includes 51 patients prophylactically administered with palonosetron 0.75 mg and dexamethasone 9.9 mg intravenously before TACE on day 1 and dexamethasone 6.6 mg intravenously on days 2 and 3, and control group with 101 patients without antiemetic premedication.
Results: Complete response rate in the entire evaluation period was significantly higher in the AE group compared with that in the control group. In the acute phase, the incidence and severity of nausea, vomiting, and anorexia significantly decreased in the AE group, but only anorexia improved in the delay phase. Additionally, postembolization syndromes, such as abdominal pain and fever, were significantly attenuated in the AE group; however, constipation worsened in this group.
Conclusions: Premedication of palonosetron and dexamethasone significantly prevents the incidence and reduces the severity of gastrointestinal symptoms especially in the acute phase. Further studies will be needed to determine the most recommended 5-HT3 antagonist or dosage of dexamethasone in establishing the optimal antiemetic regimen.
abstract_id: PUBMED:23345952
Clinical significance and risk factors of postembolization fever in patients with hepatocellular carcinoma. Aim: To investigate tumor response and survival in patients with postembolization fever (PEF) and to determine the risk factors for PEF.
Methods: Four hundred forty-three hepatocellular carcinoma (HCC) patients who underwent the first session of transcatheter arterial chemoembolization (TACE) between January 2005 and December 2009 were analyzed retrospectively. PEF was defined as a body temperature greater than 38.0 °C that developed within 3 d of TACE without evidence of infection. The tumor progression-free interval was defined as the interval from the first TACE to the second TACE based on mRECIST criteria. Clinical staging was based on the American Joint Committee on Cancer tumor, node, metastases (TNM) classification of malignant tumors. All patients were admitted before their 1(st) TACE treatment, and blood samples were obtained from all patients before and after treatment. Clinicoradiological variables and host-related variables were compared between two groups: patients with PEF vs patients without PEF. Additionally, variables related to 20-mo mortality and tumor progression-free survival were analyzed.
Results: The study population comprised 370 (85.4%) men and 73 (14.6%) women with a mean age of 62.29 ± 10.35 years. A total of 1836 TACE sessions were conducted in 443 patients, and each patient received between 1 and 27 (mean: 4.14 ± 3.57) TACE sessions. The mean follow-up duration was 22.23 ± 19.6 mo (range: 0-81 mo). PEF developed in 117 patients (26.41%) at the time of the first TACE session. PEF was not associated with 20-mo survival (P = 0.524) or computed tomography (CT) response (P = 0.413) in a univariate analysis. A univariate analysis further indicated that diffuse-type HCC (P = 0.021), large tumor size (≥ 5 cm) (P = 0.046), lipiodol dose (≥ 7 mL, P = 0.001), poor blood glucose control (P = 0.034), alanine aminotransferase (ALT) value after TACE (P = 0.004) and C-reactive protein (CRP) value after TACE (P = 0.036) served as possible risk factors correlated with PEF. The ALT value after TACE (P = 0.021) and lipiodol dose over 7 mL (P = 0.011) were independent risk factors for PEF in the multivariate analysis. For the 20-mo survival, poor blood sugar control (P < 0.001), portal vein thrombosis (P = 0.001), favorable CT response after TACE (P < 0.001), initial aspartate aminotransferase (P = 0.02), initial CRP (P = 0.042), tumor size (P < 0.001), TNM stage (P < 0.001) and lipiodol dose (P < 0.001) were possible risk factors in the univariate analysis. Tumor size (P = 0.03), poor blood sugar control (P = 0.043), and portal vein thrombosis (P = 0.031) were significant predictors of survival in the multivariate analysis. Furthermore, the tumor progression-free interval was closely associated with CRP > 1 mg/dL (P = 0.003), tumor size > 5 cm (P < 0.001), tumor type (poorly defined) (P < 0.001), and lipiodol dose (> 7 mL, P < 0.001).
Conclusion: PEF has no impact on survival at 20 mo or radiologic response. However, the ALT level after TACE and the lipiodol dose represent significant risk factors for PEF.
abstract_id: PUBMED:31439956
The Efficacy and Safety of Steroids for Preventing Postembolization Syndrome after Transcatheter Arterial Chemoembolization of Hepatocellular Carcinoma. Steroids are often administered at the time of transcatheter arterial chemoembolization (TACE), a standard treatment of hepatocellular carcinoma (HCC), with the expectation of preventing postembolization syndrome. Here we investigated the precise effects of steroids on TACE. We prospectively enrolled 144 HCC patients from 10 hospitals who underwent TACE. Three hospitals used steroids (steroid group, n=77) and the rest did not routinely use steroids (control group, n=67). The occurrence of adverse events and the algetic degree at 1-5 days post-treatment were compared between the groups. Fever (grades 0-2) after TACE was significantly less in the steroid group (56/21/0) compared to the control group (35/29/3, p=0.005, Cochran-Armitage test for trend). The suppressive effect of steroids against fever was prominent in females (p=0.001). Vomiting (G0/G1/ G2-) was also less frequent in the steroid group (70/5/2) versus the control group (53/10/3), but not significantly (p=0.106). The algetic degree and the grade of hematological adverse events, including hyperglycemia, did not differ between the groups. We conclude that the administration of steroids was useful for the prevention of adverse events after TACE in patients with HCC.
abstract_id: PUBMED:26201926
Cytokines are associated with postembolization fever and survival in hepatocellular carcinoma patients receiving transcatheter arterial chemoembolization. Objective: Cytokines play important roles in angiogenesis, inflammation, and cell growth. The present study aimed to investigate the correlation between cytokine changes and clinical characteristics in hepatocellular carcinoma (HCC) patients receiving transcatheter arterial chemoembolization (TACE).
Methods: Forty-one TACE-naïve HCC patients receiving 73 sessions of TACE and 30 healthy controls were studied. Serum levels of tumor necrosis factor-α (TNF-α), interleukin-6 (IL-6), interleukin-8 (IL-8), vascular endothelial growth factor (VEGF), basic fibroblast growth factor (bFGF), angiogenin, epidermal growth factor (EGF), epidermal growth factor receptor, and transforming growth factor β1 (TGF-β1) before and at 1, 3, 5, 7, and 14 days after TACE as well as clinical parameters were analyzed.
Results: Baseline serum levels of VEGF, bFGF, IL-6, IL-8, and TNF-α in HCC patients were significantly elevated, whereas EGF and TGF-β1 levels were lower compared to those in healthy controls (p < 0.05 for all). Serum IL-6 increased rapidly and peaked on day 1 after TACE administration, whereas VEGF increased more slowly and peaked on day 14 after TACE administration. Patients with post-TACE fever had higher serum IL-6 levels on days 1, 3, and 5 (p < 0.005 for all). Patients with pre-TACE serum VEGF < 200 pg/ml had a longer survival than those with pre-TACE serum VEGF levels ≥ 200 pg/ml (22.2 months vs. 11.6 months, p = 0.014). Cox multivariate analysis showed that baseline serum VEGF significantly predicted survival for HCC patients receiving TACE.
Conclusions: TACE is associated with the modulation of serum angiogenic, inflammatory, and cell growth cytokines in HCC patients. Serum IL-6 correlates with post-TACE fever, and baseline serum VEGF independently predicts patient survival.
Answer: Yes, postembolization fever (PEF) after chemoembolization does have prognostic significance for survival in patients with unresectable hepatocellular carcinoma (HCC). A study found that PEF after chemoembolization was strongly correlated with large tumor size and was a significant independent predictor of overall survival; specifically, the occurrence of PEF increased the risk of death by about 1.4-fold (PUBMED:19084432). Another study reported that severe postembolization syndrome (PES), which includes symptoms such as fever, was independently associated with an impaired disease control rate and poor overall survival (PUBMED:37756031). Additionally, cytokine changes, particularly serum IL-6 levels, were found to correlate with post-TACE fever, and baseline serum VEGF levels independently predicted patient survival (PUBMED:26201926). The evidence is not entirely uniform, however: a separate analysis of 443 patients found that PEF had no impact on 20-month survival or radiologic response (PUBMED:23345952). Overall, these findings suggest that PEF is not only a common clinical occurrence but may also have implications for the prognosis of HCC patients undergoing chemoembolization.
Instruction: Postabortion psychological adjustment: are minors at increased risk?
Abstracts:
abstract_id: PUBMED:11429300
Postabortion psychological adjustment: are minors at increased risk? Purpose: To assess whether younger adolescents experience greater adverse psychological outcomes after abortion than those aged 18-21 years, whether abortion places all adolescents at risk for negative sequelae, and what factors predict negative outcomes.
Methods: A total of 96 young women aged 14-21 years seeking counseling for unwanted pregnancies at four clinics completed questionnaires after counseling. These included the Beck Depression Inventory (BDI), an emotion scale, questions regarding sociodemographic and reproductive background, feelings about pregnancy, and decision-making. Sixty-three respondents were reinterviewed 4 weeks postabortion and completed the BDI, emotion scale, Spielberger State Anxiety Inventory, Rosenberg Self-esteem Scale, Impact of Events Scale, and Positive States of Mind Scale. Chi-squares and Student's t-tests were used to compare: (a) responses of adolescents under 18 years of age with those 18-21 years, (b) preabortion and postabortion responses, and (c) the current sample with other samples of adolescents.
Results: Adolescents under age 18 years were less comfortable with their decision, but showed no other differences compared with those aged 18-21 years. Both groups showed significant improvement in psychological responses postabortion. Postabortion scores did not differ significantly from those of other adolescent samples reported in the literature. Preabortion emotional state and perception of partner pressure predicted postabortion response.
Conclusions: Despite its legal significance, age 18 years was not a meaningful cutoff point for psychological response to abortion in this sample. There was no evidence that abortion poses a threat to adolescents' psychological well-being.
abstract_id: PUBMED:9420366
Forgiveness intervention with postabortion men. An intervention designed to foster forgiveness was implemented with postabortion men. Participants were randomly assigned to either the treatment or the control (wait list) condition, which received treatment after a 12-week waiting period. Following treatment, the participants demonstrated a significant gain in forgiveness and significant reductions in anxiety, anger, and grief as compared with controls. Similar significant findings were evident among control participants after they participated in the treatment. Maintenance of psychological benefits among the 1st set of participants was demonstrated at a 3-month follow-up.
abstract_id: PUBMED:29282060
Effects of gestational age and the mode of surgical abortion on postabortion hemorrhage and fever: evidence from population-based reproductive health survey in Georgia. Background: Every year around 50 million unintended pregnancies worldwide are terminated by induced abortion. Even in countries, where it is legalized and performed in a safe environment, abortion carries some risk of complications for women. Findings of researchers on the factors that influence the sequelae of abortion are controversial and inconsistent. This study evaluates the effects of gestational age and the method of surgical abortion (i.e., dilatation and curettage and vacuum aspiration) on the most common abortion complications: postabortion hemorrhage and fever.
Methods: We performed a secondary analysis of the data from the population-based Georgian Reproductive Health Survey 2010. Information on 1974 surgical abortions performed >30 days prior to the survey interview was analyzed during the study. Logistic regression statistical analysis was applied to compare the abortion sequelae that followed vacuum aspiration and dilatation and curettage at different gestational ages (<10 weeks and ≥10 weeks). We examined two major early abortion-related complications: postabortion hemorrhage and febrile morbidity (fever ≥38 °C).
Results: Postabortion hemorrhage was reported in 43 cases (1.9%), and febrile morbidity occurred in 44 cases (2%) among all of the surgical abortions. The abortions performed by dilatation and curettage were associated with an estimated fourfold increased risk of developing hemorrhage (OR 4.4, 95% CI 2.2-8.6) and a twofold increased risk of developing fever (OR 2.37, 95% CI 1.17-4.79) compared with the abortions that were performed via vacuum aspiration. The risk of postabortion hemorrhage (OR 1.9, 95% CI 0.8-4.4) or fever (OR 0.9, 95% CI 0.4-2.1) did not significantly differ at gestational age < 10 weeks and ≥10 weeks.
Conclusion: Vacuum aspiration was associated with reduced risks of postabortion hemorrhage and fever compared to dilatation and curettage. Gestational age ≥ 10 weeks was not found to be a predictive factor of immediate postabortion complications: hemorrhage and fever.
abstract_id: PUBMED:2754171
Psychological profile of dysphoric women postabortion. Women who identified themselves as having poorly assimilated the abortion experience were surveyed using a demographic questionnaire, the Beck Depression Inventory (BDI), and the Millon Clinical Multiaxial Inventory (MCMI). Eighty-one surveys were returned from the sample of 150 women. Seventeen percent (N = 12) of the women had had multiple abortions. Women with multiple abortions scored significantly higher on the BDI and also scored higher on the borderline personality subscales of the MCMI. Besides multiple abortions, other risk factors for postabortion dysphoria identified in this study were premorbid psychiatric illness, lack of family support, ambivalence, and feeling coerced into having an abortion.
abstract_id: PUBMED:29745972
The Philippines rolls back advancements in the postabortion care policy. In 2018, the Philippines announced a postabortion care policy that rolls back crucial safeguards aimed at protecting women who seek medical treatment for postabortion complications from discrimination and abuse. It replaces another policy that was introduced in 2016, following years of advocacy by national and international advocates who were concerned about the mistreatment of women seeking postabortion care due to discriminatory practices in the health system and abortion stigma. The new policy is narrower in scope than the previous policy and reinforces abortion stigma by emphasizing the legal prohibition on abortion, failing to clarify that women seeking postabortion care need not be reported to the authorities, and not recognizing the availability of complaint mechanisms for women who are mistreated. These and other crucial gaps put the new policy at risk of being in violation of ethical standards of medical care and guarantees of human rights.
abstract_id: PUBMED:29231790
Uptake of postabortion care services and acceptance of postabortion contraception in Puntland, Somalia. Unsafe abortion is responsible for at least 9% of all maternal deaths worldwide; however, in humanitarian emergencies where health systems are weak and reproductive health services are often unavailable or disrupted, this figure is higher. In Puntland, Somalia, Save the Children International (SCI) implemented postabortion care (PAC) services to address the issue of high maternal morbidity and mortality due to unsafe abortion. Abortion is explicitly permitted by Somali law to save the life of a woman, but remains a sensitive topic due to religious and social conservatism that exists in the region. Using a multipronged approach focusing on capacity building, assurance of supplies and infrastructure, and community collaboration and mobilisation, the demand for PAC services increased as did the proportion of women who adopted a method of family planning post-abortion. From January 2013 to December 2015, a total of 1111 clients received PAC services at the four SCI-supported health facilities. The number of PAC clients increased from a monthly average of 20 in 2013 to 38 in 2015. During the same period, 98% (1090) of PAC clients were counselled for postabortion contraception, of which 955 (88%) accepted a contraceptive method before leaving the facility, with 30% opting for long-acting reversible contraception. These results show that comprehensive PAC services can be implemented in politically unstable, culturally conservative settings where abortion and modern contraception are sensitive and stigmatised matters among communities, health workers, and policy makers. However, like all humanitarian settings, large unmet needs exist for PAC services in Somalia.
abstract_id: PUBMED:31700669
Postabortion contraceptive use in Bahir Dar, Ethiopia: a cross sectional study. Background: Although promoting postabortion family planning is very important and effective strategy to avert unwanted pregnancy, less attention was given to it in Ethiopia. Thus, this study aimed to assess contraceptive use and factors which are affecting it among women after abortion in Bahir Dar town.
Methods: A facility-based cross-sectional study was conducted in Bahir Dar town. The data were collected using a structured interviewer-administered questionnaire from women who obtained abortion services. Bivariable and multivariable logistic regression were used to evaluate the associations of demographic and reproductive characteristics with postabortion contraceptive use. Findings with a p-value of < 0.05 at 95% CI were considered statistically significant.
Results: A total of 400 women who received abortion services participated in this study. The proportion of postabortion contraceptive use was 78.5%. Single women were 7.2 times more likely to use contraception after abortion than their counterparts. Contraceptive use was 2 times higher among women with a previous history of abortion than among their counterparts. Women who had used contraception previously and those who had used contraception for the index pregnancy were 4.73 and 2.64 times more likely, respectively, to use contraception after abortion than their counterparts.
Conclusion: Postabortion contraceptive use was associated with age, marital status, previous history of abortion, previous contraceptive use, and contraceptive use for the index pregnancy. Greater emphasis should be given to postabortion contraceptive counselling to increase the utilization of postabortion contraception.
abstract_id: PUBMED:28256918
Postabortion contraception. The European Society of Contraception Expert Group on Abortion identified as one of its priorities to disseminate up-to-date evidence-based information on postabortion contraception to healthcare providers. A concise communication was produced which summarises the latest research in an easy-to-read format suitable for busy clinicians. Information about individual methods is presented in boxes for ease of reference.
abstract_id: PUBMED:36303617
Postabortion Family Planning and Associated Factors Among Women Attending Abortion Service in Dire Dawa Town Health Facilities, Eastern Ethiopia. Background: Postabortion family planning is a part of comprehensive package of postabortion care. However, it did not receive due attention to break the cycle of repeated abortion, unintended pregnancies, and abortion-related maternal morbidity and mortality. Therefore, this study aimed to determine the utilization of postabortion family planning and associated factors among women attending abortion service in Dire Dawa health facilities, Eastern Ethiopia.
Methods: A facility-based cross-sectional study design was employed among 483 clients who sought abortion services in Dire Dawa from 15 May to 30 June 2020. A structured interviewer-administered questionnaire was used for data collection. The collected data were entered into EpiData version 3.2 and exported to SPSS version 22 for analysis. Multivariate logistic regression models were fitted to identify factors associated with utilization of postabortion family planning. Adjusted odds ratios (AORs) along with 95% CIs were estimated to measure the strength of the associations, and statistical significance was declared at a p-value < 0.05.
Results: More than three-fourths (77.8%; 95% CI: 73.4-81.6%) of respondents utilized postabortion family planning methods. Age 15-24 years [AOR: 5.59, 95% CI: 1.5-19.7], postsecondary education [AOR: 7.6, 95% CI: 2.7-21.2], single marital status [AOR: 11.1, 95% CI: 4.6-26.5], a monthly income of 500-1,000 birr [AOR: 7.9, 95% CI: 3.2-19.4], parity ≥ 5 [AOR: 0.41, 95% CI: 0.18-0.92], desire for the current pregnancy [AOR: 4.2, 95% CI: 1.9-9.3], and having ever used family planning [AOR: 4.4, 95% CI: 2.2-8.9] were the major factors significantly associated with postabortion family planning utilization.
Conclusion: In this study, more than three-fourths of respondents utilized postabortion family planning. Most of the factors associated with postabortion family planning were modifiable. Therefore, policymakers and health planners need to integrate family planning into comprehensive abortion care to improve the utilization of postabortion family planning.
abstract_id: PUBMED:31585616
Postabortion and Postpartum Intrauterine Device Provision for Adolescents and Young Adults. Adolescents are at high risk for unintended pregnancy and rapid repeat pregnancy, both of which can be associated with negative health and social outcomes. Intrauterine device (IUD) use has been shown to decrease unintended pregnancy and rapid repeat pregnancy. Evidence supports IUD insertion postabortion and postpartum as safe and practical for nearly all women, including adolescent and young adult women. Providers of adolescent gynecology can play an important role in decreasing repeat and unintended pregnancy among adolescents by increasing access to IUDs, reducing barriers to care, and providing IUDs immediately postabortion and postpartum.
Answer: Based on the study reported in abstract PUBMED:11429300, minors are not at increased risk for adverse psychological outcomes after abortion compared to those aged 18-21 years. The research involved young women aged 14-21 years seeking counseling for unwanted pregnancies at four clinics. The study found that adolescents under age 18 were less comfortable with their decision but showed no other differences in psychological responses postabortion compared with those aged 18-21 years. Both age groups showed significant improvement in psychological responses postabortion, and postabortion scores did not differ significantly from those of other adolescent samples reported in the literature. The study concluded that there was no evidence that abortion poses a threat to adolescents' psychological well-being, and age 18 years was not a meaningful cutoff point for psychological response to abortion in the sample studied. Factors such as preabortion emotional state and perception of partner pressure were predictive of postabortion response. |
Instruction: Is there an identifiable intact medial wall of the cavernous sinus?
Abstracts:
abstract_id: PUBMED:37416808
Resection of corticotroph microadenomas invading the medial wall of the cavernous sinus in two cases of a primary and recurrent case of Cushing's disease. Emerging evidence from multiple highly specialized groups continues to support a role for resection of the medial wall of the cavernous sinus when it is invaded by functional pituitary adenomas, to offer durable biochemical remission. The authors present two cases of Cushing's disease that underscore the power of this surgical technique in achieving remission in microadenomas that ectopically present in the cavernous sinus or have invaded the medial wall of the sinus. This video demonstrates key steps in the safe removal of the medial wall of the cavernous sinus and successful resection of tumor burden in the cavernous sinus for sustained postoperative remission. The video can be found here: https://stream.cadmore.media/r10.3171/2023.4.FOCVID2323.
abstract_id: PUBMED:30192192
The medial wall of the cavernous sinus. Part 1: Surgical anatomy, ligaments, and surgical technique for its mobilization and/or resection. Objective: The medial wall of the cavernous sinus (CS) is often invaded by pituitary adenomas. Surgical mobilization and/or removal of the medial wall remains a challenge.
Methods: Endoscopic endonasal dissection was performed in 20 human cadaver heads. The configuration of the medial wall, its relationship to the internal carotid artery (ICA), and the ligamentous connections in between them were investigated in 40 CSs.
Results: The medial wall of the CS was confirmed to be an intact single layer of dura that is distinct from the capsule of the pituitary gland and the periosteal layer that forms the anterior wall of the CS. In 32.5% of hemispheres, the medial wall was indented by and/or well adhered to the cavernous ICA. The authors identified multiple ligamentous fibers that anchored the medial wall to other walls of the CS and/or to specific ICA segments. These parasellar ligaments were classified into 4 groups: 1) caroticoclinoid ligament, spanning from the medial wall and the middle clinoid toward the clinoid ICA segment and anterior clinoid process; 2) superior parasellar ligament, connecting the medial wall to the horizontal cavernous ICA and/or lateral wall of the CS; 3) inferior parasellar ligament, bridging the medial wall to the anterior wall of the CS or anterior surface of the short vertical segment of the cavernous ICA; and 4) posterior parasellar ligament, which anchors the medial wall to the short vertical segment of the cavernous ICA and/or the posterior carotid sulcus. The caroticoclinoid ligament and inferior parasellar ligament were present in most CSs (97.7% and 95%, respectively), while the superior and posterior parasellar ligaments were identified in approximately half of the CSs (57.5% and 45%, respectively). The caroticoclinoid ligament was the strongest and largest ligament, and it was typically assembled as a group of ligaments with a fan-like arrangement. The inferior parasellar ligament was the first to be encountered after opening the anterior wall of the CS during an interdural transcavernous approach.
Conclusions: The authors introduce a classification of the parasellar ligaments and their role in anchoring the medial wall of the CS. These ligaments should be identified and transected to safely mobilize the medial wall away from the cavernous ICA during a transcavernous approach and for safe and complete resection of adenomas that selectively invade the medial wall.
abstract_id: PUBMED:35982343
The medial wall and medial compartment of the cavernous sinus: an anatomic study using plastinated histological sections. The medial wall of the cavernous sinus (CS) has a significant role in the evaluation and treatment of pituitary adenomas. This study was conducted to clarify the fine architecture of the medial wall and medial compartment of the CS at both macro- and micro-levels in twenty-one human cadaveric heads by using the epoxy sheet plastination technique. The sellar part of the medial wall is an intact dural layer that separates the CS from the pituitary gland. This dural wall adhered to the diaphragma sellae and the periosteum of the sella turcica to form fibrous triangles. Eight micro-protrusions of the pituitary gland were found at both sides of that wall. The sellar part of the medial wall was significantly thinner at its central portion than at the surrounding portions. From the superior view, tortuous intracavernous carotid arteries could be divided into an outward-bending type and an inward-bending type. The inward-bending intracavernous carotid tended to bend towards the central part of the sellar medial wall, where there were usually wide, short fibrous bands with more densely stained connective tissue between them. The micro-protrusions of the pituitary gland in the medial wall of the CS could provide an anatomical basis for occult tumor invasion and recurrence of residual tumor. The different lateral bending orientations of tortuous intracavernous carotid arteries may be one factor determining the direction of growth of pituitary tumors.
abstract_id: PUBMED:38468565
Anatomy of the medial wall of the cavernous sinus: A systematic review of the literature. The existence, composition, and continuity of the medial wall of the cavernous sinus (MWCS) have been extensively studied and debated. However, the precise nature of this membrane remains unknown. Understanding the anatomical characteristics of the MWCS is crucial, notably in relation to pituitary adenomas, which often invade the cavernous sinus. Indeed, surgical treatment of those tumors is frequently incomplete because of such invasion. The anatomical and molecular basis of the peculiar and often lateralized tropism of adenomatous cells to the cavernous sinus is not yet understood and it has been suggested repeatedly that the MWCS is physiologically frail. During the past three decades, there have been several conflicting accounts of the existence, composition, and continuity of this medial wall, but methodological differences and varying definitions could have contributed to the current lack of consensus regarding it. The aim of this systematic review was to summarize previously published data concerning the existence, anatomy, composition, and continuity of the MWCS.
abstract_id: PUBMED:36291288
Endoscopic Endonasal Resection of the Medial Wall of the Cavernous Sinus and Its Impact on Outcomes of Pituitary Surgery: A Systematic Review and Meta-Analysis. Introduction. Pituitary adenomas have the potential to infiltrate the dura mater, skull, and the venous sinuses. Tumor extension into the cavernous sinus is often observed in pituitary adenomas and techniques and results of surgery in this region are vastly discussed in the literature. Infiltration of parasellar dura and its impact for pituitary surgery outcomes is significantly less studied but recent studies have suggested a role of endoscopic resection of the medial wall of the cavernous sinus, in selected cases. In this study, we discuss the techniques and outcomes of recently proposed techniques for selective resection of the medial wall of the cavernous sinus in endoscopic pituitary surgery. Methods. We performed a systematic review of the literature using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and protocol and a total of 4 studies with 106 patients that underwent an endoscopic approach for resection of pituitary tumors with resection of medial wall from cavernous sinus were included. Clinical and radiological data were extracted (sex, mean age, Knosp, prior surgery, tumor size and type, complication rate, and remission) and a meta-analysis using the RevMan 5.4 software was performed. Results. A total of 5 studies with 208 patients were included in this analysis. The mean age of the study population was 48.87 years (range 25−82) with a female/male ratio of 1:1.36. Majority of the patients had Knosp Grade 1 (n = 77, 37.02%) and Grade 2 (n = 53, 25.48%). The complication rate was 4.81% (n = 33/106) and the most common complication observed was a new transient CN dysfunction and diplopia. Early disease remission was observed in 94.69% of the patients (n = 196/207). The prevalence rate of CS medial wall invasion varied from 10.4 % up to 36.7%. This invasion rate increased in frequency with higher Knosp Grade. The forest plot of persistent disease vs. remission in this surgery approach showed a p < 0.00001 and heterogeneity (I^2 = 0%). Discussion. Techniques to achieve resection of the medial wall of the cavernous sinus via the endoscopic endonasal approach include the “anterior to posterior” technique (opening of the anterior wall of the cavernous sinus) and the “medial to lateral” technique (opening of the inferior intercavernous sinus and). Although potentially related with improved endocrinological outcomes, these are advanced surgical techniques and require extensive anatomical knowledge and extensive surgical experience. Furthermore, to avoid procedure complications, extensive study of the patient’s configuration of cavernous ICA, Doppler-guided intraoperative imaging, surgical navigation system, and blunt tip knives to dissect the ICA’s plane are recommended. Conclusion. Endoscopic resection of the medial wall of the cavernous sinus has been associated with reports of high rates of postoperative hormonal control in functioning pituitary adenomas. However, it represents a more complex approach and requires advanced experience in endoscopic skull base surgery. Additional studies addressing case selection and studies evaluating long term results of this technique are still necessary.
abstract_id: PUBMED:37416807
Endoscopic ultrasound guided resection of a Cushing's adenoma invading the medial cavernous sinus wall using the "interdural peeling" technique. Cushing's adenoma invading the cavernous sinus requires aggressive resection to be cured. MRI is frequently inconclusive for identifying microadenomas, and visualizing the involvement of the medial cavernous sinus is even more challenging. In this video, the authors present a patient with an adrenocorticotropic hormone (ACTH)-producing microadenoma with doubtful left medial cavernous sinus involvement on MRI. She underwent an endoscopic endonasal exploration of the medial compartment of the cavernous sinus. The abnormally thickened wall, confirmed by intraoperative endoscopic endonasal ultrasound, was safely excised using the "interdural peeling" technique. Complete resection of the tumor resulted in normalization of her postoperative cortisol levels and disease remission with no complications. The video can be found here: https://stream.cadmore.media/r10.3171/2023.4.FOCVID22150.
abstract_id: PUBMED:23361322
Is there an identifiable intact medial wall of the cavernous sinus? Macro- and microscopic anatomical study using sheet plastination. Background: The medial wall of the cavernous sinus is believed to play a significant role in determining the direction of growth of pituitary adenomas and in planning pituitary surgery. However, it remains unclear whether there is a dural wall between the pituitary gland and the cavernous sinus.
Objective: To identify and trace the membranelike structures medial to the cavernous sinus and around the pituitary gland and their relationships with surrounding structures.
Methods: Sixteen cadavers (7 females and 9 males; age range, 54-89 years; mean age, 77 years) were used in this study and prepared as 16 sets of transverse (5 sets), coronal (2 sets), and sagittal (9 sets) plastinated sections that were examined at both macro- and microscopic levels.
Results: The pituitary gland was fully enclosed in a fibrous capsule, but the components and thickness of the capsule varied on different aspects of the gland. The meningeal dural layer was sandwiched between the anterosuperior aspect of the gland capsule and the cavernous sinus. Posteroinferiorly, however, this dural layer disappeared as it fused with the capsule. A weblike loose fibrous network connected the capsule, carotid artery, venous plexus, and the dura of the middle cranial fossa.
Conclusion: The medial wall of the cavernous sinus consists of both the meningeal dura and weblike loose fibrous network, which are located at the anterosuperior and posteroinferior aspects, respectively.
abstract_id: PUBMED:37382779
Efficacy and safety of cavernous sinus medial wall resection in pituitary adenoma surgery: a systematic review and a single-arm meta-analysis. Introduction: Pituitary adenomas, benign tumors, can lower quality of life. Pituitary adenomas that invade the medial wall and cavernous sinus (CS) indicate tumor recurrence and partial surgical excision. Despite the cavernous sinus's complexity and risks, new research has improved the surgical procedure and made excision safer. This comprehensive review and single-arm meta-analysis evaluates endocrinological remission and resection rates in pituitary adenomas to determine the benefits and risks of MWCS resection.
Methods: Databases were systematically searched for studies documenting the resection of the medial wall of the cavernous sinus. The primary outcome was endocrinological remission in patients who underwent resection of the MWCS.
Results: Eight studies were included in the final analysis. The pooled proportion of endocrinological remission (ER) was 63.3%. The excision of MWCS pooled a gross total resection (GTR) proportion of 72.9%. Finally, ICA injury attained a pooled ratio of 0.5%, indicating minimal morbidity in the procedure.
Conclusion: The cavernous sinus was ruled out, proving the MWCS excision is safe. Limiting population selection to Knosp 3A or lower enhanced GTR frequencies and lowered recurrence, according to subgroup analyses. This meta-analysis shows that MWCS resection can be a beneficial treatment option for pituitary tumors, when there is no macroscopic medial wall invasion and careful patient selection is done, especially for GH- and ACTH-producing tumors that can cause life-threatening metabolic changes.
abstract_id: PUBMED:23886874
Magnetic resonance imaging appearance of the medial wall of the cavernous sinus for the assessment of cavernous sinus invasion by pituitary adenomas. Purpose: The diagnostic criteria for cavernous sinus invasion (CSI) by pituitary adenomas are still unsatisfactory and controversial. For this reason, the study examined the appearance of the medial wall of the cavernous sinus (MWCS) on proton-density-weighted (PDW) magnetic resonance imaging (MRI) to determine its value for preoperative assessment of CSI.
Methods: A 3.0-Tesla MRI scanner was used to obtain preoperative PDW images and conventional MRI sequences of 48 consecutive pituitary adenomas, and the MWCS was examined in PDW images to determine the presence of CSI in comparison to surgical findings and three traditional MRI criteria: Knosp grading system (KGS); percentage of encasement of the internal carotid artery (PEICA); and replacement of cavernous sinus compartments (RCSC) by tumors. The value of the MWCS as seen on MRI was compared with that of the Ki-67 labelling index (Ki-67 LI).
Results: CSI images showed that continuity of the MWCS was interrupted and that tumor tissue had infiltrated the cavernous sinus (CS) compartments through the defects. In 96 CSs from 48 patients, the sensitivity of MRI visualization of the MWCS for detection of CSI was 93.3% with a specificity of 93.8%, which was significantly higher than with KGS, PEICA and RCSC (P=0.007, P=0.008 and P=0.056, respectively). Histopathological results showed no significant differences between MRI visualization of the MWCS and the Ki-67 LI.
Conclusion: PDW imaging permits adequate visualization of the MWCS and is superior to traditional diagnostic criteria for the detection of CSI, providing accurate preoperative images for intraoperative navigation.
abstract_id: PUBMED:16015624
Medial portion of the cavernous sinus: quantitative analysis of the medial wall. Pituitary tumors invade the cavernous sinus via the medial wall. Researchers have speculated that this wall is composed of dura and that substances secreted by tumors might damage this barrier. In contrast to the lateral wall, little is known about the structure of the medial wall of the cavernous sinus (MWCS). This study provides the first detailed quantitative (thickness) and qualitative (histological) assessment of the MWCS. Eighteen sellar-parasellar tissue blocks were obtained from adult human autopsies. Ten specimens were used for microsurgical dissection and macroscopic anatomical description. Eight specimens were used for histopathological study and for recording computer measurements of MWCS thickness. Each of these eight specimens was divided into three approximately equal-sized pieces, with cuts made in the coronal plane from posterior to anterior starting at the anterior level of the pituitary stalk. Wall thicknesses were compared in the three different regions (posterior, middle, anterior), and also on the left vs. the right sides. The investigations showed that the MWCS is a distinct dural layer that forms a barrier between the medial venous space of the cavernous sinus and the pituitary gland. The mean thickness of the 48 total (left and right) MWCS observed in the 24 sections examined was 0.195 +/- 0.066 mm (range = 0.080-0.387 mm). This wall is composed of loosely arranged collagen fibers that comprise a specific layer known as "meningeal dura." The posterior third of the MWCS was significantly thinner than the middle third (P = 0.0014) or anterior third (P = 0.0001). No macro- or microscopic defects were observed in any of the MWCS in the 18 specimens. The thinness of the posterior MWCS suggests that this is the most likely path for extension of pituitary tumors into the cavernous sinus.
Answer: Yes, there is an identifiable intact medial wall of the cavernous sinus (MWCS). The existence, composition, and continuity of the MWCS have been subjects of extensive study and debate, with various studies confirming its presence and describing its anatomical characteristics (PUBMED:38468565). The MWCS is an intact single layer of dura that is distinct from the capsule of the pituitary gland and the periosteal layer that forms the anterior wall of the cavernous sinus (PUBMED:30192192). It plays a significant role in the evaluation and treatment of pituitary adenomas, as these tumors often invade the cavernous sinus, and the medial wall can be a pathway for such invasion (PUBMED:23361322).
The medial wall consists of both the meningeal dura and a weblike loose fibrous network, which are located at the anterosuperior and posteroinferior aspects, respectively (PUBMED:23361322). The sellar part of the medial wall is an intact dural layer that separates the cavernous sinus from the pituitary gland (PUBMED:35982343). The thickness of the MWCS varies, with the posterior third being significantly thinner than the middle or anterior thirds, suggesting that the posterior MWCS is the most likely path for the extension of pituitary tumors into the cavernous sinus (PUBMED:16015624).
Moreover, the medial wall adheres to the diaphragma sellae and the periosteum of the sella turcica to form fibrous triangles, and micro-protrusions of the pituitary gland have been found at both sides of that wall (PUBMED:35982343). The configuration of the medial wall and its relationship to the internal carotid artery (ICA) have been investigated, with multiple ligamentous fibers identified that anchor the medial wall to other walls of the cavernous sinus and/or to specific ICA segments (PUBMED:30192192).
In summary, the MWCS is an identifiable intact structure that is crucial for understanding the growth patterns of pituitary adenomas and planning surgical approaches for their resection. |
Instruction: Can ultrasound biomicroscopy be used to predict accommodation accurately?
Abstracts:
abstract_id: PUBMED:25884582
Can ultrasound biomicroscopy be used to predict accommodation accurately? Purpose: Clinical accommodation testing involves measuring either accommodative optical changes or accommodative biometric changes. Quantifying both optical and biometric changes during accommodation might be helpful in the design and evaluation of accommodation restoration concepts. This study aims to establish the accuracy of ultrasound biomicroscopy (UBM) in predicting the accommodative optical response (AOR) from biometric changes.
Methods: Static AOR from 0 to 6 diopters (D) stimuli in 1-D steps were measured with infrared photorefraction and a Grand Seiko autorefractor (WR-5100 K; Shigiya Machinery Works Ltd., Hiroshima, Japan) in 26 human subjects aged 21 to 36 years. Objective measurements of accommodative biometric changes to the same stimulus demands were measured from UBM (Vu-MAX; Sonomed Escalon, Lake Success, NY) images in the same group of subjects. AOR was predicted from biometry using linear regressions, 95% confidence intervals, and 95% prediction intervals.
Results: Bland-Altman analysis showed 0.52 D greater AOR with photorefraction than with the Grand Seiko autorefractor. Per-diopter changes in accommodative biometry were: anterior chamber depth (ACD): -0.055 mm/D, lens thickness (LT): +0.076 mm/D, anterior lens radii of curvature (ALRC): -0.854 mm/D, posterior lens radii of curvature (PLRC): -0.222 mm/D, and anterior segment length (ASL): +0.030 mm/D. The standard deviation of AOR predicted from linear regressions for various biometry parameters were: ACD: 0.24 D, LT: 0.30 D, ALRC: 0.24 D, PLRC: 0.43 D, ASL: 0.50 D.
Conclusions: UBM measured parameters can, on average, predict AOR with a standard deviation of 0.50 D or less using linear regression. UBM is a useful and accurate objective technique for measuring accommodation in young phakic eyes.
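For illustration only, and not part of the study above: a minimal Python sketch of how a reported per-diopter biometric slope could be inverted to predict the accommodative optical response (AOR) from a UBM-measured change in anterior chamber depth. The slope comes from the abstract; the zero intercept and the example measurement are assumptions, and the study itself fit full linear regressions per parameter.

```python
# Minimal sketch (assumptions noted): predict AOR from a UBM-measured change in
# anterior chamber depth (ACD), using the per-diopter slope reported above
# (-0.055 mm of ACD change per diopter). The zero intercept and the example
# measurement are illustrative, not values from the study.

ACD_SLOPE_MM_PER_D = -0.055  # mm of anterior chamber shallowing per diopter of accommodation

def predict_aor_from_acd(delta_acd_mm: float) -> float:
    """Estimate AOR (diopters) from the change in ACD (mm; negative = shallower chamber)."""
    return delta_acd_mm / ACD_SLOPE_MM_PER_D

# Hypothetical measurement: the anterior chamber shallows by 0.22 mm during a near task.
print(round(predict_aor_from_acd(-0.22), 2))  # -> 4.0 D of predicted accommodation
```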
abstract_id: PUBMED:35922842
Ultrasound biomicroscopy study of accommodative state in Smartphone abusers. Background: Addiction to Smartphone usage has psychological and physical impacts. However, the state of spasm of accommodation is unclear in Smartphone abusers.
Methods: We performed a cross-sectional study among adults aged 18-35 years between October 2016 and December 2018. Forty participants were Smartphone abusers according to the Smartphone addiction questionnaire, and 40 participants were non users. We measured auto refraction precycloplegia and postcycloplegia at far for all participants to evaluate the state of spasm of accommodation. We assessed the ultrasound biomicroscopy (UBM) parameters including anterior chamber angle (ACA).
Results: There was a significant difference in the odds of having spasm of accommodation between Smartphone abusers and non-users (OR = 6.64, 95% CI = 1.73-25.47; adjusted OR = 14.63, 95% CI = 2.99-71.62). The Smartphone abusers and non-users groups had a superior ACA median of 30.45° ± 8.3° vs. 26.75° ± 6.6° (P = 0.04) precycloplegia at far and 31.70° ± 11.8° vs. 31.45° ± 8.3° (P = 0.15) postcycloplegia at far, respectively, demonstrated by the Mann-Whitney U test. The precycloplegic nasal ACA at far was significantly higher in the Smartphone abusers group than in the non-users group (mean precycloplegic nasal ACA difference = 3.57°, 95% CI = 0.76° - 6.37°), demonstrated by the independent t test. Similarly, the postcycloplegic nasal ACA at far was significantly higher in the Smartphone abusers group (mean postcycloplegic nasal ACA difference = 4.26°, 95% CI = 1.33° - 7.19°).
Conclusions: Smartphone abusers are in a condition of accommodation spasm. As a result, cycloplegic refraction should be done for Smartphone abusers.
abstract_id: PUBMED:27990069
Overview of Ultrasound Biomicroscopy. Ultrasound biomicroscopy (UBM) is a high-resolution ultrasound technique that allows noninvasive in vivo imaging of structural details of the anterior ocular segment at near light microscopic resolution and provides detailed assessment of anterior segment structures, including those obscured by normal anatomic and pathologic relations. This review gives an overview regarding the instrument, technique and its applications.
abstract_id: PUBMED:12358310
Ultrasound biomicroscopy of the anterior segment of the enucleated chicken eye during accommodation. Ultrasound biomicroscopy produces real-time two-dimensional images of ocular structures measured non-invasively. Given recent work which shows that lenses from myopic eyes show shorter focal lengths and reduced accommodative amplitudes compared with controls, this study was undertaken to determine the structural characteristics of the anterior segment of chicken eyes during accommodation using the ultrasound biomicroscope (UBM). Form-deprivation myopia and hyperopia were induced in hatching chicks by the application of either translucent or +15 D defocus goggles. After 7 days, eyes were enucleated and ultrasound biomicrographs of the eye, at rest and during ciliary nerve-stimulated accommodation, were collected. For all eyes, accommodation was associated with a decrease in anterior chamber depth, an increase in lenticular thickness and a steepening of the front lenticular surface curvature. Changes related to refractive error were more difficult to detect. Myopic eyes showed deeper anterior chamber depths and differences in lenticular thicknesses just above the resolution limit for detection. In +15 D lens-treated eyes, anterior chamber differences were opposite but smaller, just at the limit of resolution, while differences in mean lenticular thickness were not resolvable at a pixel or above. The UBM is a good tool for measuring robust changes during accommodation, but is limited in its ability to detect subtle changes associated with experimentally induced ametropias.
abstract_id: PUBMED:10333101
In vivo imaging of the human zonular apparatus with high-resolution ultrasound biomicroscopy. Background: To investigate the potential of high-resolution ultrasound biomicroscopy (UBM) for studying the zonular apparatus of human beings in vivo.
Methods: Using transducer frequencies of 34 MHz and 50 MHz, criteria were developed to identify transcorneal and transscleral sections that allowed reproducible identification of the different fiber groups of the zonular architecture. For that purpose, 10 volunteers between the ages of 14 and 41 years underwent high-resolution ultrasound biomicroscopy under conditions of consensual far- and near-accommodation. The online video recordings of the respective UBM investigations were afterwards analyzed image by image. Good visibility of zonular fibers was obtained when the ultrasound wave propagation comprised an angle close to 90 degrees with the fiber orientation and when the oscillations of the UBM scan had a strict radial orientation towards the limbus and avoided, simultaneously, the ciliary processes.
Results: In all the volunteers, high-resolution ultrasound biomicroscopy imaged the zonular fiber groups known from histology. In addition, it detected fibers that do not follow the course of the inner ciliary body surface but take a direct route from the ora serrata to the lens. It also demonstrated fibers that seem to change direction at crossings with other fibers. Under conditions of near-accommodation, the zonular fibers showed signs of relaxation.
Conclusions: High-resolution ultrasound biomicroscopy seems well suited for in vivo investigations of the zonular apparatus and of accommodation in man. The results support the fundamental features of the Helmholtz theory on accommodation.
abstract_id: PUBMED:25542946
Noninvasive imaging of aortic atherosclerosis by ultrasound biomicroscopy in a mouse model. Objectives: The noninvasive and accurate evaluation of vessel characteristics in mouse models has become an intensive focus of vascular medicine. This study aimed to apply ultrasound biomicroscopy to evaluate aortic atherosclerotic progression in a low-density lipoprotein receptor (LDL-R) knockout mouse model of atherosclerosis.
Methods: Ten male LDL-R(-/-)C57BL/6 mice aged 16 and 24 weeks and 8 male wild-type C57BL/6 mice aged 16 and 24 weeks were used as experimental and control groups, respectively. Ultrasound biomicroscopy was applied to detect the morphologic characteristics of the aortic root, ascending aorta, aortic arch, and carotid artery and to measure the aortic root intima-media thickness and carotid artery bifurcation.
Results: Ultrasound biomicroscopy showed a significant increase in the aortic root intima-media thickness from 0.10 ± 0.03 mm in 16-week-old mice to 0.16 ± 0.04 mm in 24-week-old mice (P < .01). The ultrasound biomicroscopically measured intima-media thickness was highly correlated with the histologic measurement (r = 0.81).
Conclusions: Ultrasound biomicroscopy could be used for a noninvasive, accurate, and dynamic analysis of aortic atherosclerosis in LDL-R knockout mice.
abstract_id: PUBMED:31448182
Automatic Classification of Anterior Chamber Angle Using Ultrasound Biomicroscopy and Deep Learning. Purpose: To develop a software package for automated classification of anterior chamber angle of the eye by using ultrasound biomicroscopy.
Methods: Ultrasound biomicroscopy images were collected, and the trabecular-iris angle was manually measured and classified into three categories: open angle, narrow angle, and angle closure. Inception v3 was used as the classifying convolutional neural network and the algorithm was trained.
Results: With a recall rate of 97% in the test set, the neural network's classification accuracy can reach 97.2% and the overall area under the curve was 0.988. The sensitivity and specificity were 98.04% and 99.09% for the open angle, 96.30% and 98.13% for the narrow angle, and 98.21% and 99.05% for the angle closure categories, respectively.
Conclusions: Preliminary results show that an automated classification of the anterior chamber angle achieved satisfying sensitivity and specificity and could be helpful in clinical practice.
Translational Relevance: The present work suggests that the algorithm described here could be useful in the categorizing of anterior chamber angle and screening for subjects who are at high risk of angle closure.
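As a hedged illustration of the kind of pipeline the abstract above describes (Inception v3 fine-tuned to label UBM images as open angle, narrow angle, or angle closure), the Python/Keras sketch below shows one plausible transfer-learning setup. The directory layout, preprocessing, and training settings are assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): fine-tuning Inception v3 to classify
# UBM images of the anterior chamber angle into three classes. Folder layout,
# image size, and training settings are assumptions.
import tensorflow as tf

IMG_SIZE = (299, 299)   # Inception v3's native input resolution
NUM_CLASSES = 3         # open angle, narrow angle, angle closure

# Assumed layout: ubm_images/train/<class_name>/*.png and ubm_images/val/<class_name>/*.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "ubm_images/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "ubm_images/val", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False  # train only the new classification head first

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.inception_v3.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```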
abstract_id: PUBMED:38249646
Imaging of Anterior Segment Tumours: A Comparison of Ultrasound Biomicroscopy Versus Anterior Segment Optical Coherence Tomography. Anterior segment tumours of the eye are relatively rare but can pose significant morbidity and mortality. We conducted a literature review to compare the performance of ultrasound biomicroscopy to anterior segment optical coherence tomography in the imaging of these tumours. A total of seven studies were included accounting for a cumulative 1,114 eyes. Ultrasound biomicroscopy has traditionally formed, and remains, the mainstay of tumour imaging due to its ability to penetrate pigmented lesions and delineate the posterior border of tumours, and the current evidence supports this.
abstract_id: PUBMED:34228948
Biometric assessment of pseudophakic subjects during objective accommodative stimulation: a prospective observational study. Clinical Relevance: Ultrasound biomicroscopy is an objective method for assessing changes in anterior segment biometry. There is a paucity of data on the reliability of this method. A reliable method for assessing anterior segment changes during physiologically driven accommodation can be a useful tool for clinicians, researchers, and industry.
Background: To assess the test-retest reliability of ultrasound biomicroscopy for measurements of change in anterior chamber depth during a distance to near fixation task in pseudophakic subjects.
Methods: Subjects were adults with monofocal intraocular lenses implanted in both eyes who completed a 6-month post-operative period and had monocular uncorrected distance visual acuity of 6/15 (0.4 logMAR) or better. The change in anterior chamber depth during a distance to near fixation task was measured with a 35-MHz VuMAX HD ultrasound biomicroscopy device (Sonomed Escalon, New Hyde Park, NY) during two separate visits. An asymmetrical vergence paradigm allowed evaluation of anterior segment biometry at 22-µm axial resolution in one eye, while the fellow eye fixated on the target. To assess the test-retest reliability, 2-sided 95% CI from a paired t test was calculated for the difference in anterior chamber depth change from distance to near between visits.
Results: The mean (standard deviation) near-focused anterior chamber depth measured by ultrasound biomicroscopy was 4.331 (0.237) and 4.333 (0.241) mm at visits 1 and 2, respectively. In response to a change in fixation from distance (4 m) to near (40 cm), the mean anterior chamber depth change was -0.012 (0.038) and 0.003 (0.039) mm at visits 1 and 2, respectively. Analysis of the difference in the change in anterior chamber depth between visits was -0.015 mm (95% CI, -0.035 to 0.003).
Conclusion: Ultrasound biomicroscopy is a repeatable, objective method for assessing change in anterior segment biometry during physiological changes in fixation from distance to near.
abstract_id: PUBMED:24613414
Ultrasound biomicroscopy measurement of skin thickness change induced by cosmetic treatment with ultrasound stimulation. Moisturizing creams and lotions are commonly used in daily life for beauty and treatment of different skin conditions such as dryness and wrinkling, and ultrasound stimulation has been used to enhance the delivery of ingredients into skin. However, there is a lack of convenient methods to study the effect of ultrasound stimulation on lotion absorption by skin in vivo. Ultrasound biomicroscopy was adopted as a viable tool in this study to investigate the effectiveness of ultrasound stimulation on the enhancement of lotion delivery into skin. The forearm skin of 10 male and 10 female young subjects was tested at three different sites, including two lotion treatment sites with (Ultrasound Equipment - UE ON) and without (UE OFF) ultrasound stimulation and a control site without any lotion treatment. 1 MHz ultrasound with a duty cycle of 1.7%, a spatial peak temporal peak pressure of 195 kPa and an average power of 0.43 W was used for the stimulation. The skin thickness before, immediately after (0 min), and 15 and 30 min after the treatment was measured by an ultrasound biomicroscopic system (55 MHz). It was found that the skin thickness significantly increased immediately after the lotion treatment for both UE ON (from 1.379 ± 0.187 mm to 1.466 ± 0.182 mm, p<0.001) and UE OFF (from 1.396 ± 0.193 mm to 1.430 ± 0.194 mm, p<0.001) groups. Further comparison between the two groups revealed that the skin thickness increase of UE ON group was significantly larger than that of UE OFF group (6.5 ± 2.4% vs. 2.5 ± 1.3%, p<0.001). Furthermore, it was disclosed that the enhancement of lotion delivery by ultrasound stimulation was more effective for the female subjects than the male subjects (7.6 ± 2.3% vs. 5.4 ± 2.0% immediately after treatment, p=0.017). In conclusion, this study demonstrated that ultrasound biomicroscopy was a feasible method for studying the effectiveness of lotion treatment in vivo, and ultrasound stimulation was effective to enhance the rate of lotion absorption into skin.
Answer: Yes, ultrasound biomicroscopy (UBM) can be used to predict accommodation accurately. A study aimed to establish the accuracy of UBM in predicting the accommodative optical response (AOR) from biometric changes found that UBM-measured parameters can predict AOR with a standard deviation of 0.50 D or less using linear regression, indicating that UBM is a useful and accurate objective technique for measuring accommodation in young phakic eyes (PUBMED:25884582). |
Instruction: Do preoperative eating behaviors influence weight loss after biliopancreatic diversion?
Abstracts:
abstract_id: PUBMED:23546651
Do preoperative eating behaviors influence weight loss after biliopancreatic diversion? Background: The presence of disturbed eating patterns can affect the short- and long-term outcomes after bariatric surgery. Data about the influence of preoperative eating patterns on outcomes after biliopancreatic diversion (BPD) are lacking. The aim of the present study was to assess the role of preoperative eating behavior in patients' selection for biliopancreatic diversion.
Methods: Sixty-one consecutive patients who underwent BPD were evaluated for the present study. For each patient, the following preoperative eating patterns were evaluated: sweet eating, snacking, hyperphagia, and gorging. The primary outcome measure was the percentage of excess weight loss (%EWL) at 3, 6, and 12 months in the groups of patients with different eating patterns at the preoperative evaluation.
Results: At the preoperative evaluation, snacking was found in 31 patients (50.8 %), sweet eating in 15 patients (24.6 %), hyperphagia in 48 patients (78.7 %), and gorging in 45 patients (73.8 %). For each eating behavior, there was no significant difference in mean preoperative BMI and weight loss at 3, 6, and 12 months between the group of patients with and the group of patients without the eating pattern considered. At the analysis of variance in the four groups of patients presenting the eating patterns considered, there was no difference in mean preoperative BMI (P = 0.66), %EWL at 3 months (P = 0.62), %EWL at 6 months (P = 0.94), and %EWL at 12 months (P = 0.95).
Conclusions: Preoperative eating behaviors do not represent reliable outcome predictors for BPD, and they should not be used as a selection criterion for patients who are candidates to this operation.
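As an aside for readers unfamiliar with the outcome measure: %EWL in the abstract above is conventionally computed against the patient's ideal body weight. A minimal Python sketch under that standard definition follows; the ideal-weight source and the example figures are assumptions, not data from the study.

```python
# Minimal sketch of the conventional %EWL calculation used as the outcome measure above.
# Ideal body weight would typically come from a BMI-based reference or standard tables;
# the figures below are hypothetical.

def percent_ewl(preop_weight_kg: float, current_weight_kg: float, ideal_weight_kg: float) -> float:
    """Percentage of preoperative excess weight that has been lost."""
    excess_before = preop_weight_kg - ideal_weight_kg
    weight_lost = preop_weight_kg - current_weight_kg
    return 100.0 * weight_lost / excess_before

# Hypothetical patient: 130 kg before BPD, 95 kg at 12 months, ideal weight 70 kg.
print(round(percent_ewl(130, 95, 70), 1))  # -> 58.3 %EWL
```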
abstract_id: PUBMED:10102218
Biliopancreatic diversion (duodenal switch procedure). The physiological principle underlying biliopancreatic diversion (BPD) is attractive. It decreases food absorption and particularly that of fat. It preserves normal eating habits and is compatible with a good quality of life. Because weight loss is not a function of an imposed aversion to eating, it is more appealing to patients. Data are accumulating showing that BPD can permanently cure morbid obesity in a majority of patients and is remarkably well tolerated. While long-term systemic side-effects from decreased absorption continue to raise concerns, available results have already shown that, within 20 years, metabolic disturbances are well tolerated while weight loss and quality of life are maintained. Vitamin and mineral replacement therapy and periodic monitoring are essential. The original procedure described by Scopinaro with subsequent modifications will be presented, focusing on the duodeno-ileal switch procedure.
abstract_id: PUBMED:34736423
Late complications of biliopancreatic diversion in an older patient: a case report. Background: In the mid-seventies, biliopancreatic diversion became popular as weight-loss surgery procedure. This bariatric procedure combines distal gastric resection and intestinal malabsorption, leading to greater weight loss and improvement of co-morbidities than other bariatric procedures. Nowadays, biliopancreatic diversion has become obsolete due to the high risk of nutritional complications. However, current patients with biliopancreatic diversions are aging. Consequently, geriatricians and general practitioners will encounter them more often and will be faced with the consequences of late complications.
Case Presentation: A 74-year-old female presented with weakness, recurrent falls, confusion, episodes of irresponsiveness, anorexia and weight loss. Her medical history included osteoporosis, herpes encephalitis 8 years prior and a biliopancreatic diversion (Scopinaro surgery) at age 52. Cerebral imaging showed herpes sequelae without major atrophy. Delirium was diagnosed with underlying nutritional deficiencies. Biochemical screening indicated vitamin A deficiency, vitamin E deficiency, zinc deficiency and severe hypoalbuminemia, while the thiamine level and fasting blood glucose were normal. However, postprandial hyperinsulinemic hypoglycemia was observed with concomitant signs of confusion and blurred consciousness. After initiating parenteral nutrition with additional micronutrient supplementation, a marked improvement was observed in cognitive and physical functioning.
Conclusions: Long-term effects of biliopancreatic diversion remain relatively underreported in older patients. However, the anatomical and physiological changes of the gastrointestinal tract can contribute to the development of metabolic and nutritional complications that may culminate in cognitive impairment, functional decline and delirium. Therefore, it is warranted to evaluate the presence of metabolic disturbances and nutritional complications in older patients after biliopancreatic diversion.
abstract_id: PUBMED:15085390
Biliopancreatic diversion in the surgical treatment of morbid obesity. Biliopancreatic diversion is a malabsorptive technique of bariatric surgery that has gained wide acceptance in the Western world. It is performed in one of two ways: In its classic form it consists of partial gastrectomy with a Roux-en-Y gastroenterostomy; in its duodenal switch form a vertical sleeve gastrectomy is combined with a duodenoenterostomy. Both techniques realize diversion of biliopancreatic juice, thereby creating a mild form of malabsorption. Weight loss has been approximately 70% of initial excess weight, exceeding that obtained with most other bariatric procedures. Iron, calcium, and vitamin deficiencies may occur, especially with classic biliopancreatic diversion, and must be prevented with adequate supplements during vigorous follow-up. Weight loss is followed by a substantial reduction in the co-morbidities that are present in many morbidly obese patients. Biliopancreatic diversion should be included in each obesity clinic program and be proposed for morbidly obese patients who are having difficulty with the prospect of continuous restraint of food intake or problems due to failed gastric restrictive interventions. The postoperative results in such patients have been good and have substantially improved quality of life and self-esteem in this category of morbidly obese patients.
abstract_id: PUBMED:8339103
Eating behavior following biliopancreatic diversion for obesity: study with a three-factor eating questionnaire. The eating behavior of obese patients and of subjects who had normalized their body weight following biliopancreatic diversion (BPD) was assessed by a three-factor eating questionnaire (TFEQ), constructed to measure cognitive dietary restraint, the tendency to disinhibition, and susceptibility to hunger. In the obese patients higher values of both disinhibition and hunger score were found than in normoweight persons. In BPD subjects a negative association between the time elapsed from the operation and both the disinhibition and hunger score values was observed. In patients operated more than 2 years before, the eating behavior, as assessed by the TFEQ, was similar to that of normoweight persons. After BPD the operated subjects do not have to respect any dietary advice, the loss of weight and the maintenance of a normal body weight occurring in spite of an absolutely free food consumption. Similarity to the control values of disinhibition and hunger score following BPD suggests that in the long term, when the preoccupation with food and diet is abandoned, a normal eating pattern can be achieved.
abstract_id: PUBMED:16025738
Effect of biliopancreatic diversion on hypertension in severely obese patients. Hypertension is a medical disorder frequently associated with severe obesity, and the effect of weight loss on the reduction of blood pressure has been well established. In this study, the relationships between the weight loss surgically obtained by biliopancreatic diversion and blood pressure were investigated in a population of severely obese patients with preoperative hypertension. At 1 year following the operation, blood pressure was normalized in more than half of patients; in a further 10% of cases the hypertensive status resolved within the 3-year follow-up period. The resolution of hypertension was independently associated with age and body weight and was unrelated to sex, the amount of weight loss, or body fat distribution. In severely obese patients with hypertension undergoing bariatric surgery, biliopancreatic diversion is advisable since it achieves and supports the maintenance of body weight close to the ideal value.
abstract_id: PUBMED:31711316
Severe Vitamin A Deficiency After Biliopancreatic Diversion. Biliopancreatic diversion is a surgical procedure that causes weight loss via volume restriction and malabsorption. It is now rarely performed due to the risk of severe nutritional deficiencies including vitamin A. We report a case of severe vitamin A deficiency due to malabsorption from a biliopancreatic diversion procedure for weight loss. By the time the patient presented to our department, she had developed blindness refractory to parenteral vitamin A treatment. A unique feature of her case is the development of a rash with vitamin A injections. This reaction has only been reported in one case series of 3 patients in the published literature. Her case highlights the importance of vitamin deficiency screening in patients after bariatric surgery, and her skin reaction to the injections is a unique side effect that is not frequently observed.
abstract_id: PUBMED:12152155
Laparoscopic biliopancreatic diversion with duodenal switch. The biliopancreatic diversion with duodenal switch combines a sleeve gastrectomy with a duodenoileal switch to achieve maximum weight loss. Consistent excess weight loss between 70% to 80% is achieved with acceptable decreased long-term nutritional complications. With a higher entry weight, the super obese patient (body mass index [BMI] >50 kg/m(2)) benefits the greatest from a procedure that produces a higher mean excess weight loss. The laparoscopic approach to this procedure has successfully created a surgical technique with optimum benefit and minimal morbidity, especially in the super obese patient.
abstract_id: PUBMED:20213290
Dietary habits and body weight at long-term following biliopancreatic diversion. Aim: This study aims to evaluate the role of simple carbohydrates and alcohol intake in determining weight of stabilization at long-term following malabsorptive bariatric surgery.
Material And Methods: Sixty patients at more than 2 years following biliopancreatic diversion (BPD) were submitted to an alimentary interview for evaluating the daily consumption of simple sugar, fruits, ice-cream, sweets, and caloric and alcoholic beverages. Eating behavior was assessed by Three Factors Eating Questionnaire.
Results: The mean estimated daily energy consumption intake was 2,852 kcal, with a mean daily intake of simple carbohydrates of 89 g that represented 12% of the total energy intake. The current body weight was positively and independently related to the preoperative body weight and to simple carbohydrate and alcohol intake and negatively related to physical activity, while no association with total energy consumption and eating behavior was found.
Discussion: These findings confirm that following BPD the body weight is independent of energy intake and eating behavior. Furthermore, these data suggest that in post-BPD subjects the simple carbohydrates and alcohol absorption is fully preserved, and that the operated subjects could regulate their body weight by reducing simple carbohydrates and alcoholic intake and increasing physical activity.
abstract_id: PUBMED:26163177
Does preoperative diabetes mellitus affect weight loss outcome after biliopancreatic diversion with duodenal switch? Background: Preoperative type 2 diabetes mellitus (T2 DM) has previously been reported as an independent predictor for suboptimal (≤40%) weight loss after Roux-en-Y gastric bypass in patients with T2 DM compared with patients who do not have T2 DM. This association has not been shown to apply to patients who undergo biliopancreatic diversion with duodenal switch (BPD/DS). BPD/DS is currently the most effective bariatric operation to treat T2 DM.
Objectives: We designed a study to determine if the reported suboptimal weight loss seen in patients with T2 DM undergoing Roux-en-Y gastric bypass is also seen in those undergoing BDP/DS.
Setting: Independent, university-affiliated teaching hospital.
Materials And Methods: Retrospective chart review of a prospectively maintained database was performed on data on 152 patients who underwent robotically assisted laparoscopic BPD/DS from 2008 to 2012. Patients were divided into 2 groups: those with a preoperative diagnosis of T2 DM (Group 1, n = 51) versus those without a preoperative diagnosis of T2 DM (Group 2, n = 101). Perioperative complications and postoperative weight loss between the 2 groups were compared.
Results: At 1 month postoperatively, Group 1 had a higher percentage of excess weight loss (%EWL) than Group 2 (20.9% vs 17.9%, P<.05). At 3, 6, 9, 12, and 18 months postoperatively, both groups had statistically comparable %EWL (P>.05). Thirty-day perioperative complications were higher in Group 1 than in Group 2 (7.8% and 3.9%, respectively). Reoperation rates were similar in both groups (1.9%). No mortality was recorded in this series.
Conclusion: Patients with diabetes had a similar weight loss outcome after BPD/DS compared with those who did not have diabetes.
Answer: Preoperative eating behaviors do not appear to influence weight loss after biliopancreatic diversion (BPD). A study evaluating the role of preoperative eating behavior in patients' selection for BPD assessed patterns such as sweet eating, snacking, hyperphagia, and gorging in 61 consecutive patients who underwent BPD. The primary outcome measure was the percentage of excess weight loss (%EWL) at 3, 6, and 12 months. The results showed no significant difference in mean preoperative BMI and weight loss at these time intervals between the group of patients with and without the considered eating patterns. Furthermore, the analysis of variance in the four groups of patients presenting the eating patterns considered also showed no difference in mean preoperative BMI, %EWL at 3, 6, and 12 months. Therefore, the study concluded that preoperative eating behaviors do not represent reliable outcome predictors for BPD and should not be used as a selection criterion for patients who are candidates for this operation (PUBMED:23546651). |
Instruction: Fibrosis in chronic hepatitis C acquired in infancy: is it only a matter of time?
Abstracts:
abstract_id: PUBMED:12650803
Fibrosis in chronic hepatitis C acquired in infancy: is it only a matter of time? Objective: The natural history of chronic hepatitis C acquired in infancy is not well understood. The progression of fibrosis was analyzed in untreated children with chronic hepatitis C virus infection and no other hepatotoxic cofactors.
Methods: A total of 112 pediatric patients (13 with paired liver biopsies) were considered. Fibrosis was assessed by METAVIR score (i.e., stage F1 to F4). The ratio between the stage of fibrosis (METAVIR units) and the presumed duration of infection represented the "estimated" rate of fibrosis progression per year. In patients with paired biopsies, the "observed" rate of fibrosis progression was defined as the difference between the stage of fibrosis in the two biopsies divided by the time interval between them.
Results: Both age of patients at biopsy and duration of infection correlated with stage of fibrosis (p < 0.002 and p < 0.0005, respectively). Stage of fibrosis differed significantly between patients with infection lasting less or more than 10 yr (p < 0.0006). Sex, hepatitis C virus genotype, and route of infection did not correlate with stage of fibrosis. Among the 13 patients with paired biopsies, stage of fibrosis increased in seven and did not change in six; the median rate of estimated fibrosis progression per year was 0.142. The difference between estimated and observed fibrosis progression rates was significant (coefficient of determination, r(2) = 0.031), which demonstrated that the prediction of the fibrosis progression was unreliable in 97% of patients.
Conclusions: Chronic hepatitis C acquired in childhood is a progressive, slow-moving, fibrotic disease. Fibrosis progression inferred on the basis of linear mathematical models should be critically evaluated in the clinical practice.
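For illustration only: a small Python sketch of the two progression rates defined in the abstract above (stage in METAVIR units divided by presumed duration of infection, and change in stage between paired biopsies divided by the interval between them). The patient values are hypothetical.

```python
# Illustrative sketch of the "estimated" and "observed" fibrosis progression rates
# defined above. METAVIR stages are coded as integers 0-4 (F0-F4); example values
# are hypothetical, not patient data from the study.

def estimated_rate(metavir_stage: float, years_infected: float) -> float:
    """Stage of fibrosis (METAVIR units) divided by presumed duration of infection."""
    return metavir_stage / years_infected

def observed_rate(stage_first: float, stage_second: float, years_between: float) -> float:
    """Change in stage between paired biopsies divided by the interval between them."""
    return (stage_second - stage_first) / years_between

# Hypothetical patient: F2 fibrosis after a presumed 14 years of infection,
# with paired biopsies (F1, then F2) taken 5 years apart.
print(round(estimated_rate(2, 14), 3))  # -> 0.143 units/year, near the study's median of 0.142
print(observed_rate(1, 2, 5))           # -> 0.2 units/year
```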
abstract_id: PUBMED:28109003
Eradication of hepatitis C virus and non-liver-related non-acquired immune deficiency syndrome-related events in human immunodeficiency virus/hepatitis C virus coinfection. We assessed non-liver-related non-acquired immunodeficiency syndrome (AIDS)-related (NLR-NAR) events and mortality in a cohort of human immunodeficiency virus (HIV)/hepatitis C virus (HCV)-coinfected patients treated with interferon (IFN) and ribavirin (RBV), between 2000 and 2008. The censoring date was May 31, 2014. Cox regression analysis was performed to assess the adjusted hazard rate (HR) of overall death in responders and nonresponders. Fine and Gray regression analysis was conducted to determine the adjusted subhazard rate (sHR) of NLR deaths and NLR-NAR events considering death as the competing risk. The NLR-NAR events analyzed included diabetes mellitus, chronic renal failure, cardiovascular events, NLR-NAR cancer, bone events, and non-AIDS-related infections. The variables for adjustment were age, sex, past AIDS, HIV transmission category, nadir CD4+ T-cell count, antiretroviral therapy, HIV RNA, liver fibrosis, HCV genotype, and exposure to specific anti-HIV drugs. Of the 1,625 patients included, 592 (36%) had a sustained viral response (SVR). After a median 5-year follow-up, SVR was found to be associated with a significant decrease in the hazard of diabetes mellitus (sHR, 0.57; 95% confidence interval [CI], 0.35-0.93; P = 0.024) and decline in the hazard of chronic renal failure close to the threshold of significance (sHR, 0.43; 95% CI, 0.17-1.09; P = 0.075).
Conclusion: Our data suggest that eradication of HCV in coinfected patients is associated not only with a reduction in the frequency of death, HIV progression, and liver-related events, but also with a reduced hazard of diabetes mellitus and possibly of chronic renal failure. These findings argue for the prescription of HCV therapy in coinfected patients regardless of fibrosis stage. (Hepatology 2017;66:344-356).
abstract_id: PUBMED:19856471
Long-term outcome of hepatitis B and hepatitis C virus co-infection and single HBV infection acquired in youth. Co-infection with HBV and HCV seems to be associated with more severe liver disease in retrospective and cross-sectional studies in adults, but no data are available when co-infection is acquired in youth. The long-term outcome of infection acquired in youth was assessed in patients co-infected with HBV and HCV and in patients with HBV infection only. Twenty-seven patients with HBV and HCV co-infection and 27 patients infected with HBV only were enrolled. Seventy-six per cent of the patients were treated with alpha-interferon for 1 year. After a median follow-up of 23 years, the annual progression rate of fibrosis was 0.07 in patients co-infected with HBV and HCV, and in those infected with HBV it was 0.07 and 0.11 (P < 0.004) for HBe and anti-HBe-positive patients, respectively. In co-infected patients, the development of cirrhosis was observed in 2 (7.4%) and of hepatocellular carcinoma (HCC) in 1 (3.7%), while in those with HBV, cirrhosis appeared in one patient (3.7%). Alcohol intake (OR = 9.5 +/- 1.2; 95% CI = 6.6-13.9; P < 0.0001) was independently associated with cirrhosis and HCC. alpha-interferon showed no efficacy during treatment, but the treated group showed higher HCV RNA clearance during post-treatment follow-up. Co-infection with HBV and HCV and single HBV infection acquired in youth showed a low rate of progression to liver fibrosis, no liver failure, and low development of HCC during a median follow-up of 23 years (range 17-40).
abstract_id: PUBMED:16082284
More severe parenchymal injury in chronic hepatitis C acquired by recent injection drug use. Objective: Histologic liver injury is reported to be less severe in persons who acquire hepatitis C through injection drug use (IDU) than by blood transfusion. Because age correlates with histologic severity, it may be that differences between routes of acquisition reflect the younger age of most drug abusers. The early histopathologic changes of hepatitis C acquired through IDU are less defined, probably because of the lack of liver biopsy material from a cohort of patients not long after initial exposure. The availability of material from a cohort of patients who had liver biopsy for IDU-related hepatitis C in the 1970s enabled us to compare the histology with that of current patients.
Methods: Liver biopsies of a group of injection drug users (n=70, all males; mean age, 27.6 years, designated as Group 1) in the 1970s cohort were compared with biopsies of patients (n=63, all males; mean age, 48 years, designated as Group 2, 23 who admitted past IDU) entering a treatment trial in 1999. All patients were positive for anti-HCV at the time of biopsy.
Results: The histologic features of the 23 patients in Group 2 with a history of IDU did not differ significantly from the other 40 patients who denied past IDU. Using a modified Histologic Activity Index (HAI), there was no difference between Group 1 and Group 2 in portal inflammation or periportal injury. However, parenchymal (lobular) injury and inflammation was significantly (P<0.0001) greater in Group 1 than Group 2. Fibrosis was significantly (P=0.014) greater in Group 2.
Conclusions: The degree of parenchymal injury was greater in Group 1 than Group 2, perhaps because they were closer to the time of exposure or possibly because of a stronger immunologic response in younger patients. The degree of hepatic fibrosis was greater in Group 2, suggesting that progression with age may be the natural history of chronic hepatitis C.
abstract_id: PUBMED:17713882
Quality of care and health economics in occupationally acquired hepatitis C in German health care workers between 1993 and 2004 Background And Objective: Data for quality of care and health economics in patients with occupationally acquired hepatitis C are lacking in Germany. The aim of this study was to analyse quality and economics of health care in occupationally acquired hepatitis C recognized by the Employees Compensation Boards between 1993 and 2000 in the area of Cologne and Bochum, Germany.
Methods: Results for 192 patients (146 women and 46 men, mean age 42 +/- 10 years) were analysed, using a standardized evaluation form. In addition to direct medical costs and diagnostic and therapeutic performance, disability days and benefit payments were also analysed. The observational period was from 01.01.1993 to 31.07.2004. Disability benefits were considered from 1983 onwards.
Results: HCV genotype 1 accounted for 79 % of infections. 112 patients (58 %) received antiviral treatment at least once. There were no differences in treatment rates between patients with prognostically favorable genotypes (2/3) and those with unfavorable HCV types (1/4) (59 % v. 60 %) or patients with low and those with advanced fibrosis (61 % v. 64 %). A sustained virological response was achieved in 53 % of treated patients. Disability days were more frequent in patients receiving antiviral treatment (214 v. 67 days). The cost of medication made up a major part of health care expenditure (mean of €13,279 per patient). In addition, total disability benefits of €6,933,789 were paid out between 1983 and 2004.
Conclusion: Occupationally acquired hepatitis C is a major health-economic burden in Germany. Quality of health care corresponded to guidelines at any one time and sustained virological response was in the range of large controlled trials. However, 69 % of the patients remain chronically infected and are at risk for disease progression and transmission.
abstract_id: PUBMED:19032460
Natural history of hepatic fibrosis progression in chronic hepatitis C virus infection in India. Background And Aim: The rate of fibrosis progression per year can predict the time for the development of cirrhosis in chronic hepatitis C (CHC). We assessed the rate of fibrosis progression and the predictors of disease severity in Indian CHC patients.
Methods: Of the 355 treatment-naïve, histologically-proven CHC patients, the precise duration of infection (from the time of exposure to HCV until liver biopsy) could be determined in 213 patients (age = 41.6 +/- 14.7 years, male : female = 139 : 74, genotype 3 = 75%). The rate of fibrosis progression per year was calculated. The correlation of the advanced degree of fibrosis and age, duration of infection, age at the onset of infection, sex, mode of infection, hepatitis C virus (HCV) genotype, histological activity index (HAI), and the presence of diabetes mellitus were studied.
Results: The median rate of fibrosis progression per year was 0.25 (0.0-1.5) fibrosis units. The fibrosis progression rate was higher in patients who acquired infection at > 30 years of age than in those < 30 years (0.33 vs 0.15; P < 0.001), and in those who acquired HCV infection through blood transfusion than through other modes of transmission (0.25 vs 0.19; P = 0.04). The median time to progress to cirrhosis was 16 years. The multivariate analysis found that the HAI score (odds ratio [OR] = 14.03; P < 0.001) and the duration of infection > 10 years (OR = 4.83; P < 0.001) correlated with severe liver disease (fibrosis ≥ 3).
Conclusion: The median rate of fibrosis progression per year in Indian CHC patients is 0.25 fibrosis units. A higher HAI and longer duration of infection are associated with a significant risk of advanced liver disease, and merit early therapeutic interventions.
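As an illustrative aside (not part of the abstract above), the "fibrosis units per year" measure used in such natural-history studies is typically the fibrosis stage observed at biopsy divided by the estimated duration of infection, with time to cirrhosis extrapolated linearly. The sketch below assumes a METAVIR-style 0-4 staging scale and linear progression, and the function names are hypothetical.

```python
def fibrosis_progression_rate(stage_at_biopsy: float, years_infected: float) -> float:
    """Fibrosis units accrued per year of infection (stage / duration)."""
    if years_infected <= 0:
        raise ValueError("duration of infection must be positive")
    return stage_at_biopsy / years_infected

def estimated_years_to_cirrhosis(rate_per_year: float, cirrhosis_stage: int = 4) -> float:
    """Linear extrapolation from onset of infection to cirrhosis (assumed stage 4)."""
    return cirrhosis_stage / rate_per_year

# A rate of 0.25 units/year extrapolates to 4 / 0.25 = 16 years to cirrhosis,
# consistent with the median figures quoted in the abstract above.
rate = fibrosis_progression_rate(stage_at_biopsy=2, years_infected=8)
print(rate, estimated_years_to_cirrhosis(rate))  # 0.25 16.0
```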
abstract_id: PUBMED:21914074
Chronic hepatitis C in children--review of natural history at a National Centre. The natural history of hepatitis C virus (HCV) infection in adults has been established, but less is known about outcome in children. We conducted a retrospective review of patients referred to Birmingham Children's Hospital Liver Unit from 1991 to 2008 with a diagnosis of HCV. Only children with documented positive HCV RNA and a minimum duration of follow-up of 6 months were included. One hundred and thirty-three children were identified. The route of transmission was transfusion acquired in 47%, vertically acquired in 49% and transplantation in 2%. Since 2000, most children were infected vertically. The overall rate of spontaneous viral clearance was 17.5% with higher clearance (27%) in the transfusion group compared to the vertically acquired group (9%). Seventy-six had a liver biopsy at diagnosis. There was no evidence of fibrosis in 46%, mild fibrosis in 50% and moderate to severe fibrosis in 4%. None had cirrhosis. There was a statistically significant relationship between fibrosis score and older age at the time of biopsy (P = 0.02) and longer duration of infection (P = 0.05). Eighty children received treatment for HCV. Sustained viral response (SVR) was influenced by viral genotypes, with significantly increased response rates in genotypes (G) 2 and 3 compared to G 1 and 4. Vertical infection is now the major route of HCV infection in children in the UK. Histological changes were mild at diagnosis, but the severity of fibrosis progressed with age. Consideration should be given to improving detection and diagnosis so that children are referred to specialist centres for management and antiviral therapy before developing fibrosis.
abstract_id: PUBMED:12645802
Biochemical markers of fibrosis in patients with chronic hepatitis C: a comparison with prothrombin time, platelet count, and age-platelet index. As an alternative to liver biopsy, an index of five biochemical markers (alpha2-macroglobulin, apolipoprotein A1, haptoglobin, total bilirubin, gamma-glutamyl-transpeptidase) has been shown to predict the severity of hepatitis C-related fibrosis. The objective of this study was to compare this index with other markers frequently used for this purpose (prothrombin time, platelets, age-platelet index). In 323 hepatitis C-infected patients, the discriminative values of these markers for F2-F4 fibrosis (by the METAVIR classification) were compared. By multiple logistic regression analysis, only the five-marker index (P < 0.0001) and prothrombin time (P = 0.02) were independently predictive of F2-F4 fibrosis. For this outcome, the area under the receiver operating characteristic curve was significantly higher for the five-marker index (0.836 +/- 0.024) than the age-platelet index (P = 0.002), and the platelet count and prothrombin time (P < 0.001), indicating greater diagnostic value. The addition of the latter markers to the five-marker index proved unhelpful for increasing its accuracy. In conclusion, an index of five biochemical markers accurately predicts significant hepatitis C-related fibrosis and is superior to traditional markers.
abstract_id: PUBMED:33090666
Characteristics associated with time-to-treatment initiation for chronic Hepatitis C with new direct acting antivirals. Background: Interferon-free direct-acting antivirals (DAAs) were introduced in 2013 and have transformed the therapeutic landscape for chronic Hepatitis C (HCV). Although treatment is recommended for almost all persons infected with HCV, clinical and psychosocial factors may affect treatment initiation.
Methods: We conducted an observational cohort study of Kaiser Permanente Mid-Atlantic States members with prevalent or incident HCV infection identified from November 2013 through May 2016 to identify predictors of DAA initiation. We used Cox regression with time-dependent covariates to compare time to treatment by clinical, demographic and societal factors.
Results: Of 2962 patients eligible for DAA therapy, 33% (n = 980) initiated treatment over the study period. The majority of patients (97%) were persistent with therapy and most (95%) tested for sustained virologic response (SVR) achieved cure. We found no effect of race, insurance type or fibrosis stage on treatment initiation. We observed that patients aged 41-60 years (aHR: 2.014, 95% CI: 1.12, 3.60) and 61-80 years (aHR: 2.08, 95% CI: 1.15-3.75) had higher treatment rates compared to younger patients. Incident cases were more likely to be treated than prevalent cases (aHR: 3.05, 95% CI: 2.40-3.89). Patients with a history of substance use disorder (SUD) were less likely (aHR: 0.805, 95% CI: 0.680, 0.953) to be treated.
Conclusions: In the first 3 years of DAA availability, one-third of patients with HCV initiated therapy, and almost all were persistent and achieved cure. While curative, DAAs remain highly priced. Triaging for non-clinical reasons or perceptions about patients will stall our ability to eradicate HCV.
abstract_id: PUBMED:12409402
Comparison of serum hepatitis C virus RNA and core antigen concentrations and determination of whether levels are associated with liver histology or affected by specimen storage time. An enzyme immunoassay has recently been developed for the hepatitis C virus (HCV) core antigen. To evaluate the possible association between core antigen and HCV RNA levels with regards to the change in liver histology over time as well as study the effect of duration of storage on viral load results, sequential sera were analyzed from 45 patients with chronic HCV infection who had undergone two or more liver biopsies. A relatively strong association was found between the core antigen and HCV RNA concentrations (Spearman's r = 0.8), with a core antigen level of 1 pg/ml corresponding to approximately 1,000 IU/ml. All 42 sera with detectable HCV RNA at the time of the second biopsy had core antigen concentrations above 1 pg/ml, and the three sera without detectable HCV RNA had concentrations below 1 pg/ml. No association was found between HCV RNA or core antigen levels and the stage of fibrosis in biopsy samples, progression of fibrosis, necro-inflammatory grade, steatosis, genotype, alanine aminotransferase level, or alcohol consumption. A significant association was demonstrated between the storage time of the samples and both the HCV RNA and core antigen concentrations. The median log HCV RNA concentrations (international units/milliliter) were 3.92 for the sera obtained at the time of the first biopsy (median storage time, 13.0 years) and 4.41 for the sera obtained at the time of the second biopsy (median storage time, 6.6 years) compared to 5.96, the median for 102 different routine clinical patient samples.
Answer: The progression of fibrosis in chronic hepatitis C acquired in infancy appears to be a slow-moving process, and the rate of fibrosis progression is not consistent across individuals. In a study of 112 pediatric patients with chronic hepatitis C virus infection, both the age of the patients at biopsy and the duration of infection correlated with the stage of fibrosis. However, the prediction of fibrosis progression based on linear mathematical models was found to be unreliable in 97% of patients, indicating that the progression of fibrosis is not solely a matter of time and may not follow a predictable linear pattern (PUBMED:12650803).
The study found that the median rate of estimated fibrosis progression per year was 0.142, but the observed rate of fibrosis progression varied among the patients with paired biopsies, with fibrosis stage increasing in some and remaining unchanged in others. This suggests that while time and duration of infection are factors in the progression of fibrosis, they are not the only determinants, and other factors may influence the rate of fibrosis progression in children with chronic hepatitis C acquired in infancy (PUBMED:12650803).
Therefore, while chronic hepatitis C acquired in childhood is progressive and fibrotic, the rate at which fibrosis progresses is not consistent and cannot be accurately predicted for individual patients based solely on the duration of infection. This highlights the need for careful clinical evaluation and monitoring of pediatric patients with chronic hepatitis C to manage the disease effectively. |
Instruction: Should cryopreserved epididymal or testicular sperm be recovered from obstructive azoospermic men for ICSI?
Abstracts:
abstract_id: PUBMED:15521877
Should cryopreserved epididymal or testicular sperm be recovered from obstructive azoospermic men for ICSI? Objective: To determine the effect of the anatomical site of sperm recovery on intracytoplasmic sperm injection (ICSI) embryo implantation, pregnancy and live birth rates in couples with isolated obstructive azoospermia as the sole cause of infertility.
Design: Controlled, single centre, retrospective clinical study.
Setting: University Hospital, Centre for Reproductive Medicine.
Sample: One hundred and fifty-one cycles of ICSI were performed, using surgically recovered sperm, between August 1996 and March 2002.
Methods: The outcome of ICSI, with surgically recovered sperm, was compared between epididymal (Group E) and testicular (Group T) derived sperm. Inclusion was limited to couples undergoing their first treatment cycle, where female age was ≤ 39 years and a minimum of five oocytes were available for injection. Women with a history of ovarian surgery, ultrasonic evidence of polycystic ovaries, uterine anomalies or hydrosalpinx were excluded.
Main Outcome Measures: Clinical pregnancy, implantation and live birth rate.
Results: Forty-two of 151 cycles met the strict inclusion criteria. Groups E and T were comparable with respect to age, basal serum FSH, ovarian response; number of oocytes injected and number of embryos available and transferred. No difference existed between Groups E and T in implantation, clinical pregnancy or live birth rate (28.8% vs 25.8%, 42.9% vs 42.9% and 39.3% vs 42.9%, respectively).
Conclusions: Cryopreserved epididymal and testicular sperm, from men with obstructive azoospermia, appear equally effective in ICSI. Epididymal recovery should remain the method of first choice for obstructive azoospermic men but further study of sperm DNA damage rates in different testicular sites is required.
abstract_id: PUBMED:36034342
Comparative Clinical Study of Percutaneous Epididymal Sperm Aspiration and Testicular Biopsy in the Outcome of ICSI-Assisted Fertility Treatment in Patients with Obstructive Azoospermia. Objective: To compare and contrast the effects of percutaneous epididymal sperm aspiration (PESA) and testicular sperm aspiration (TESA) on the outcome of intracytoplasmic sperm injection (ICSI)-assisted fertility treatment in patients with obstructive azoospermia.
Methods: Patients with obstructive azoospermia aged 20-36 years admitted to the male department of the Reproductive Center of the Second Affiliated Hospital of South China University (Hengyang Nanhua Xing Hui Reproductive Health Hospital) from December 2018 to December 2020 were enrolled in this study. One group, designated the PESA group, underwent PESA; the other group, designated the TESA group, underwent percutaneous testicular biopsy for sperm extraction. Patients in whom PESA was unsuccessful went on to undergo TESA and, if sperm were retrieved, were classified into the TESA group. General information on the male patients and their partners was collected and compared between the sperm-source groups. Embryo development (normal fertilization rate, high-quality embryo rate, and high-quality blastocyst rate) and pregnancy outcome (clinical pregnancy rate, miscarriage rate, and ectopic pregnancy rate) were compared between the two groups.
Results: Finally, there were 26 patients in the PESA group and 31 patients in the TESA group. There were no significant differences in terms of age, years of infertility, testosterone level, (FSH) follicle-stimulating hormone level, and testicular volume between the male patients in the PESA and TESA groups of two different sperm sources, and no significant differences were found in the general conditions of the female patients in terms of age, number of eggs obtained, number of sinus follicles, basal FSH value, and basal E2 value (p > 0.05). The rate of high-quality blastocysts in the TESA group was significantly higher than that in the PESA group (p < 0.05); the differences in clinical normal fertilization rate, high-quality embryo rate, clinical pregnancy rate, miscarriage rate, and ectopic pregnancy rate between the two groups were not statistically significant (p > 0.05).
Conclusion: In patients whose infertility was due to male factors alone, the source of sperm used for ICSI had no significant effect on embryo development, embryo implantation rate, clinical pregnancy rate, or miscarriage rate, and both retrieval approaches yielded favourable clinical outcomes.
abstract_id: PUBMED:32038959
Testicular versus percutaneous epididymal sperm aspiration for patients with obstructive azoospermia: a systematic review and meta-analysis. Background: Intracytoplasmic sperm injection (ICSI) is a popular treatment for male infertility due to obstructive azoospermia (OA). Testicular sperm aspiration (TESA) and percutaneous epididymal sperm aspiration (PESA) are two common sperm retrieval approaches for ICSI among men with OA. However, the comparative efficacies of TESA and PESA have been debated for more than a decade and there has been no synthesis of the available evidence. This meta-analysis compared fertility outcomes between TESA and PESA among men with OA undergoing ICSI.
Methods: We searched Embase, PubMed, ScienceDirect, and Web of Science to identify studies comparing the effectiveness of TESA and PESA for ICSI. Study quality was assessed using the Newcastle-Ottawa scale. Data were pooled using a random-effects model. Outcomes were fertilization rate, implantation rate, pregnancy rate, and miscarriage rate. Results are expressed as odds ratio (OR) with 95% confidence intervals (CIs). Study heterogeneity was evaluated by the I-square (I2) statistic.
Results: Of 2,965 references retrieved, eight studies met eligibility criteria. These studies included 2,020 men receiving 2,060 ICSI cycles. The pooled results showed no significant differences in pregnancy and miscarriage rates between TESA and PESA groups, but TESA yielded a significantly higher implantation rate than PESA (OR =1.58, P=0.02, I2=24%).
Conclusions: TESA and PESA yielded similar pregnancy and miscarriage rates for couples receiving ICSI because of OA, but each demonstrated unique advantages and disadvantages. Further studies are required to evaluate safety outcomes and efficacy for specific clinical groups.
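For readers unfamiliar with the pooling step described in the abstract above, the sketch below shows one common random-effects approach (DerSimonian-Laird) for combining study-level odds ratios and computing the I² heterogeneity statistic. The 2x2 counts are invented for illustration, and the abstract does not state which estimator the authors actually used.

```python
import math

def log_or_and_var(a, b, c, d):
    """Log odds ratio and its variance from a 2x2 table
    (a/b = events/non-events in group 1, c/d = events/non-events in group 2)."""
    return math.log((a * d) / (b * c)), 1 / a + 1 / b + 1 / c + 1 / d

def dersimonian_laird(effects, variances):
    """Random-effects pooling of log effect sizes; returns pooled log effect, its SE, and I^2 (%)."""
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, se, i2

# Hypothetical study-level 2x2 tables (implanted / not implanted, TESA vs PESA):
studies = [(40, 60, 30, 70), (55, 45, 50, 50), (25, 75, 20, 80)]
effects, variances = zip(*(log_or_and_var(*s) for s in studies))
log_or, se, i2 = dersimonian_laird(list(effects), list(variances))
or_, lo, hi = math.exp(log_or), math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se)
print(f"pooled OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f}), I2 = {i2:.0f}%")
```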
abstract_id: PUBMED:28331619
Processing and selection of surgically-retrieved sperm for ICSI: a review. Although the technique of intracytoplasmic sperm injection (ICSI) has been a revolution in the alleviation of male infertility, the use of testicular sperm for ICSI was a formerly unseen breakthrough in the treatment of the azoospermic man with primary testicular failure. At the clinical level, different procedures of testicular sperm retrieval (conventional TESE, micro-TESE, FNA/TESA, MESA, PESA) are being performed, the choice is mainly based on the cause of azoospermia (obstructive versus non-obstructive) and the surgeon's skills. At the level of the IVF laboratory, mechanical procedures to harvest the sperm from the tissue may be combined with enzymatic treatment in order to increase the sperm recovery rates. A number of techniques have been developed for viable sperm selection in males with only immotile testicular sperm available. However, large, well-designed studies on the benefit and safety of one over the other technique are lacking. Despite all the available methods and combinations of laboratory procedures which have a common goal to maximize sperm recovery from testicular samples, a large proportion of NOA patients fail to father a genetically own child. Advanced technology application may improve recovery rates by detection of the testicular foci with active spermatogenesis and/or identification of the rare individual sperm in the testicular suspensions. On the other hand, in vitro spermatogenesis or sperm production from embryonic stem cells or induced pluripotent stem cells might be future options. The present review summarizes the available strategies which aim to maximize sperm recovery from surgically retrieved samples.
abstract_id: PUBMED:32935172
Impact on using cryopreservation of testicular or epididymal sperm upon intracytoplasmic sperm injection outcome in men with obstructive azoospermia: a systematic review and meta-analysis. Purpose: To determine whether there was a significant impact on using cryopreservation of testicular or epididymal sperm upon the outcomes of intracytoplasmic sperm injection (ICSI) in patients with obstructive azoospermia (OA).
Method: Systematic review and meta-analysis of 20 retrospective studies in databases from January 1, 1995, to June 1, 2020.
Result: Twenty articles were included in this study. A total of 3602 (64.1%) of 5616 oocytes injected with fresh epididymal sperm were fertilized, compared with 2366 (61.2%) of 3862 oocytes injected with cryopreserved sperm (relative risk ratio (RR) 0.96, 95% confidence interval (CI) (0.90, 1.02), P > 0.05). A total of 303 (44.1%) of 687 ICSI cycles using fresh epididymal sperm resulted in a clinical pregnancy, compared with 150 (36.6%) of 410 ICSI cycles using cryopreserved epididymal sperm (RR 0.84, 95% CI (0.72, 0.97), P < 0.05). In the testis, a total of 2147 (68.7%) of 3125 oocytes injected with fresh sperm were fertilized, compared with 1623 (63.5%) of 2557 oocytes injected with cryopreserved sperm (RR 0.97, 95% CI (0.90, 1.06), P > 0.05). A total of 151 (47.8%) of 316 ICSI cycles using fresh testicular sperm resulted in a clinical pregnancy, compared with 113 (38.2%) of 296 ICSI cycles using cryopreserved sperm (RR 0.87, 95% CI (0.72, 1.05), P > 0.05).
Conclusions: In men with OA, the clinical pregnancy rate (CPR) was statistically lower with frozen epididymal sperm than with fresh epididymal sperm, with no difference in fertilization rate (FR). Additionally, FR and CPR were not affected by whether the retrieved testicular sperm was frozen or fresh.
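As a minimal sketch of how a relative risk and its log-normal 95% CI are derived from event counts (assuming the conventional Katz log method; the meta-analysis itself pools study-level estimates, so crude totals only approximate the reported figures):

```python
import math

def relative_risk(events_a: int, total_a: int, events_b: int, total_b: int):
    """Risk in group A relative to group B, with a log-normal 95% CI (Katz method)."""
    rr = (events_a / total_a) / (events_b / total_b)
    se_log = math.sqrt(1 / events_a - 1 / total_a + 1 / events_b - 1 / total_b)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi

# Clinical pregnancy with frozen vs fresh epididymal sperm, using the cycle counts
# quoted in the abstract (150/410 vs 303/687): roughly RR 0.83 (0.71-0.97),
# close to the pooled RR of 0.84 (0.72, 0.97) reported above.
print(relative_risk(150, 410, 303, 687))
```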
abstract_id: PUBMED:29282919
ICSI with testicular or epididymal sperm for patients with obstructive azoospermia: A systematic review. Objective: To assess the effects of testicular sperm and epididymal sperm on the outcomes of ICSI for patients with obstructive azoospermia.
Methods: We searched PubMed, MEDLINE, EMBASE, Cochrane, CNKI, VIP, CBM, and Wanfang Database up to December 2015 for published literature relevant to ICSI with testicular or epididymal sperm for obstructive azoospermia patients. According to the inclusion and exclusion criteria, two reviewers independently conducted literature screening, data extraction and quality assessment of the included trials, followed by meta-analysis with the RevMan 5.3 software.
Results: A total of 14 studies were identified, involving 1 278 patients and 1 553 ICSI cycles. ICSI with epididymal sperm exhibited a significantly higher fertilization rate than that with testicular sperm (RR = 1.08, 95% CI 1.05-1.11, P < 0.01). No statistically significant differences were observed between the epididymal and testicular sperm groups in the rates of cleavage (RR = 1.04, 95% CI 0.99-1.10, P = 0.13), good-quality embryo (RR = 1.01, 95% CI 0.93-1.09, P = 0.85), implantation (RR = 1.14, 95% CI 0.75-1.73, P = 0.55), clinical pregnancy (RR = 1.14, 95% CI 0.98-1.31, P = 0.08), and miscarriage (RR = 0.86, 95% CI 0.53-1.39, P = 0.54).
Conclusions: ICSI with epididymal sperm yields a markedly higher fertilization rate than that with testicular sperm, but has no statistically significant differences from the latter in the rates of cleavage, good-quality embryo, implantation, clinical pregnancy, and miscarriage in the treatment of obstructive azoospermia.
abstract_id: PUBMED:9695381
Percutaneous sperm retrieval for ICSI procedures in men with obstructive azoospermia: ICSI-PESA and ICSI-TESE micromanipulation: our experience. Objectives: Sperm for ICSI procedures can be retrieved in the following ways: from the ejaculate, from the epididymis by microsurgical aspiration or percutaneous puncture, and from testicular tissue by surgical excision or percutaneous biopsy. Percutaneous techniques seem to be rather simple and effective procedures.
Design: The authors present their own experience with percutaneous sperm retrieval for ICSI micromanipulation from the epididymis (ICSI-PESA) and from testicular tissue (ICSI-TESE) in men with obstructive azoospermia and with reactive impotence. ICSI-PESA was performed for the first time in Poland on April 11, 1996 at the private infertility centre "Novum" in Warsaw.
Material And Methods: From April 1996 to the end of January 1998, 10 ICSI-PESA procedures (in 9 couples) and 8 ICSI-TESE procedures (in 6 couples) were performed. In one case, ICSI-PESA was performed in a man who was psychologically unable to masturbate during his wife's IVF protocol. All procedures were performed as day-case urology under general anesthesia. A fine No 6 needle was used for PESA and a biopsy needle from the Hepafix set (B. Braun) for TESE. Antibiotics and a common analgesic drug were given routinely after puncture.
Results: Sperm were successfully obtained for micromanipulation in 100% of ICSI-PESA and 75% of ICSI-TESE procedures. The pregnancy rate with PESA was 50% and 5 healthy children were born. With TESE, only 1 woman (17%) became pregnant, but an early spontaneous miscarriage was reported. No surgical or anesthesiological complications were observed.
Conclusions: Obtaining sperm for micromanipulation ICSI using percutaneous epididymal puncture or testicular tissue needle biopsy seems to be effective and safe for patients with obstructive azoospermia or reactive impotence.
abstract_id: PUBMED:37347075
Microscopic Epididymal Sperm Aspiration (MESA) Should be Employed Over Testicular Sperm Extraction (TESE) Sperm Retrieval Surgery for Obstructive Azoospermia (OA). Introduction: Testicular sperm extraction (TESE) has been widely used as a sperm extraction surgery for azoospermia even for obstructive azoospermia (OA) because it does not require surgical skill. However, there are postoperative pain issues, and subsequent testicular atrophy and decreased testosterone levels may occur with TESE. This study examines the usefulness of microscopic epididymal sperm aspiration (MESA) for OA.
Methods: We studied 108 patients diagnosed with OA and treated with MESA at our institute between April 2004 and December 2021. The MESA was performed using a micropipette with a micropuncture technique under an operative microscope. When no sperm were present or motility was not observed, additional punctures to the epididymal tubule were performed.
Results: Motile sperm were recovered in all cases (108 cases). Of these, intracytoplasmic sperm injection (ICSI) using frozen-thawed sperm was performed in 101 cases and the normal fertilization rate was 76.2%. A total of 436 embryo transfer (ET) cycles were performed. The implantation rate per transfer cycle was 47.9%, the clinical pregnancy rate was 41.0%, and the live birth rate was 23.7%. The per-case live birth rate was 84.8%.
Conclusions: MESA-ICSI has a very good fertilization rate, clinical pregnancy rate, and delivery rate. Furthermore, the patient's postoperative pain is less, the number of sperm collected is larger, the burden on the embryologist who processes the collected sperm is less, and ICSI can be easily attempted after frozen-thawed sperm. MESA rather than TESE should be employed for the OA subjects.
abstract_id: PUBMED:28421491
Chapter 7 Human Epididymal and Testicular Sperm Cryopreservation. Since the advent of intracytoplasmic sperm injection (ICSI) in the early 1990s, surgical techniques to recover samples from the epididymis and testis directly have been used to benefit patients suffering from obstructive and nonobstructive azoospermia. Various studies have demonstrated comparable fertilization, ongoing pregnancy, and implantation rates when fresh and frozen-thawed epididymal sperm were used for ICSI [1]. Injection of fresh and frozen testicular sperm into mature oocytes resulted in similar fertilization rates in cases of obstructive azoospermia. However, in cases of nonobstructive azoospermia, the outcome depends upon the degree of impairment of spermatogenesis, criteria for sperm freezing, and patient selection [2].
abstract_id: PUBMED:9455839
Ongoing pregnancies after intracytoplasmic injection using cryopreserved testicular spermatozoa. We report two clinical pregnancies occurring after intracytoplasmic sperm injection (ICSI) using cryopreserved spermatozoa obtained from testicular biopsy, made in two different infertility situations in our clinic. The first patient showed a secretory azoospermia associated with elevated serum follicular stimulating hormone (FSH) level and spermiogenesis maturation arrest. The second patient was affected by azoospermia resulting from bilateral epididymal obstruction. Spermatozoa present in the wet preparation of testicular biopsy made on the day of scrotal exploration were cryopreserved within the testicular tissue for both men. Intracytoplasmic injections were performed at a later date, using spermatozoa prepared from frozen-thawed tissues. In each case, three embryos were obtained and transferred in utero. The transfers resulted in a twin pregnancy for the first case, and in a singleton pregnancy for the second. Living foetuses were seen in the ultrasound scan at the 7th week and both pregnancies are proceeding to date beyond 30 weeks without complications.
Answer: Based on the information provided in the abstracts, cryopreserved epididymal or testicular sperm can be effectively used for intracytoplasmic sperm injection (ICSI) in men with obstructive azoospermia. Studies have shown that there is no significant difference in implantation, clinical pregnancy, or live birth rates between the use of cryopreserved epididymal and testicular sperm for ICSI in this patient population (PUBMED:15521877). Additionally, a systematic review and meta-analysis indicated that there was no significant impact on fertilization rate or clinical pregnancy rate when using cryopreserved testicular sperm compared to fresh testicular sperm, although there was a statistically lower clinical pregnancy rate when using frozen epididymal sperm compared to fresh epididymal sperm (PUBMED:32935172).
Furthermore, a comparative clinical study found that the source of sperm (PESA or TESA) had no significant effect on embryo development, embryo implantation rate, clinical pregnancy rate, and miscarriage rate, resulting in better clinical outcomes (PUBMED:36034342). Another systematic review and meta-analysis showed that TESA and PESA yielded similar pregnancy and miscarriage rates for couples receiving ICSI because of obstructive azoospermia, but TESA yielded a significantly higher implantation rate than PESA (PUBMED:32038959).
A review also highlighted that different procedures of testicular sperm retrieval are being performed based on the cause of azoospermia and the surgeon's skills, and that the goal is to maximize sperm recovery from testicular samples (PUBMED:28331619). Moreover, a study suggested that microscopic epididymal sperm aspiration (MESA) should be employed over testicular sperm extraction (TESE) for obstructive azoospermia due to its high fertilization rate, clinical pregnancy rate, and delivery rate, as well as less postoperative pain and a larger number of sperm collected (PUBMED:37347075).
In conclusion, cryopreserved epididymal or testicular sperm should be considered for recovery and use in ICSI for men with obstructive azoospermia, as the outcomes are comparable to those achieved with fresh sperm, and the choice between epididymal and testicular sources may depend on individual case factors, surgeon's preference, and specific clinical circumstances. |
Instruction: Are rural older Icelanders less physically active than those living in urban areas?
Abstracts:
abstract_id: PUBMED:19237433
Are rural older Icelanders less physically active than those living in urban areas? A population-based study. Background: Older people in rural areas have been labelled as physically inactive on the basis of leisure-time physical activity research. However, more research is needed to understand the total physical activity pattern in older adults, considering all domains of physical activity, including leisure, work, and domestic life.
Aims: We hypothesised that: (a) total physical activity would be the same for older people in urban and rural areas; and (b) urban and rural residency, along with gender and age, would be associated with differences in domain-specific physical activities.
Methods: Cross-sectional data were collected in Icelandic rural and urban communities from June through to September 2004. Participants were randomly selected, community-dwelling, 65-88 years old, and comprised 68 rural (40% females) and 118 urban (53% females) adults. The Physical Activity Scale for the Elderly (PASE) was used to obtain a total physical activity score and subscores in leisure, during domestic life, and at work.
Results: The total PASE score was not associated with rural vs. urban residency, but males were, in total, more physically active than females, and the 65-74-year-olds were more active than the 75-88-year-olds. In the leisure domain, rural people had lower physical activity scores than urban people. Rural males were, however, most likely of all to be physically active in the work domain. In both urban and rural areas, the majority of the physical activity behaviour occurred in relation to housework, with the rural females receiving the highest scores.
Conclusions: Older Icelanders in rural areas should not be labelled as less physically active than those who live in urban areas. Urban vs. rural living may, however, influence the physical activity patterns among older people, even within a fairly socioeconomically and culturally homogeneous country such as Iceland. This reinforces the need to pay closer attention to the living environment when studying and developing strategies to promote physical activity.
abstract_id: PUBMED:22133526
Medication use among community-dwelling older Icelanders. Population-based study in urban and rural areas. Objective: To describe medication use among older community-dwelling Icelanders by collecting information on number of medicines, polypharmacy (>5 medications), and medications by ATC categories. Moreover, to explore the relationship between medication use and various influential factors emphasizing residency in urban and rural areas.
Material And Methods: Population-based, cross-sectional study. Participants were randomly selected from the National registry in one urban (n=118) and two rural (n=68) areas.
Inclusion Criteria: 1) ≥ 65 years old, 2) community-dwelling, 3) able to communicate verbally. Information on medication use was obtained from each person's medication list and interviews. A questionnaire and five standardized instruments were used to assess the potential influencing factors.
Results: On average, participants used 3.9 medications and prevalence of polypharmacy was 41%. Men used 3.5 medications on average and women 4.4 (p=0.018). Compared to rural residents, urban residents had fewer medical diagnoses, better mobility, less pain, and fewer depressive symptoms. By controlling for the effects of these variables, more medications were associated with urban living (p<0.001) and more medical diagnoses (p<0.001). Likewise, adjusted odds for polypharmacy increased with urban residency (p=0.023) and more medical diagnoses (p=0.005). Urban residency, more medical diagnoses, higher age, and male gender were related to use of drugs for blood and blood forming organs.
Conclusion: The results reveal an unexplained regional difference in medications use by older Icelanders. Further studies are required on why urban residents use at least equal amount of medications as rural residents despite better scores on health assessments.
abstract_id: PUBMED:24567416
Factors influencing Internet usage in older adults (65 years and above) living in rural and urban Sweden. Older adults living in rural and urban areas have been shown to distinguish themselves in technology adoption; a clearer profile of their Internet use is important in order to provide better technological and health-care solutions. Older adults' Internet use was investigated across large to midsize cities and rural Sweden. The sample consisted of 7181 older adults ranging from 59 to 100 years old. Internet use was investigated in relation to age, education, gender, household economy, cognition, living alone or with someone, and rural/urban living. Logistic regression was used. Those living in rural areas used the Internet less than their urban counterparts. Being younger and higher educated influenced Internet use; for older urban adults, these factors as well as living with someone and having good cognitive functioning were influential. Solutions are needed to avoid the exclusion of some older adults by a society that is today being shaped by the Internet.
abstract_id: PUBMED:37012553
Spirituality and Attitudes Toward Death Among Older Adults in Rural and Urban China: A Cross-Sectional Study. The purpose of this study was to investigate spirituality and attitudes toward death among rural and urban elderly. We asked 134 older adults from rural areas and 128 from urban areas to complete a self-administered questionnaire including the Spiritual Self-assessment Scale and Death Attitude Scale. The fear and anxiety of death, escape acceptance, natural acceptance, approach acceptance, and death avoidance scores of older adults living in rural areas were higher than those of older adults living in urban areas. The construction of social infrastructure and medical care should be strengthened in rural areas so as to improve older adults' attitudes toward death.
abstract_id: PUBMED:24894954
Nutritional status and its health-related factors among older adults in rural and urban areas. Aim: To compare health-related characteristics, nutrition-related factors and nutritional status of older adults living in rural and urban counties of Taiwan.
Background: The older adult population of Taiwan is increasing. Furthermore, older people living in rural areas have shorter life expectancy and more chronic diseases than their urban counterparts. However, little is known about the health-related characteristics, nutrition-related factors and nutritional status of older adults living in rural and urban areas of Taiwan, limiting nurses' ability to identify and care for older adults at risk of poor nutritional health.
Design: Cross-sectional, comparative.
Methods: Older adults were randomly selected from names of residents of an adjacent rural and urban area of northern Taiwan and having completing the 2009 health evaluation. From March-July 2010, older adult participants (N = 366) provided data on demographic and health-related information, nutritional self-efficacy, health locus of control and nutritional status. Data were analysed by descriptive statistics and compared using chi-square and t-test.
Results: Older rural participants had significantly lower educational level, less adequate income, higher medication use, lower scores on self-rated health status and researcher-rated health status and lower self-rated healthy eating status than their urban counterparts. Moreover, rural participants had significantly lower nutritional self-efficacy, higher chance health locus of control and poorer nutritional status than their urban counterparts.
Conclusions: Our results suggest that nurses should assess older adults living in rural areas for nutritional health and nutrition knowledge. Based on this assessment, nurses should develop easy, practical and accessible nutritional programmes for this population.
abstract_id: PUBMED:34280986
Prevalence and Associated Factors of Falls among Older Adults between Urban and Rural Areas of Shantou City, China. Background: To investigate the prevalence of falls and associated factors among older adults in urban and rural areas and to facilitate the design of fall prevention interventions.
Methods: We used cluster random sampling to investigate the sociodemographic information, living habits, medical status, falls, home environment, and balance ability among 649 older adult participants. Univariate and multivariate logistic regression were used to examine the associated factors of falls.
Results: The incidence of falls among older adults in Shantou City was 20.65%. Among them, the incidence was 27.27% in urban areas and 16.99% in rural areas. The rate of injury from falls among older adults was 14.48%, with 18.61% in urban areas and 12.20% in rural areas. Multivariate analysis showed that the associated factors of falls among older adults in Shantou City included a high school or below education level (OR = 2.387, 95% CI: 1.305-4.366); non-farming as the previous occupation (OR = 2.574, 95% CI: 1.613-4.109); incontinence (OR = 2.881, 95% CI: 1.517-5.470); lack of fall prevention education (OR = 1.856, 95% CI: 1.041-3.311); and reduced balance ability (OR = 3.917, 95% CI: 2.532-6.058).
Discussion: Older adults have a higher rate of falling in Shantou City, compared to the average rate in China. There are similarities and differences in the associated factors of falls among older adults between urban and rural areas of Shantou City. Targeted interventions for older adults in different regions may be more effective in reducing the risk of falls.
abstract_id: PUBMED:36833010
Neighborhood and Depressive Symptoms in Older Adults Living in Rural and Urban Regions in South Korea. Neighborhoods have a significant impact on depressive symptoms in older adults. In response to the increasing depression of older adults in Korea, this study aims to identify the relationship between perceived and objective neighborhood characteristics in depressive symptoms and find differences between rural and urban areas. We used a National survey collected in 2020 of 10,097 Korean older adults aged 65 and older. We also utilized Korean administration data for identifying the objective neighborhood characteristics. Multilevel modeling results indicated that depressive symptoms decreased when older adults perceived their housing condition (b = -0.04, p < 0.001), their interaction with neighbors (b = -0.02, p < 0.001), and overall neighborhood environment (b = -0.02, p < 0.001) positively. Among the objective neighborhood characteristics, only nursing homes (b = 0.09, p < 0.05) were related to depressive symptoms of older adults living in urban areas. For older adults living in rural areas, the number of social workers (b = -0.03, p < 0.001), the number of senior centers (b = -0.45, p < 0.001), and nursing home (b = -3.30, p < 0.001) in the neighborhood were negatively associated with depressive symptoms. This study found that rural and urban areas have different neighborhood characteristics related to older adults' depressive symptoms in South Korea. This study encourages policymakers to consider neighborhood characteristics to improve the mental health of older adults.
abstract_id: PUBMED:34931879
Factors Affecting Quality of Life among Older Adults with Hypertension in Urban and Rural Areas in Thailand: A Cross-Sectional Study. This study explored factors affecting quality of life in older adults with hypertension by comparing those living in urban and rural areas. A cross-sectional study was conducted on 420 older adults living in urban and rural areas in Thailand. Data were collected using the WHOQOL-OLD and Health-Promoting Lifestyle Profile-II tools, which measured quality of life and health-promoting behaviors among the participants. Older adults in urban areas had higher quality of life scores than those in rural locations. Health-promoting behaviors significantly predicted higher quality of life for all residents. A high perceived health status predicted increased quality of life in urban residents, whereas the presence of comorbidities decreased quality of life. A longer hypertension duration predicted higher quality of life in rural residents. These findings suggest that healthy behaviors and self-management interventions are critical to improve quality of life in older Thai adults with hypertension.
abstract_id: PUBMED:37322812
Do living arrangements and health behaviors associate with anxiety symptoms among Chinese older people? Differences between urban and rural areas. Living arrangements and health behaviors are considered to be associated with mental health, but their relationship has been less investigated by national survey data in China. The purpose of this study is to explore the relationship of living arrangements and health behaviors with anxiety among Chinese older people and to compare the results between urban and rural areas. The study was based on the Chinese Longitudinal Healthy Longevity Survey conducted in 2018, and a total of 12 726 old-aged respondents were included. Ordinal logistic regression was used to analyze the associations between living arrangements, health behaviors and anxiety. This study found that people living in nursing institutions are likely to be more anxious than their dwelling counterparts. Although we found no significant relationships of health behaviors such as smoking, drinking alcohol and exercise with anxiety, older adults with greater dietary diversity were less likely to experience anxiety. Furthermore, different trends between living arrangements and anxiety, smoking and anxiety were also found among urban versus rural respondents. The findings of this study help to better understand anxiety among Chinese older people and shed light on health policy on older people protection and services.
abstract_id: PUBMED:31888516
Falls and risk factors of falls for urban and rural community-dwelling older adults in China. Background: Falls among older people have become a public health concern due to serious health consequences. Despite abundant literature on falls in older people, little is known about the rural-urban differentials in falls among older people in China. This research fills the voids of prior literature by investigating falls and the associated risk factors among Chinese seniors, with a particular focus on the rural-urban differences.
Methods: Data are from the 2010 wave of Chinese Longitudinal Survey on Urban and Rural Elderly. The analysis includes 16,393 respondents aged 65 and over, with 8440 and 7953 of them living in urban and rural areas, respectively. Descriptive analyses are performed to examine incidence, locations, circumstances and consequences of falls in older people. Regression analysis is used to investigate the effects of risk factors on falls among older people in urban and rural China.
Results: The incidence of falls is higher among rural than urban older people. In both settings, older people are more likely to fall outside of the home. But common outdoor falls among rural and urban older people differ in terms of locations and circumstances. Urban older people are more likely to report falling on the road whereas their rural counterparts have experienced more falls in the yard. Falls occurring within homes or immediate home surroundings are also common; but few falls occurred in public areas. The rate of hospitalization of urban seniors after falling is higher than that of rural ones. Most risk factors of falls show more similar than different effects on rural and urban elders' risks of falling.
Conclusions: Incidence, locations, circumstances and consequences of falls vary among Chinese rural and urban older people. But most risk factors for falls show similar effects on rural and urban elders' odds of falling. Implications drawn from this research provide suggestions for the government and local agencies to develop suitable fall prevention strategies which may well be applicable to other countries.
Answer: No, rural older Icelanders should not be labeled as less physically active than those living in urban areas. A population-based study conducted in Iceland found that the total physical activity, measured by the Physical Activity Scale for the Elderly (PASE), was not associated with rural versus urban residency. While rural individuals had lower physical activity scores in the leisure domain compared to urban individuals, rural males were more likely to be physically active in the work domain. Additionally, in both urban and rural areas, the majority of physical activity behavior occurred in relation to housework, with rural females receiving the highest scores (PUBMED:19237433). |
Instruction: Does neighbourhood walkability moderate the effects of mass media communication strategies to promote regular physical activity?
Abstracts:
abstract_id: PUBMED:23334760
Does neighbourhood walkability moderate the effects of mass media communication strategies to promote regular physical activity? Background: Mass media campaigns are widely used in Australia and elsewhere to promote physical activity among adults. Neighbourhood walkability is consistently shown to be associated with walking and total activity. Campaigns may have different effects on individuals living in high and low walkable neighbourhoods.
Purpose: The purpose of this study is to compare pre- and post-campaign cognitive and behavioural impacts of the Heart Foundation's Find Thirty every day® campaign, in respondents living in high and lower walkable neighbourhoods.
Methods: Pre- and post-campaign cross-sectional survey data were linked with objectively measured neighbourhood walkability. Cognitive and behavioural impacts were assessed using logistic regression stratified by walkability.
Results: Cognitive impacts were significantly higher post-campaign and consistently higher in respondents in high compared with lower walkable neighbourhoods. Post-campaign, sufficient activity was significantly higher and transport walking significantly lower, but only in residents of lower walkable areas.
Conclusions: Cognitive impacts of mass media physical activity campaigns may be enhanced by living in a more walkable neighbourhood.
abstract_id: PUBMED:31654728
A population-based study of the associations between neighbourhood walkability and different types of physical activity in Canadian men and women. Few Canadian studies have examined whether or not associations between neighbourhood walkability and physical activity differ by sex. We estimated associations between perceived neighbourhood walkability and physical activity among Canadian men and women. This study included cross-sectional survey data from participants in 'Alberta's Tomorrow Project' (Canada; n = 14,078), a longitudinal cohort study. The survey included socio-demographic items as well as the International Physical Activity Questionnaire (IPAQ) and the abbreviated Neighbourhood Environment Walkability Scale (NEWS-A), which captured perceived neighbourhood built characteristics. We computed subscale and overall walkability scores from NEWS-A responses. Covariate-adjusted generalized linear models estimated the associations of participation (≥10 min/week) and minutes of different types of physical activity, including transportation walking (TW), leisure walking (LW), moderate-intensity physical activity (MPA), and vigorous-intensity physical activity (VPA) with walkability scores. Walkability was positively associated with participation in TW, LW, MPA and VPA and minutes of TW, LW, and VPA. Among men, a negative association was found between street connectivity and VPA participation. Additionally, crime safety was negatively associated with VPA minutes among men. Among women, pedestrian infrastructure was positively associated with LW participation and overall walkability was positively associated with VPA minutes. Notably, overall walkability was positively associated with LW participation among men and women. Different perceived neighbourhood walkability characteristics might be associated with participation and time spent in different types of physical activity among men and women living in Alberta. Interventions designed to modify perceptions of neighbourhood walkability might influence initiation or maintenance of different types of physical activity.
abstract_id: PUBMED:34225681
Associations of changes in neighbourhood walkability with changes in walking activity in older adults: a fixed effects analysis. Background: Supporting older adults to engage in physically active lifestyles requires supporting environments. Walkable environments may increase walking activity in older adults, but evidence for this subgroup is scarce, and longitudinal studies are lacking. This study therefore examined whether changes in neighbourhood walkability were associated with changes in walking activity in older adults, and whether this association differed by individual-level characteristics and by contextual conditions beyond the built environment.
Methods: Data from 668 participants (57.8-93.4 years at baseline) across three waves (2005/06, 2008/09 and 2011/12) of the Longitudinal Aging Study Amsterdam (LASA) were used. These individuals did not relocate during follow-up. Self-reported outdoor walking activity in minutes per week was assessed using the LASA Physical Activity Questionnaire. Composite exposure measures of neighbourhood walkability (range: 0 (low)-100 (high)) within 500-m Euclidean buffer zones around each participant's residential address were constructed by combining objectively measured high-resolution Geographic Information System data on population density, retail and service destination density, land use mix, street connectivity, green space density, and sidewalk density. Fixed effects linear regression analyses were applied, adjusted for relevant time-varying confounders.
Results: Changes in neighbourhood walkability were not statistically significantly associated with changes in walking activity in older adults (β for the 500-m buffer = -0.99, 95% CI = -6.17 to 4.20). The association of changes in neighbourhood walkability with changes in walking activity did not differ by any of the individual-level characteristics (i.e., age, sex, educational level, cognitive impairment, mobility disability, and season) or area-level characteristics (i.e., road traffic noise, air pollution, and socioeconomic status).
Conclusions: This study did not show evidence for an association between changes in neighbourhood walkability and changes in walking activity in older adults. If neighbourhood walkability and walking activity are causally linked, then changes in neighbourhood walkability between 2005/06 and 2011/12 might not have been substantial enough to produce meaningful changes in walking activity in older adults.
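To make the "fixed effects" idea in the abstract above concrete, here is a minimal within-person estimator sketch using invented data; the actual analysis also adjusted for time-varying confounders, which this toy example omits, and the variable names are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical long-format panel: one row per person per wave, with walking minutes
# per week and a composite walkability score for the residential buffer.
panel = pd.DataFrame({
    "person":      [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "walk_min":    [120, 150, 140, 60, 75, 70, 200, 190, 210],
    "walkability": [55, 58, 60, 40, 42, 41, 70, 69, 72],
})

# Within-person (fixed effects) estimator: demean outcome and exposure per person,
# so only change over time within the same individual identifies the coefficient.
demeaned = panel.groupby("person")[["walk_min", "walkability"]].transform(lambda s: s - s.mean())
X = demeaned[["walkability"]].to_numpy()
y = demeaned["walk_min"].to_numpy()
beta = np.linalg.lstsq(X, y, rcond=None)[0][0]
print(f"within-person association: {beta:.2f} min/week per walkability unit")
```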
abstract_id: PUBMED:27613233
Neighbourhood walkability and home neighbourhood-based physical activity: an observational study of adults with type 2 diabetes. Background: Converging international evidence suggests that diabetes incidence is lower among adults living in more walkable neighbourhoods. The association between walkability and physical activity (PA), the presumed mediator of this relationship, has not been carefully examined in adults with type 2 diabetes. We investigated the associations of walkability with total PA occurring within home neighbourhoods and overall PA, irrespective of location.
Methods: Participants (n = 97; 59.5 ± 10.5 years) were recruited through clinics in Montreal (QC, Canada) and wore a GPS-accelerometer device for 7 days. Total PA was expressed as the total Vector of the Dynamic Body Acceleration. PA location was determined using a Global Positioning System (GPS) device (SIRF IV chip). Walkability (street connectivity, land use mix, population density) was assessed using Geographical Information Systems software. The cross-sectional associations between walkability and location-based PA were estimated using robust linear regressions adjusted for age, body mass index, sex, university education, season, car access, residential self-selection, and wear-time.
Results: A one standard deviation (SD) increment in walkability was associated with 10.4 % of a SD increment in neighbourhood-based PA (95 % confidence interval (CI) 1.2, 19.7) - equivalent to 165 more steps/day (95 % CI 19, 312). Car access emerged as an important predictor of neighbourhood-based PA (not having car access: 38.6 % of a SD increment in neighbourhood-based PA, 95 % CI 17.9, 59.3). Neither walkability nor car access were conclusively associated with overall PA.
Conclusions: Higher neighbourhood walkability is associated with higher home neighbourhood-based PA but not with higher overall PA. Other factors will need to be leveraged to facilitate meaningful increases in overall PA among adults with type 2 diabetes.
abstract_id: PUBMED:31151210
Test-Retest Reliability and Walk Score® Neighbourhood Walkability Comparison of an Online Perceived Neighbourhood-Specific Adaptation of the International Physical Activity Questionnaire (IPAQ). There is a growing public health interest in the contributions of the built environment in enabling and supporting physical activity. However, few tools measuring neighbourhood-specific physical activity exist. This study assessed the reliability of an established physical activity tool (International Physical Activity Questionnaire: IPAQ) adapted to capture perceived neighbourhood-specific physical activity (N-IPAQ) administered via the internet and compared N-IPAQ outcomes to differences in neighbourhood Walk Score®. A sample of n = 261 adults completed an online questionnaire on two occasions at least seven days apart. Questionnaire items captured walking, cycling, moderate-intensity, and vigorous-intensity physical activity, undertaken inside the participant's perceived neighbourhood in the past week. Intraclass correlations, Spearman's rank correlation, and Cohen's Kappa coefficients estimated item test-retest reliability. Regression estimated the associations between self-reported perceived neighbourhood-specific physical activity and Walk Score®. With the exception of moderate physical activity duration, participation and duration for all physical activities demonstrated moderate reliability. Transportation walking participation and duration was higher (p < 0.05) in more walkable neighbourhoods. The N-IPAQ administered online found differences in neighbourhoods that vary in their walkability. Future studies investigating built environments and self-reported physical activity may consider using the online version of the N-IPAQ.
abstract_id: PUBMED:36767815
Association of Perceived Neighbourhood Walkability with Self-Reported Physical Activity and Body Mass Index in South African Adolescents. Adolescence is a life stage critical to the establishment of healthy behaviours, including physical activity (PA). Factors associated with the built environment have been shown to impact PA across the life course. We examined the sociodemographic differences in, and associations between, perceived neighbourhood walkability, PA, and body mass index (BMI) in South African adolescents. We recruited a convenience sample (n = 143; 13-18 years; 65% female) of students from three high schools (middle/high and low-income areas). Participants completed a PA questionnaire and the Neighbourhood Environment Walkability Scale (NEWS)-Africa and anthropometry measurements. Multivariable linear regression was used to examine various relationships. We found that, compared with adolescents living in middle/high income neighbourhoods, those living in low-income neighbourhoods had lower perceived walkability and PA with higher BMI percentiles. The associations between neighbourhood walkability and PA were inconsistent. In the adjusted models, land use diversity and personal safety were associated with club sports participation, street connectivity was positively associated with school sports PA, and more favourable perceived walkability was negatively associated with active transport. Overall, our findings suggest that the perceived walkability of lower income neighbourhoods is worse in comparison with higher income neighbourhoods, though the association with PA and BMI is unclear.
abstract_id: PUBMED:33219673
Neighbourhood walkability and physical activity: moderating role of a physical activity intervention in overweight and obese older adults with metabolic syndrome. Background: While urban built environments might promote active ageing, an infrequently studied question is how neighbourhood walkability modulates physical activity changes during a physical activity intervention programme in older adults. We assessed the influence of objectively assessed neighbourhood walkability on the change in physical activity during the intervention programme used in the ongoing PREvención con DIeta MEDiterránea (PREDIMED)-Plus trial.
Method: The present study involved 228 PREDIMED-Plus senior participants aged between 55 and 75, recruited in Palma de Mallorca (Spain). Overweight/obese older adults with metabolic syndrome were randomised to an intensive weight-loss lifestyle intervention or a control group. A walkability index (residential density, land use mix, intersections density) was calculated using geographic information systems (1 km sausage-network buffer). Physical activity was assessed using accelerometer and a validated questionnaire, at baseline and two follow-up visits (6-months and 1-year later). Generalised additive mixed models were fitted to estimate the association between the neighbourhood walkability index and changes in physical activity during follow-up.
Results: Higher neighbourhood walkability (1 z-score increment) was associated with accelerometer-assessed moderate-to-vigorous physical activity duration (β = 3.44; 95% CI = 0.52; 6.36 min/day). When analyses were stratified by intervention arm, the association was only observed in the intervention group (β = 6.357; 95% CI = 2.07; 10.64 min/day) (P for interaction = 0.055).
Conclusions: The results indicate that the walkability of the neighbourhood could support a physical activity intervention, helping to maintain or increase older adults' physical activity.
abstract_id: PUBMED:34279619
The relationship between job components, neighbourhood walkability and African academics' physical activity: a post-COVID-19 context. Research to date suggests that physical activity (PA) among academics is insufficient globally. Academics in many African countries were recently required to resume work while observing social distancing protocols. Physical inactivity (PI) was, therefore, expected to increase in such academics. Interestingly, walkable neighbourhoods are resources that could discourage excessive sitting and PI in this situation. This study, therefore, assessed the moderating role of neighbourhood walkability in the relationship between core job components (i.e. on-site teaching, online teaching, research and student assessment) and PA among academics. The study adopted a cross-sectional design that utilized an online survey hosted by Google Forms to gather data. Participants were volunteer full-time academics in Nigeria, Ghana, Kenya and Tanzania. A total of 1064 surveys were analysed, with a sensitivity analysis utilized to select covariates for the ultimate hierarchical linear regression model. After controlling for the ultimate covariates (e.g. gender, education and income), PA was found to be positively associated with the job component 'research work' but negatively associated with student assessment. Neighbourhood walkability increased the positive relationship of research work with PA and reduced the negative relationship of student assessment with PA. The non-significant negative relationship between 'teaching online' and PA was made positively significant by neighbourhood walkability. We conclude that research as a job component is positively associated with PA, but online teaching is negatively associated with PA among African academics in a post-COVID-19 context.
abstract_id: PUBMED:38052331
Mediation analysis of the associations between neighbourhood walkability and greenness, accelerometer-measured physical activity, and health-related fitness in urban dwelling Canadians. Objective: To estimate sex-specific associations (total, direct, and indirect effects) between objectively measured neighbourhood walkability and greenness and objectively measured physical activity and health-related fitness including cardiorespiratory and muscular fitness in Canadian adults.
Methods: Neighbourhood walkability (Canadian Active Living Environment) and greenness (Normalized Difference Vegetation Index; NDVI) data were linked to cardiorespiratory fitness (i.e., submaximal step test estimated V̇O2 max), muscular fitness (i.e., handgrip strength) and accelerometer-measured physical activity from the Canadian Health Measures Survey. Covariate-adjusted, sex-stratified path analyses were conducted to assess whether physical activity (light: LPA; moderate: MPA; vigorous: VPA) mediated the associations between neighbourhood walkability, NDVI and health-related fitness. Model sample sizes ranged from 987 to 2796 for males and 989 to 2835 for females.
Results: Among males, we found indirect effects between neighbourhood walkability and cardiorespiratory fitness via LPA (negative) and VPA (positive). We also found a total effect (negative) between neighbourhood walkability and grip strength and indirect effects between neighbourhood walkability and handgrip strength via LPA (negative) and MPA (negative). Among females, we found a total effect (positive) and direct effect (positive) between neighbourhood walkability and cardiorespiratory fitness, and an indirect effect for neighbourhood walkability and cardiorespiratory fitness via LPA. We found no significant effects related to neighbourhood greenness.
Conclusions: Residing in a neighbourhood with higher walkability may positively affect cardiorespiratory fitness but negatively affect muscular strength. The negative associations between neighbourhood walkability and LPA may offset potential positive associations between neighbourhood walkability and MPA and VPA and their subsequent influence on health-related fitness.
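A highly simplified product-of-coefficients version of the mediation logic above is sketched below; the variable names are placeholders, and the published analysis used covariate-adjusted, sex-stratified path models rather than this two-regression shortcut.
```python
# Hedged sketch of an indirect (mediated) effect via the product of coefficients.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("chms_linked.csv")      # hypothetical linked data set

# Path a: walkability -> physical activity (e.g., vigorous PA)
a = smf.ols("vpa ~ walkability + age + income", data=df).fit().params["walkability"]

# Path b: physical activity -> fitness, holding walkability constant
fit_b = smf.ols("vo2max ~ vpa + walkability + age + income", data=df).fit()
b = fit_b.params["vpa"]

indirect = a * b                          # indirect (mediated) effect
direct = fit_b.params["walkability"]      # direct effect
total = direct + indirect
print(indirect, direct, total)
```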
abstract_id: PUBMED:29221476
Lexical neutrality in environmental health research: Reflections on the term walkability. Neighbourhood environments have important implications for human health. In this piece, we reflect on the environments and health literature and argue that precise use of language is critical for acknowledging the complex and multifaceted influence that neighbourhood environments may have on physical activity and physical activity-related outcomes. Specifically, we argue that the term "neighbourhood walkability", commonly used in the neighbourhoods and health literature, constrains recognition of the breadth of influence that neighbourhood environments might have on a variety of physical activity behaviours. The term draws attention to a single type of physical activity and implies that a universal association exists when in fact the literature is quite mixed. To maintain neutrality in this area of research, we suggest that researchers adopt the term "neighbourhood physical activity environments" for collective measures of neighbourhood attributes that they wish to study in relation to physical activity behaviours or physical activity-related health outcomes.
Answer: The abstract from PUBMED:23334760 suggests that neighbourhood walkability does indeed moderate the effects of mass media communication strategies aimed at promoting regular physical activity. The study found that cognitive impacts of mass media physical activity campaigns were enhanced for individuals living in more walkable neighbourhoods. Additionally, the study observed that post-campaign physical activity levels were significantly higher in residents of lower walkable areas, indicating that the campaign may have had different effects depending on the walkability of the neighbourhood.
While this abstract provides evidence of a moderating effect of neighbourhood walkability on the impact of mass media campaigns, it is important to note that the relationship between neighbourhood walkability and physical activity is complex and may be influenced by various factors, including individual and environmental characteristics. Other studies, such as those reported in PUBMED:31654728, PUBMED:34225681, PUBMED:27613233, PUBMED:31151210, PUBMED:36767815, PUBMED:33219673, PUBMED:34279619, and PUBMED:38052331, have explored different aspects of how neighbourhood walkability is associated with physical activity levels in various populations, including older adults, adults with type 2 diabetes, and Canadian men and women. These studies contribute to the understanding of how neighbourhood walkability can influence physical activity but do not directly address the moderating role of walkability on the effectiveness of mass media campaigns.
In summary, based on the evidence from the abstract provided (PUBMED:23334760), neighbourhood walkability appears to moderate the effects of mass media communication strategies to promote regular physical activity, with campaigns potentially having different impacts in high versus low walkable neighbourhoods. |
Instruction: Survival of women with gestational trophoblastic neoplasia and liver metastases: is it improving?
Abstracts:
abstract_id: PUBMED:22696824
Survival of women with gestational trophoblastic neoplasia and liver metastases: is it improving? Objective: To determine whether survival outcomes of women with liver metastases from gestational trophoblastic neoplasia (GTN) have improved from the previous finding of 27% at 5 years.
Study Design: The Charing Cross GTN database was searched for patients with liver metastases treated between 1975 and 2007. Prognostic variables were recorded and analyzed for effect on survival.
Results: Thirty-eight (1.8%) of 2,100 GTN patients had liver metastases. One patient with placental site trophoblastic tumor was excluded. In the remaining 37 cases the overall survival was 48% at 5 years. Seven patients with very advanced disease died <4 weeks after admission, and 12 late deaths occurred, 5 due to non-GTN causes (1 stroke and 4 second cancers). After exclusion of the early deaths and censoring for the non-GTN related deaths, the cause-specific survival was 68%. No prognostic variable was significant on univariate analysis. However, patients presenting >2.8 years and <2.8 years from the antecedent pregnancy had a 32% and 75% (p = 0.08) chance of long-term survival, respectively.
Conclusion: The prognosis of patients with liver metastases from GTN has improved. Outcome may be best in those patients presenting within 2.8 years of the causative pregnancy and without very large volumes of disease.
abstract_id: PUBMED:37500013
Morbidity, mortality, and prognostic factors in gestational trophoblastic neoplasia with liver metastasis. Background: Liver metastases of gestational trophoblastic neoplasia (GTN) are rare but associated with poor prognosis. The concomitant presence of brain or intra-abdominal metastases alongside liver metastases has been described as a worsening factor, but the literature on this topic is limited.
Objective: To estimate overall mortality and hepatic-specific morbidity and mortality, and to identify prognostic factors for patients with GTN and liver metastases.
Method: The medical records of 26 GTN patients with liver metastases registered in the French Center for Trophoblastic Diseases and treated between November 1999 and December 2019 were reviewed. Overall survival was described using Kaplan-Meier estimates. Prognostic factors were identified using univariate and multivariate Cox analyses.
Results: The 5-year overall survival rate was 60.7% for all patients with liver metastasis. The survival rate was higher in patients who achieved complete remission after first-line chemotherapy than in those who did not (100% vs 20%, p = 0.001). The only factor independently associated with prognosis was the presence of 6 or more liver metastases (5-year survival, 16.7% vs. 82.4% otherwise; HR =11.1, 95%CI, 2.3-53.1; p = 0.003). None of the five patients with a single liver metastasis died.
Conclusion: GTN with liver metastasis is very rare (1.6%). The prognosis of patients seems to be improving. The results of this study are also reassuring for patients with complete remission after first-line combination chemotherapy, as well as for those with a single liver metastasis.
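The survival workflow described in the methods above (Kaplan-Meier estimates plus Cox models) can be sketched with the lifelines library as below; the toy data frame is invented for illustration and does not reproduce the study's figures.
```python
# Hedged sketch of Kaplan-Meier estimation and a Cox proportional hazards model.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.DataFrame({
    "years": [0.3, 0.8, 1.2, 2.1, 3.0, 4.0, 5.0, 5.0],   # follow-up time (toy values)
    "died":  [1,   1,   1,   1,   0,   0,   0,   0],     # 1 = death observed
    "six_or_more_mets": [1, 1, 0, 1, 0, 1, 0, 0],         # >= 6 liver metastases
})

kmf = KaplanMeierFitter()
kmf.fit(df["years"], event_observed=df["died"])
print(kmf.survival_function_)           # Kaplan-Meier estimate of overall survival

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="died")
cph.print_summary()                     # hazard ratio for having >= 6 liver metastases
```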
abstract_id: PUBMED:7590478
Gestational trophoblastic disease metastatic to the central nervous system. Objective: To evaluate characteristics of patients with central nervous system (CNS) lesions of gestational trophoblastic disease (GTD) and determine prognostic and therapeutic implications applicable to management.
Methods: We retrospectively reviewed the records of 454 patients treated at the Southeastern Regional Trophoblastic Disease Center between 1966 and 1992 with at least 2 years of follow-up, and identified 42 (9.3%) with CNS metastases. Sixteen patients presented for primary therapy and 27 patients had received significant therapy prior to presentation. Three heavily treated moribund patients died before their first cycle of chemotherapy and were excluded from analysis. Brain metastases were documented by physical exam and radionuclide imaging (before 1976), computed tomography scan (after 1976), or magnetic resonance imaging (after 1986). Patients received multiagent chemotherapy with methotrexate, actinomycin D, and chlorambucil (MAC)- or etoposide-based regimens. All patients received radiation therapy. No intrathecal chemotherapy was given. Craniotomy was employed in seven cases. Remission was defined as three weekly hCG levels below assay sensitivity (< 5 mIU/ml).
Results: Overall survival was 44%. Twelve of 16 patients (75%) who presented with CNS metastases with no prior therapy (Group A), 5 of 13 (38%) patients who had prior treatment (Group B), and none of 10 patients who developed CNS metastases during therapy (Group C) survived (P < 0.05). Two of four patients who failed in the CNS after treatment for CNS lesions were salvaged. Demographic characteristics of Groups A and B were similar. No significant differences with respect to WHO score, interval from pregnancy to onset of disease, or age among these groups were found. Group B patients had a four-fold higher incidence of liver metastases. Survival of Group A patients was not related to conventional clinical prognostic factors. Inverse (nonsignificant) correlations were found for Group B patients between survival and WHO score, hCG level, size and number of metastatic lesions, but not type of prior therapy. Survival was higher in those with prior molar pregnancies (56%) as contrasted with aborted (50%) or term (27%) gestations. Selective use of craniotomy helped alleviate intracranial pressure and resect refractory foci.
Conclusions: Chemotherapy combined with radiation therapy in GTD patients with CNS metastases yields survival rates comparable to those reported for intrathecal methotrexate regimens. Tumor burden as indicated by hCG level and size/number of metastases in previously treated patients may correlate with survival. Patients who develop CNS metastases during active therapy have a very poor outcome.
abstract_id: PUBMED:16803542
Results with EMA/CO (etoposide, methotrexate, actinomycin D, cyclophosphamide, vincristine) chemotherapy in gestational trophoblastic neoplasia. The aim of this study was to evaluate the efficacy and toxicity of EMA/CO (etoposide, methotrexate, actinomycin D, cyclophosphamide, vincristine) regimen for the treatment of high-risk gestational trophoblastic neoplasia (GTN). Thirty-three patients with high-risk GTN, scored according to the World Health Organization, received 159 EMA/CO treatment cycles between 1994 and 2004. Twenty-three patients were treated primarily with EMA/CO, and 10 patients were treated secondarily after failure of single agent or MAC (methotrexate, actinomycin D, cyclophosphamide, or chlorambucil) III chemotherapy. Adjuvant surgery and radiotherapy were used in selected patients. Survival, response, and toxicity were analyzed retrospectively. The overall survival rate was 90.9% (30/33). Survival rates were 91.3% (21/23) for primary treatment and 90% (9/10) for secondary treatment. Six (18.2%) of 33 patients had drug resistance. Four of them underwent surgery for adjuvant therapy. Three of these patients with drug resistance died. Survival and complete response to EMA/CO were influenced by liver metastasis, antecedent pregnancy, and histopathologic diagnosis of choriocarcinoma. Survival rate was also affected by blood group. The treatment was well tolerated. The most severe toxicity was grade 3-4 leukopenia that occurred in 24.3% (8/33) of patients and 6.9% (11/159) of treatment cycles. Febrile neutropenia occurred in one patient (3%). The EMA/CO regimen is highly effective for treatment of high-risk GTN. Its toxicity is well tolerated.
abstract_id: PUBMED:8988707
Gestational trophoblastic disease with liver metastases: the Charing Cross experience. Objective: To define management options for women presenting with gestational trophoblastic disease (GTD) which had already metastasised to the liver.
Design: Retrospective analysis of case records between 1958 and 1994.
Setting: A national referral centre for trophoblastic disease.
Results: The database containing 1676 treated patients was reviewed and 46 patients with hepatic metastases were identified (2.7%). The median age was 32 years (range 19-52 years). The antecedent pregnancy to the GTD was normal in 65% (30/46), and the time interval between the antecedent pregnancy and presentation was longer than one year in 50% (22/44). Lung metastases were present in 43 patients (93%) and brain deposits in 15 patients (33%). Forty-five patients (98%) were high risk by WHO criteria. The five-year overall survival was 27%. The five-year survival of the subgroup of patients having both hepatic and cerebral metastases was 10%. There was no significant survival difference between the different chemotherapy regimens used in the study period (pre-1979 CHAMOCA: methotrexate, actinomycin D, cyclophosphamide, doxorubicin, melphalan, hydroxyurea and vincristine; 1979 onwards EMA/CO-EP: etoposide, methotrexate, actinomycin D/cyclophosphamide, vincristine-etoposide and cis-platinum). Multivariate analysis revealed that a prognostic score > 12 was significant (Hazard ratio 5.4, 95% CI 0.7-41.9; P = 0.04).
Conclusions: The outcome for women presenting with hepatic metastases from GTD is poor with an even worse prognosis if cerebral metastases are also present. Alternative therapeutic measures, such as high dose therapy or new drugs, should be explored in these women.
abstract_id: PUBMED:24937957
Hepatic metastasis in gestational trophoblastic neoplasia: patient characteristics, prognostic factors, and outcomes. Objective: To identify patient characteristics, determine prognostic factors, and evaluate outcomes for women with hepatic metastases in gestational trophoblastic neoplasia (GTN).
Study Design: Seventeen GTN patients with hepatic metastases were treated at our institution between 1962 and 2010. Demographic data, disease characteristics, and survival were all analyzed retrospectively. Fisher's exact test was used to determine significance.
Results: The median age was 29 years (range, 16-48), and the antecedent pregnancy was nonmolar in 12 patients (75%) and a hydatidiform mole in 4 patients (25%). Fifteen patients (88%) had metastatic disease outside the liver, including lung (13), brain (5), and other intraabdominal organs (8). Median FIGO score was 14 (range, 12-19). Chemotherapy consisted of single-agent methotrexate or actinomycin D in 2 patients; methotrexate, actinomycin D, and cyclophosphamide (MAC) in 4 patients; and etoposide, methotrexate, actinomycin D, cyclophosphamide and vincristine (EMA-CO) in 11 patients. Complete response rate to chemotherapy was 82% for EMA-CO versus 17% for other types of chemotherapy (p = 0.035). Overall survival was 41% (7/17).
Conclusion: Survival of patients with GTN and hepatic metastases increased from 17% (1/6) to 55% (6/11) after 1986 when EMA-CO chemotherapy was introduced. Survival was significantly decreased for patients with concomitant intraabdominal or brain metastases (11% vs. 75%, p = 0.015).
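The group comparison reported above can be approximated with Fisher's exact test; the cell counts below are inferred from the reported percentages (9/11 complete responses with EMA-CO vs 1/6 with other regimens) rather than taken from the original tabulation.
```python
# Hedged sketch of a 2x2 Fisher's exact test for complete response by regimen.
from scipy.stats import fisher_exact

#                 response  no response
table = [[9, 2],   # EMA-CO (inferred counts)
         [1, 5]]   # other chemotherapy (inferred counts)
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(round(p_value, 3))   # ~0.035, in line with the reported p = 0.035
```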
abstract_id: PUBMED:9641237
High-risk gestational trophoblastic disease: analysis of clinical prognoses. Purpose Of Investigation: An attempt to better define factors leading to patient survival in the high-risk group of malignant gestational trophoblastic disease (GTD).
Methods: From January 1, 1997 to December 31, 1995, 25 cases of malignant high-risk GTD were retrospectively collected to evaluate prognostic factors by univariate and multivariate analysis.
Results: We identified the presence of liver metastases and/or brain metastases and the presence of intestinal metastases as significant by univariate analysis. However, only the presence of liver metastases or brain metastases was significant by multivariate analysis (p=0.009).
Conclusions: Although a high-risk group of GTD can be identified according to the modified World Health Organization (WHO) prognostic scoring system, liver metastases were not emphasized (only two points) in this scoring system. We suggested that these risk factors, including brain metastases and liver metastases, should be weighted more than other risk factors.
abstract_id: PUBMED:17086804
Primary treatment of metastatic high-risk gestational trophoblastic neoplasia with EMA-CO chemotherapy. Objective: To evaluate the efficacy of etoposide, methotrexate, actinomycin D, cyclophosphamide and vincristine (EMA-CO) in the primary treatment of metastatic high-risk gestational trophoblastic neoplasia.
Study Design: Thirty women with metastatic high-risk gestational trophoblastic neoplasia were treated primarily with EMA-CO between 1986 and 2005. Patients who had incomplete responses or developed resistance to EMA-CO were treated with drug combinations employing etoposide and a platinum agent with or without bleomycin or ifosfamide. Adjuvant surgery and radiotherapy were used in selected patients. Survival, clinical response and factors affecting treatment success were analyzed retrospectively.
Results: The overall survival rate was 93.3% (28 of 30). Of the 30 patients treated with EMA-CO, 20 (66.7%) had a lasting clinical response, 8 (26.7%) developed resistance but were subsequently placed in remission with platinum-based chemotherapy, and 2 (6.7%) died of widespread metastatic disease. Clinical complete response to EMA-CO was significantly influenced by human chorionic gonadotropin level (<100,000 mIU/mL, 82%, vs. > 100,000 mIU/mL, 46%), metastatic site (lung and pelvis, 75%, vs. other, 33%) and International Federation of Gynecology and Obstetrics (FIGO) risk factor score (< 7, 92% vs. >7, 50%). Surgical procedures were performed on 12 patients, and 4 patients received brain irradiation. Eight (80%) of 10 patients who received secondary platinum-based chemotherapy with or without surgery were cured. The 2 patients who died had stage IV disease (brain and/or liver metastases) with FIGO scores of 13 and 14.
Conclusion: Over 93% of 30 patients with metastatic high-risk gestational trophoblastic neoplasia treated initially with the EMA-CO protocol, often in conjunction with brain irradiation, surgical resection of sites of persistent tumor and salvage platinum-based chemotherapy, were cured.
abstract_id: PUBMED:16681775
Evolution of treatment of high-risk metastatic gestational trophoblastic tumors: Ain Shams University experience. The aim of the current study is to evaluate the different treatment modalities used in the management of high-risk metastatic gestational trophoblastic tumors (GTT) between June 1992 and December 2004 at the Gynecologic Oncology Unit, Ain Shams University. Out of 261 patients diagnosed and treated for GTT, 70 (26.8%) were high risk metastatic patients based on the National Institutes of Health clinical classification. The mean age was 29.39 +/- 9.38 years (16-55 years), with six patients (8.6%) being older than 39 years, and the mean duration of follow-up was 79.74 +/- 40.44 months (6-157 months). Forty patients (57.14%) were diagnosed after molar pregnancy, 22 (31.43%) after abortion, and 8 (11.43%) after term pregnancy. Forty-two patients (60%) were diagnosed within 4 months of the occurrence of the disease, and 28 (40%) were diagnosed after more than 4 months. Sixty-seven patients were treated using different regimens according to the protocol of treatment at that time. The MAC regimen was used initially but has been subsequently abandoned in favor of EMA-CO (etoposide, methotrexate, dactinomycin, cyclophosphamide, and vincristine [Oncovin]) regimen, which was later modified by omitting the CO arm to decrease its toxicity. If resistance developed, platinum-based therapy was given in the form of EMA-EP. Recently, our unit incorporated paclitaxel in the third-line treatment. Surgical intervention was used selectively. Fifty-seven (81.4%) patients could be cured; 43 by initial chemotherapy, with a mean of 7 +/- 0.46 courses (6-15), and 14 were salvaged by second- or third-line chemotherapy. Fourteen patients (20%) died during the study period; one was unrelated to GTT, while three died of acute respiratory distress syndrome before instituting proper therapy and two died of treatment complications. Using univariate and multivariate Cox regression analyses, the presence of brain and/or liver metastases was found to be the worst prognostic variable affecting the survival, followed by resistance to combination chemotherapy and then the type of antecedent pregnancy. The projected 5-year survival as estimated by Kaplan-Meier method was 78%.
abstract_id: PUBMED:9833617
The management of high-risk gestational trophoblastic tumours (GTT). Between 1979 and 1995 we treated 272 consecutive women with high-risk GTT (including 121 previously treated patients) with the weekly EMA/CO regimen (etoposide, methotrexate, actinomycin D alternating with cyclophosphamide and vincristine). The median follow-up is 4.5 years (range 1-16 years). The cumulative 5 year survival is 86.2% (95% confidence interval 81.9-90.5%). No deaths from GTT occurred later than 2 years after starting EMA/CO. In a multivariate analysis, adverse prognostic factors were the presence of liver metastases (p < 0.0001), interval from antecedent pregnancy > 24 months (p < 0.0001), brain metastases (p=0.0008) and term delivery of antecedent pregnancy (p=0.045). There were 11 (4%) early deaths while 213 (78%) achieved complete remission. 47 (17%) developed drug resistance to EMA/CO, of whom 33 (70%) were salvaged by further cisplatinum-based chemotherapy and surgery. 2 women developed acute myeloid leukaemia after treatment with EMA/CO. 56% of women who have been in remission for at least 2 years and had fertility-conserving surgery have achieved pregnancy since completing EMA/CO and there have been 112 live births including 3 babies with congenital abnormalities. EMA/CO is an effective, easy to administer and well tolerated regimen for treating patients with high-risk GTT. More than half of these women will retain their fertility. However, there is a small but significant increase in second malignancies.
Answer: Yes, the survival of women with gestational trophoblastic neoplasia (GTN) and liver metastases appears to be improving. A study from the Charing Cross GTN database, which included patients treated between 1975 and 2007, showed that the overall survival was 48% at 5 years, which is an improvement from the previous finding of 27% at 5 years. The cause-specific survival, after excluding early deaths and censoring for non-GTN related deaths, was 68%. It was also noted that patients presenting within 2.8 years of the causative pregnancy had a better chance of long-term survival compared to those presenting after 2.8 years (PUBMED:22696824).
Another study reviewing the medical records of 26 GTN patients with liver metastases registered in the French Center for Trophoblastic Diseases and treated between November 1999 and December 2019 found a 5-year overall survival rate of 60.7%. The survival rate was higher in patients who achieved complete remission after first-line chemotherapy than in those who did not (100% vs 20%, p = 0.001). The presence of 6 or more liver metastases was independently associated with a poorer prognosis (PUBMED:37500013).
Furthermore, the introduction of EMA-CO chemotherapy (etoposide, methotrexate, actinomycin D, cyclophosphamide, vincristine) has been associated with improved survival rates. A study evaluating the efficacy and toxicity of EMA-CO for the treatment of high-risk GTN reported an overall survival rate of 90.9% (PUBMED:16803542). Another study reported that survival of patients with GTN and hepatic metastases increased from 17% to 55% after 1986 when EMA-CO chemotherapy was introduced (PUBMED:24937957).
In summary, the prognosis for patients with GTN and liver metastases has improved over time, particularly with the use of effective chemotherapy regimens such as EMA-CO and the achievement of complete remission after first-line chemotherapy. |
Instruction: Are transthoracic echocardiographic parameters associated with atrial fibrillation recurrence or stroke?
Abstracts:
abstract_id: PUBMED:15963405
Are transthoracic echocardiographic parameters associated with atrial fibrillation recurrence or stroke? Results from the Atrial Fibrillation Follow-Up Investigation of Rhythm Management (AFFIRM) study. Objectives: The purpose of this study was to evaluate the associations of transthoracic echocardiographic parameters with recurrent atrial fibrillation (AF) and/or stroke.
Background: The Atrial Fibrillation Follow-up Investigation of Rhythm Management (AFFIRM) study, an evaluation of elderly patients with AF at risk for stroke, provided an opportunity to evaluate the implications of echocardiographic parameters in patients with AF.
Methods: Transthoracic echocardiographic measures of mitral regurgitation (MR), left atrial (LA) diameter, and left ventricular (LV) function were evaluated in the AFFIRM rate- and rhythm-control patients who had sinus rhythm resume and had these data available. Risk for recurrent AF or stroke was evaluated with respect to transthoracic echocardiographic measures.
Results: Of 2,474 patients studied, 457 had ≥2+/4+ MR, and 726 had an LA diameter >4.5 cm. The LV ejection fraction was abnormal in 543 patients. The cumulative probabilities of at least one AF recurrence/stroke were 46%/1% after 1 year and 84%/5% by the end of the trial (> 5 years), respectively. Multivariate analysis showed that randomization to the rhythm-control arm (hazard ratio [HR] = 0.64; p < 0.0001) and a qualifying episode of AF being the first known episode (HR = 0.70; p < 0.0001) were associated with decreased risk. Duration of qualifying AF episode >48 h (HR = 1.55; p < 0.0001) and LA diameter (p = 0.008) were associated with an increased risk of recurrent AF. Recurrent AF was more likely with larger LA diameters (HR = 1.21, 1.16, and 1.32 for mild, moderate, and severe enlargement, respectively). No transthoracic echocardiographic measures were associated with risk of stroke.
Conclusions: In the AFFIRM study, large transthoracic echocardiographic LA diameters were associated with recurrent AF, but no measured echocardiographic parameter was associated with stroke.
abstract_id: PUBMED:29672879
Increased left atrial size is associated with higher atrial fibrillation recurrence in patients treated with antiarrhythmic medications. Background: Atrial fibrillation (AF) is highly prevalent, and antiarrhythmic therapy is often used to help with rhythm control. Some common echocardiographic parameters may be useful in predicting AF recurrence among these patients. The purpose of this study was to evaluate the association between 3 common echocardiographic parameters (left atrial [LA] size, left ventricular ejection fraction [LVEF], and mitral regurgitation [MR]) and AF recurrence among patients treated with antiarrhythmic medications.
Hypothesis: We hypothesized that LA size, LVEF, and severity of MR are predictors of AF recurrence in this population.
Methods: A real-world cohort of AF patients who had transthoracic echocardiograms was analyzed. Data on LA size, LVEF, and MR were collected retrospectively from echocardiography reports. Patients were followed from the time of the echocardiogram until first recurrence of AF.
Results: A total of 2522 patients had echocardiography reports available for review. LA size showed the strongest prognostic relationship with AF recurrence; neither LVEF nor MR was significantly associated with AF recurrence. These results persisted after adjusting for age, sex, race, tobacco use, alcohol use, drug use, body mass index, and Charlson Comorbidity Index in a multivariable model.
Conclusions: In a cohort of patients treated with antiarrhythmic medications that had transthoracic echocardiogram data, LA size was a significant predictor of AF recurrence. The clinical utility of this finding would be strengthened by replication in a multicenter setting.
abstract_id: PUBMED:37708737
Research progress on predicting atrial fibrillation recurrence after radiofrequency ablation based on electrocardiogram-related parameters. Atrial fibrillation (AF) is the most common arrhythmia. It is associated with increased stroke risks, thromboembolism, and other complications, which are great life and economic burdens for patients. In recent years, with the maturity of percutaneous catheter radiofrequency ablation (RFA) technology, it has become a first-line therapy for AF. However, some patients still experience AF recurrence (AFR) after RFA, which can cause serious consequences. Therefore, it is critical to identify appropriate parameters that are predictive of prognosis and to be able to translate the parameters easily into the clinical setting. Here, we reviewed possible predicting indicators for AFR, focusing on all the electrocardiogram indicators, such as P wave duration, PR interval and so on. It may provide valuable information for guiding clinical works.
abstract_id: PUBMED:23114271
Risk factors for atrial fibrillation recurrence: a literature review. Atrial fibrillation is the most common arrhythmia managed in clinical practice and it is associated with an increased risk of mortality, stroke and peripheral embolism. Unfortunately, the incidence of atrial fibrillation recurrence ranges from 40 to 50%, despite the attempts of electrical cardioversion and the administration of antiarrhythmic drugs. In this review, the literature data about predictors of atrial fibrillation recurrence are highlighted, with special regard to clinical, therapeutic, biochemical, ECG and echocardiographic parameters after electrical cardioversion and ablation. Identifying predictors of success in maintaining sinus rhythm after cardioversion or ablation may allow a better selection of patients to undergo these procedures. The aim is to reduce healthcare costs and avoid exposing patients to unnecessary procedures and related complications. Recurrent atrial fibrillation depends on a combination of several parameters and each patient should be individually assessed for such a risk of recurrence.
abstract_id: PUBMED:16565591
Transthoracic echocardiographic predictors of the left atrial appendage contraction velocity in stroke patients with sinus rhythm. Systemic embolization is a potential complication in patients with thrombi situated in the left atrium and particularly, in the left atrial appendage (LAA). Reduced LAA contraction velocities, determined by the transesophageal echocardiography (TEE), are associated with increased risk of LAA spontaneous echocontrast and thrombus formation, and a history of systemic embolism. However, TEE remains a semi-invasive procedure, limiting its serial application as a screening tool. Therefore, it is desirable to obtain information regarding LAA function by transthoracic echocardiography in patients having cardioembolic stroke. The present study was designed to investigate various echocardiographic variables for patients with stroke to predict LAA dysfunction, reflected as reduced LAA contraction velocity. We studied a total of 61 patients with newly diagnosed acute embolic stroke (42 patients) and transient ischemic attack (19 patients). Computerized tomographic scanning was performed for the diagnosis of embolic stroke. Left atrial functional parameters determined by transthoracic echocardiography, such as left atrial active emptying fraction and acceleration slope of mitral inflow A wave, had significant correlations with the LAA contraction velocity (r = 0.57, p < 0.001; r = 0.54, p < 0.001, respectively). Left atrial volume index, left atrial active emptying volume and left atrial fractional shortening were also correlated with LAA contraction velocity (r = -0.44, p < 0.001; r = 0.38, p = 0.003; r = 0.37, p = 0.004, respectively). In conclusion, transthoracic echocardiography can provide valuable and reliable information about the LAA contraction velocity in stroke patients with sinus rhythm. This finding gives new insights for the appropriate strategy in the evaluation of an acute ischemic stroke.
abstract_id: PUBMED:31020990
Transthoracic echocardiography in the assessment of cardiogenic causes of ischaemic stroke. Introduction: One of the leading causes of death in Poland is stroke. Cardiogenic stroke is known to be one of the most important reasons for acute ischaemic stroke (AIS), comprising 25-30% of all AISs.
Aim Of Study: Assessment of the prevalence of different risk factors of cardiogenic causes of AIS using transthoracic echocardiography (TTE).
Material And Methods: Transthoracic echocardiograms performed in patients with AIS admitted to a single neurological ward between October 2013 and September 2017 were analysed. Patients were assigned, based on the results of their TTE and their previous medical history of atrial fibrillation (AF), to one of three groups depending on the level of the risk of occurrence of cardiogenic causes of AIS.
Ethical Permission: According to Dz.U.2001, no. 126, 1381 no ethical permission was needed.
Results: 663 patients with AIS were included in the study. Patients with high risk of cardiogenic cause of AIS: 26.7% (N = 177 patients [p]). Of these, 64.4% (114 p) were diagnosed with AF. 31.6% (56 p) with sinus rhythm during hospitalisation had a history of paroxysmal AF (PAF). In 11.9% (21 p) of the patients qualified to the high risk group, factors other than AF were found. Patients with moderate risk of cardiogenic cause of AIS: 10.1% (67 p). Patients with low risk of cardiogenic cause of AIS: 25.9% (172 p). Echocardiographic results led to a change in therapy in 1.21% of cases.
Conclusions: 1. Transthoracic echocardiography performed routinely in all AIS patients affects the treatment in a very low percentage of cases. 2. The group that could benefit the most from TTE examination includes people without established indications for chronic anticoagulant therapy, in particular patients after myocardial infarction and people with additional clinical symptoms. 3. In patients with AIS, the diagnostic sensitivity of TTE in the detection of PFO is low. Young people with a cryptogenic ischaemic stroke should undergo a transoesophageal assessment.
abstract_id: PUBMED:18364652
Echocardiographic, electrocardiographic, and clinical correlates of recurrent transient ischemic attacks: a follow-up study. Background: Transient ischemic attack (TIA) is presumed to be of cardiovascular origin. The aim of the study was to evaluate the electrocardiographic, echocardiographic, and clinical signs for predicting TIA recurrence.
Methods: A total of 100 consecutive patients presenting with a first episode of TIA without atrial fibrillation, previous stroke, and uncontrolled diabetes or hypertension were enrolled in the study. The electrocardiographic, echocardiographic, and clinical parameters were obtained in those patients. The patients received a follow-up of bimonthly visits and were grouped according to the presence (or lack) of TIA recurrence in the follow-up period.
Results: Of these patients, 23 experienced recurrent TIA and 72 did not; 5 patients dropped out. Independent risk factors evaluated for TIA recurrence were aortic diameter, left atrial diameter, P-wave dispersion, hyperlipidemia, absence of lipid lowering, and warfarin treatment.
Conclusion: Careful electrocardiographic and echocardiographic evaluation of patients with TIA may help assess the outcome of patients and guide therapeutic interventions.
abstract_id: PUBMED:16155395
Identification of good responders to rhythm control of paroxysmal and persistent atrial fibrillation by transthoracic and transesophageal echocardiography. Background: Identification of good responders to rhythm control in the management of atrial fibrillation (AF) is worthwhile in terms of increasing hemodynamic benefit and decreasing the likelihood of unstable anticoagulation even after the Atrial Fibrillation Follow-Up Investigation of Rhythm Management.
Methods: We tested the hypothesis that atrial substrate determines the risk of recurrence on rhythm control both in patients with paroxysmal AF (PAF) and in those with persistent or sustained AF (> or =1 week, SAF). There were 90 consecutive patients (mean age 63 +/- 12 years, 67 males and 23 females) with previous PAF (n = 66) or SAF (n = 24). They were maintained in sinus rhythm successfully for at least 1 month after conversion and then studied by transthoracic and transesophageal echocardiography. All of the patients were followed regularly by determination of symptoms, 12-lead ECG and intermittent Holter recording to determine recurrence of AF after echocardiographic study.
Results: After 9.1 +/- 3.8 (range 3-12) months of follow-up, 23 of the 90 (26%) patients had documented recurrence of AF (67 without recurrence). Univariate analysis of demographic characteristics, medications, ECG and echocardiographic parameters revealed that, compared with the group of patients without recurrent AF, the group of those with it included more members of the SAF group (11/27 vs. 13/67, p = 0.039), included more male subjects (22/23 vs. 45/67, p = 0.045), had a larger left atrial volume index (LAVI; 27 +/- 9 vs. 22 +/- 9 ml/m2, p = 0.024) and had lower LA appendage peak emptying velocity (LAAPEV; 42 +/- 15 vs. 55 +/- 22 cm/s, p = 0.01). Multivariate Cox proportional hazards regression analysis adjusted for age, gender and AF group revealed that patients with LAVI <30 ml/m2 and LAAPEV >46 cm/s had the least recurrence of AF (relative risk 0.18, 95% confidence interval 0.06-0.55, vs. with LAVI >30 ml/m2 or LAAPEV <46 cm/s, p = 0.002). Kaplan-Meier probability of freedom from AF recurrence was significantly better when LAVI <30 ml/m2 (log-rank p = 0.02), LAAPEV > 46 cm/s (p = 0.013) or both (p = 0.004). The superiority to predict the rate of sinus rhythm maintenance was the same in the PAF and SAF groups.
Conclusions: Good responders to rhythm control in the PAF and SAF groups share the characteristics of smaller LA volume and better LAA contractile function, emphasizing the critical role of atrial substrate remodeling in recurrence of AF.
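The Kaplan-Meier/log-rank comparison of AF-free follow-up by left atrial volume index mentioned above could be run as in the sketch below; the follow-up durations and recurrence indicators are toy values, not study data.
```python
# Hedged sketch of a log-rank comparison of AF-free survival between two groups.
from lifelines.statistics import logrank_test

# months of AF-free follow-up and recurrence indicators (1 = recurrence), toy values
small_lavi_t, small_lavi_e = [12, 9, 11, 12, 7, 12], [0, 1, 0, 0, 1, 0]
large_lavi_t, large_lavi_e = [3, 6, 12, 4, 8, 5],    [1, 1, 0, 1, 1, 1]

result = logrank_test(small_lavi_t, large_lavi_t,
                      event_observed_A=small_lavi_e, event_observed_B=large_lavi_e)
print(result.p_value)
```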
abstract_id: PUBMED:33914144
Echocardiographic diagnosis of atrial cardiomyopathy allows outcome prediction following pulmonary vein isolation. Background: Relevant atrial cardiomyopathy (ACM), defined as a left atrial (LA) low-voltage area ≥ 2 cm2 at 0.5 mV threshold on endocardial contact mapping, is associated with new-onset atrial fibrillation (AF), higher arrhythmia recurrence rates after pulmonary vein isolation (PVI), and an increased risk of stroke. The current study aimed to assess two non-invasive echocardiographic parameters, LA emptying fraction (EF) and LA longitudinal strain (LAS, during reservoir (LASr), conduit (LAScd) and contraction phase (LASct)) for the diagnosis of ACM and prediction of arrhythmia outcome after PVI.
Methods: We prospectively enrolled 60 consecutive, ablation-naive patients (age 66 ± 9 years, 80% males) with persistent AF. In 30 patients (derivation cohort), LA-EF and LAS cut-off values for the presence of relevant ACM (high-density endocardial contact mapping in sinus rhythm prior to PVI at 3000 ± 1249 sites) were established in sinus rhythm and tested in a validation cohort (n = 30). Arrhythmia recurrence within 12 months was documented using 72-h Holter electrocardiograms.
Results: An LA-EF of < 34% predicted ACM with an area under the curve (AUC) of 0.846 (sensitivity 69.2%, specificity 76.5%) similar to a LASr < 23.5% (AUC 0.878, sensitivity 92.3%, specificity 82.4%). In the validation cohort, these cut-offs established the correct diagnosis of ACM in 76% of patients (positive predictive values 87%/93% and negative predictive values 73%/75%, respectively). Arrhythmia recurrence in the entire cohort was significantly more frequent in patients with LA-EF < 34% and LASr < 23.5% (56% vs. 29% and 55% vs. 26%, both p < 0.05).
Conclusion: The echocardiographic parameters LA-EF and LAS allow accurate, non-invasive diagnosis of ACM and prediction of arrhythmia recurrence after PVI.
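Cut-off derivation of the kind used above for LA-EF and LASr (ROC curve, AUC, sensitivity and specificity at an optimal threshold) might look like the following sketch; the arrays are placeholders rather than study data, and the Youden index is only one of several ways to choose a threshold.
```python
# Hedged sketch of ROC-based cut-off selection for a "lower is abnormal" marker.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

acm = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 1])                 # relevant ACM on mapping (toy)
la_ef = np.array([28, 31, 45, 39, 25, 50, 33, 41, 38, 30])      # LA emptying fraction, % (toy)

# Lower LA-EF indicates disease, so score = -LA_EF makes higher = more abnormal.
fpr, tpr, thresholds = roc_curve(acm, -la_ef)
auc = roc_auc_score(acm, -la_ef)

youden = tpr - fpr                      # Youden's J at each candidate threshold
best = np.argmax(youden)
best_cutoff = -thresholds[best]         # back-transform: predict ACM when LA-EF <= cutoff
sensitivity, specificity = tpr[best], 1 - fpr[best]
print(auc, best_cutoff, sensitivity, specificity)
```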
abstract_id: PUBMED:29474699
Rationale and design of the EACVI AFib Echo Europe Registry for assessing relationships of echocardiographic parameters with clinical thrombo-embolic and bleeding risk profile in non-valvular atrial fibrillation. The European Society of Cardiology (ESC) guidelines for management of atrial fibrillation (AF) recommend the use of CHA2DS2VASc risk score for assessment of thromboembolic (TE) risk, whereas the stratification of bleeding risk should be obtained by HAS-Bleed to balance the most appropriate anticoagulation (OAC) therapy. However, men with CHA2DS2VASc score = 1 and women with CHA2DS2VASc = 2, who are at intermediate TE risk, represent a grey zone where guidelines do not provide a definite OAC indication. Accordingly, implementation of risk stratification with echocardiography could be extremely useful. Both prospective and cross-sectional studies on transthoracic echocardiography (TTE) prediction of TE events and studies utilizing transoesophageal echocardiographic parameters as surrogate markers of TE events makes sustainable the hypothesis that echocardiography could improve TE prediction in non-valvular AF. Moreover, considering the close association of AF and stroke, all echo-Doppler parameters that have shown to predict AF onset and recurrence could be useful also to predict TE events in this clinical setting. Accordingly, EACVI AFib Echo Europe Registry has been designed as an observational, cross-sectional study, with the aim of evaluating: (i) left atrial (LA) size and function together with left ventricular geometry, systolic and diastolic functions in paroxysmal, persistent, and permanent AF; (ii) relationships of structural/functional parameters with clinical TE and bleeding risk profile. By the AFib Echo Europe Registry, we expect to collect data on echocardiographic phenotype of patients with AF. The large data set accumulated will be useful to test the level of agreement of different echocardiographic measurements with the available risk scores.
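Because the registry rationale leans on the CHA2DS2-VASc score, a minimal tally of its standard components is sketched below; this is an illustrative toy with assumed inputs, and clinical scoring should follow the current ESC definitions rather than this function.
```python
# Hedged sketch of a CHA2DS2-VASc tally (CHF, hypertension, age, diabetes,
# stroke/TIA, vascular disease, sex category).
def cha2ds2_vasc(age: int, female: bool, chf: bool, hypertension: bool,
                 diabetes: bool, stroke_or_tia: bool, vascular_disease: bool) -> int:
    score = 0
    score += 1 if chf else 0
    score += 1 if hypertension else 0
    score += 2 if age >= 75 else (1 if age >= 65 else 0)
    score += 1 if diabetes else 0
    score += 2 if stroke_or_tia else 0
    score += 1 if vascular_disease else 0
    score += 1 if female else 0
    return score

# e.g. a 68-year-old woman with hypertension scores 3
print(cha2ds2_vasc(68, True, False, True, False, False, False))
```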
Answer: Transthoracic echocardiographic parameters have been studied to determine their association with atrial fibrillation (AF) recurrence and stroke. Several studies have found that larger left atrial (LA) diameters are associated with an increased risk of recurrent AF. For instance, in the Atrial Fibrillation Follow-Up Investigation of Rhythm Management (AFFIRM) study, it was found that larger LA diameters were associated with recurrent AF, with hazard ratios of 1.21, 1.16, and 1.32 for mild, moderate, and severe enlargement, respectively (PUBMED:15963405). Similarly, another study found that LA size was a significant predictor of AF recurrence in patients treated with antiarrhythmic medications, even after adjusting for various factors in a multivariable model (PUBMED:29672879).
However, the AFFIRM study did not find an association between the measured transthoracic echocardiographic parameters and the risk of stroke (PUBMED:15963405). This suggests that while echocardiographic parameters, particularly LA size, may be useful in predicting AF recurrence, they may not be reliable predictors of stroke risk in patients with AF.
Other studies have also explored the predictive value of echocardiographic parameters for AF recurrence. For example, one study identified that patients with smaller LA volume and better LA appendage contractile function were less likely to experience AF recurrence, emphasizing the role of atrial substrate remodeling in the recurrence of AF (PUBMED:16155395). Another study found that transthoracic echocardiography could provide valuable information about left atrial appendage contraction velocity in stroke patients with sinus rhythm, which is related to the risk of systemic embolism (PUBMED:16565591).
In summary, transthoracic echocardiographic parameters, particularly those related to LA size and function, are associated with the recurrence of AF but do not appear to be directly associated with the risk of stroke in patients with AF. |
Instruction: Metabolic and kidney disorders correlate with high atazanavir concentrations in HIV-infected patients: is it time to revise atazanavir dosages?
Abstracts:
abstract_id: PUBMED:25875091
Metabolic and kidney disorders correlate with high atazanavir concentrations in HIV-infected patients: is it time to revise atazanavir dosages? Introduction: Ritonavir-boosted atazanavir (ATV/r) is a relatively well tolerated antiretroviral drug. However, side effects including hyperbilirubinemia, dyslipidemia, nephrolithiasis and cholelithiasis have been reported in the medium and long term. Unboosted ATV may be selected for some patients because it has fewer gastrointestinal adverse effects, less hyperbilirubinemia and less impact on lipid profiles.
Methods: We investigated the distribution of ATV plasma trough concentrations according to drug dosage and the potential relationship between ATV plasma trough concentrations and drug-related adverse events in a consecutive series of 240 HIV-infected patients treated with ATV/r 300/100 mg (68%) or ATV 400 mg (32%).
Results: 43.9% of patients treated with ATV/r 300/100 mg had ATV concentrations exceeding the upper therapeutic threshold. A significant and direct association was observed between the severity of hyperbilirubinemia and ATV plasma trough concentrations (ATV concentrations: 271 [77-555], 548 [206-902], 793 [440-1164], 768 [494-1527] and 1491 [1122-1798] ng/mL in patients with grade 0, 1, 2, 3 and 4 hyperbilirubinemia, respectively). In an exploratory analysis we found that patients with dyslipidemia or nephrolithiasis had significantly higher ATV concentrations (582 [266-1148] and 1098 [631-1238] ng/mL, respectively; p<0.001) compared with patients with no ATV-related complications (218 [77-541] ng/mL).
Conclusions: A significant proportion of patients treated with the conventional dosage of ATV (300/100) had plasma concentrations exceeding the upper therapeutic threshold. These patients, who are at high risk of experiencing ATV-related complications, may benefit from TDM-driven adjustments in ATV dosage, with potential advantages in terms of costs and toxicity.
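A TDM-style screen of trough concentrations against an upper therapeutic threshold, as suggested in the conclusions above, might be organised as in the sketch below; the threshold value, file name and column names are assumptions and are not taken from the study.
```python
# Hedged sketch of flagging troughs above an assumed upper therapeutic threshold.
import pandas as pd

UPPER_THRESHOLD_NG_ML = 850        # placeholder value, not a figure from the paper

df = pd.read_csv("atv_troughs.csv")   # hypothetical columns: patient_id, ctrough_ng_ml, bilirubin_grade

df["above_threshold"] = df["ctrough_ng_ml"] > UPPER_THRESHOLD_NG_ML
print(df["above_threshold"].mean())                             # proportion of candidates for dose reduction
print(df.groupby("bilirubin_grade")["ctrough_ng_ml"].median())  # median trough by toxicity grade
```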
abstract_id: PUBMED:21819530
Randomized comparison of metabolic and renal effects of saquinavir/r or atazanavir/r plus tenofovir/emtricitabine in treatment-naïve HIV-1-infected patients. Objectives: The aim of the study was to compare the effects on lipids, body composition and renal function of once-daily ritonavir-boosted saquinavir (SQV/r) or atazanavir (ATV/r) in combination with tenofovir/emtricitabine (TDF/FTC) over 48 weeks.
Methods: An investigator-initiated, randomized, open-label, multinational trial comparing SQV/r 2000/100 mg and ATV/r 300/100 mg once daily, both in combination with TDF/FTC, in 123 treatment-naïve HIV-1-infected adults was carried out. The primary endpoint was to demonstrate noninferiority of SQV/r compared with ATV/r with respect to the change in fasting cholesterol after 24 weeks. Secondary outcome measures were changes in metabolic abnormalities, body composition, renal function, and virological and immunological efficacy over 48 weeks. Patients who had used at least one dose of trial drug were included in the analysis.
Results: Data for 118 patients were analysed (57 patients on SQV/r and 61 on ATV/r). At week 24, changes in lipids were modest, without increases in triglycerides, including a significant rise in high-density lipoprotein (HDL) cholesterol and a nonsignificant decrease in the total:HDL cholesterol ratio in both arms with no significant difference between arms. Lipid changes at week 48 were similar to the changes observed up to week 24, with no significant change in the homeostasis model assessment (HOMA) index. Adipose tissue increased regardless of the regimen, particularly in the peripheral compartment and to a lesser extent in the central abdominal compartment, with an increase in adipose tissue reaching statistical significance in the ATV/r arm. A slight decline in the estimated glomerular filtration rate (eGFR) was observed in both arms during the first 24 weeks, with no progression thereafter. The immunological and virological responses were similar over the 48 weeks.
Conclusions: Combined with TDF/FTC, both SQV/r 2000/100 mg and ATV/r 300/100 mg had comparable modest effects on lipids, had little effect on glucose metabolism, conserved adipose tissue, and similarly reduced eGFR. The virological efficacy was similar.
abstract_id: PUBMED:20116610
Pharmacology, pharmacokinetic features and interactions of atazanavir. Atazanavir (ATV) is an HIV protease inhibitor (PI) with high in vitro activity against HIV-1 that demonstrates high additive activity in the presence of other antiretrovirals and synergistic activity with other PIs. Oral absorption is greater than 68%, maximum concentration (C(max)) being reached approximately 2 to 3 h after its administration. Its absorption is dependent on gastric pH, its administration being recommended after meals. The pharmacokinetics (PK) of ATV are non-linear; that is to say, its plasma concentrations (C(p)) do not increase in proportion to the dose. ATV is approximately 86% bound to plasma proteins. Its entry into the cerebrospinal fluid, semen or genital secretions varies but is generally less than 10-20%. Its passage across the placenta, measured as the mean of the ratios between the C(p) in umbilical cord and maternal blood, is 0.13. ATV is metabolised by oxidation via cytochrome P450 enzymes, subsequently being eliminated by the bile duct in the free or glucuronide form (80%) and by the urine. ATV is a weak competitive inhibitor of CYP3A4 and a strong inhibitor of uridine diphosphate-glucuronosyltransferase 1A1, which accounts for the frequent rise in plasma bilirubin after its administration and for its pharmacological interactions.
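The reported Tmax of roughly 2-3 h can be visualised with a standard one-compartment oral-absorption (Bateman) profile; the rate constants and volume below are invented for illustration and are not published atazanavir parameters.
```python
# Hedged sketch of a one-compartment oral-absorption (Bateman) concentration curve.
import numpy as np

def concentration(t_h, dose_mg=400, f=0.68, v_l=90.0, ka=1.0, ke=0.1):
    """C(t) = F*D*ka / (V*(ka-ke)) * (exp(-ke*t) - exp(-ka*t)); parameters are assumed."""
    return f * dose_mg * ka / (v_l * (ka - ke)) * (np.exp(-ke * t_h) - np.exp(-ka * t_h))

t = np.linspace(0, 24, 241)
c = concentration(t)
print(f"Tmax ~ {t[np.argmax(c)]:.1f} h, Cmax ~ {c.max():.2f} mg/L")   # Tmax ~ 2.6 h with these toy constants
```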
abstract_id: PUBMED:19776778
The nephrotoxic effects of HAART. With significant reductions in mortality and risk of progression to AIDS in the era of highly active antiretroviral therapy (HAART), complications of long-standing HIV infection and treatment have become increasingly important. Such complications include the nephrotoxic effects of HAART, which are the subject of this Review. The most common nephrotoxic effects associated with HAART include crystal-induced obstruction secondary to use of protease inhibitors (mainly indinavir and atazanavir), and proximal tubule damage related to the nucleotide analog reverse transcriptase inhibitor tenofovir. Acute kidney injury (AKI) can occur following tenofovir-induced tubule dysfunction or as a result of severe mitochondrial dysfunction and lactic acidosis induced by nucleoside reverse transcriptase inhibitors. The potential insidious long-term renal toxicity of antiretroviral treatment is probably underappreciated in patients with HIV: a proportion of patients with treatment-related AKI did not recover their baseline renal function at 2-year follow-up, suggesting the possibility of permanent renal damage. Finally, nonspecific metabolic complications might increase the risk of vascular chronic kidney disease in patients on HAART. However, given the benefits of HAART, fear of nephrotoxic effects is never a valid reason to withhold antiretroviral therapy. Identification of patients with pre-existing chronic kidney disease, who are at increased risk of renal damage, enables appropriate dose modification, close monitoring, and avoidance or cautious use of potentially nephrotoxic medications.
abstract_id: PUBMED:27125367
A cross-sectional study to evaluate the association of hyperbilirubinaemia on markers of cardiovascular disease, neurocognitive function, bone mineral density and renal markers in HIV-1 infected subjects on protease inhibitors. Background: Ongoing inflammation in controlled HIV infection contributes to non-AIDS comorbidities. High bilirubin appears to exhibit an anti-inflammatory effect in vivo. We therefore examined whether increased bilirubin in persons with HIV was associated with differences in markers of inflammation and cardiovascular, bone, renal disease, and neurocognitive (NC) impairment.
Methods: This cross-sectional study examined inflammatory markers in individuals with stable HIV infection treated with two nucleoside reverse transcriptase inhibitors and a boosted protease inhibitor. Individuals recruited were those with a normal bilirubin (NBR; 0-17 μmol/L) or high bilirubin (>2.5 × upper limit of normal). Demographic and anthropological data were recorded. Blood and urine samples were taken for analyses. Pulse wave velocity (PWV) measurement, carotid intimal thickness (CIT), and calcaneal stiffness (CSI) were measured. Males were asked to answer a questionnaire about sexual function; NC testing was performed using CogState.
Results: 101 patients were screened, 78 enrolled (43 NBR and 35 HBR). Atazanavir use was significantly higher in HBR. Whilst a trend for lower CIT was seen in those with HBR, no significant differences were seen in PWV, bone markers, calculated cardiovascular risk (Framingham), or erectile dysfunction score. VCAM-1 levels were significantly lower in the HBR group. HBR was associated with lower LDL and triglyceride levels. NBR was associated with a calculated FRAX significantly lower than HBR although no associations were found after adjusting for tenofovir use. No difference in renal markers was observed. Component tests of NC testing revealed differences favouring HBR but overall composite scores were similar.
Discussion: High bilirubin in the context of boosted PI therapy was not associated with differences in the markers examined in this study. Some trends were noted and, on the basis of these, a larger, clinical end-point study is warranted.
abstract_id: PUBMED:28298195
Role of systemic inflammation scores for prediction of clinical outcomes in patients treated with atazanavir not boosted by ritonavir in the Italian MASTER cohort. Background: Atazanavir (ATV) not boosted by ritonavir (uATV) has been frequently used in the past for switching combination antiretroviral therapy (cART). However, the clinical outcomes and predictors of such a strategy are unknown.
Methods: An observational study was carried out within the Italian MASTER cohort, selecting HIV-infected patients on cART who switched to a uATV-containing regimen. Baseline was set as the last visit before uATV initiation. In the primary analysis, a composite clinical end-point was defined as the first occurrence of any of the following: liver, cardiovascular, kidney or diabetes events, non-AIDS-related cancer, or death. The incidence of AIDS events and the incidence of the composite clinical end-point were estimated. Kaplan-Meier and multivariable Cox regression analyses were used to assess predictors of the composite clinical end-point.
Results: 436 patients were observed. The majority were male (61.5%) and Italian (85.3%); mean age was 42.7 years (IQR: 37.7-42). The most frequent route of transmission was heterosexual intercourse (47%), followed by injection drug use (25%) and homosexual contact (24%); the rate of HCV-Ab positivity was 16.3%. Patients were observed for a median of 882 days (IQR: 252-1,769) under uATV. We recorded 93 clinical events (3 cardiovascular events, 20 kidney diseases, 33 liver diseases, 9 non-AIDS-related cancers, 21 diabetes, 7 AIDS events) and 19 deaths, corresponding to an incidence of 3.7 (composite) events per 100 PYFU. On multivariable analysis, factors associated with the composite clinical end-point were intravenous drug use as the risk factor for HIV acquisition vs. heterosexual intercourse [HR: 2.608, 95% CI 1.31-5.19, p = 0.0063], HIV RNA (per log10 copies/mL higher) [HR: 1.612, 95% CI 1.278-2.034, p < 0.0001], number of switches in the nucleoside/nucleotide (NRTI) backbone of cART, whether made to compose the uATV regimen under study or in the past (per additional switch) [HR: 1.085, 95% CI 1.025-1.15, p = 0.0051], Fib-4 score (per unit higher) [HR: 1.03, 95% CI 1.018-1.043, p < 0.0001] and neutrophil/lymphocyte ratio (NLR inflammation score, per log10 higher) [HR: 1.319, 95% CI 1.047-1.662, p = 0.0188].
Conclusions: Intravenous drug users with high HIV RNA, high Fib-4 scores and heavier prior exposure to antiretroviral drugs appeared to be at greater risk of clinical events. Interestingly, high levels of inflammation, measured by the NLR, were also associated with clinical events. These patients should therefore be monitored more closely.
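For readers unfamiliar with the two laboratory-derived scores used as predictors in this abstract, the sketch below shows how the Fib-4 index and the log10 neutrophil/lymphocyte ratio (NLR) are conventionally computed. This is an illustrative reconstruction from the standard published formulas, not code from the MASTER analysis, and all input values are hypothetical.

# Illustrative sketch (not from the MASTER study code): how the two risk scores named in
# this abstract -- the Fib-4 liver fibrosis score and the neutrophil/lymphocyte ratio
# (NLR) -- are conventionally computed. Input values and names are hypothetical.
import math

def fib4(age_years: float, ast_u_l: float, alt_u_l: float, platelets_10e9_l: float) -> float:
    """Standard FIB-4 index: (age x AST) / (platelets x sqrt(ALT))."""
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

def log10_nlr(neutrophils: float, lymphocytes: float) -> float:
    """Neutrophil/lymphocyte ratio on the log10 scale, as modelled in the abstract."""
    return math.log10(neutrophils / lymphocytes)

# Example patient (hypothetical values)
print(round(fib4(45, 40, 35, 180), 2))   # ~1.69
print(round(log10_nlr(4.2, 1.4), 2))     # ~0.48
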
abstract_id: PUBMED:30218298
Risk factors for kidney disease among HIV-1 positive persons in the methadone program. Background: Kidney injury is a serious comorbidity among HIV-infected patients. Intravenous drug use is listed as one of the risk factors for impaired renal function; however, this group is rarely assessed for specific renal-related risks.
Methods: Patients attending the methadone program from 1994 to 2015 were included in the study. Data collected included demographics, laboratory tests, antiretroviral treatment history, methadone dosing and drug abstinence. Patients' drug abstinence was checked monthly, at staff request. We evaluated two study outcomes: (1) having at least one, or (2) having at least three, eGFR values < 60 mL/min (MDRD formula).
Results: In total, 267 persons with 2593 person-years of follow-up were included in the analyses. At the time of analysis, 251 (94%) were on antiretroviral therapy (ARV). Fifty-two (19.5%) patients had at least one eGFR value < 60 and 20 (7.5%) had at least three. In univariate analysis, factors significantly increasing the odds of impaired renal function were female gender, detectable HIV RNA on ART, age at registration (per 5 years older), atazanavir use and time on antiretroviral treatment (per 1 year longer). In the multivariate model, only female gender (OR 4.7; p = 0.002), time on cART (OR 1.11; p = 0.01) and baseline eGFR (OR 0.71; p = 0.001) were statistically significant.
Conclusions: We have demonstrated a high rate of kidney function impairment among HIV-1 positive patients in the methadone program. All risk factors for decreased eGFR in this subpopulation of patients were similar to those described for general HIV population with very high prevalence in women. These findings imply the need for more frequent kidney function monitoring in this subgroup of patients.
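The renal outcome in this study hinges on the MDRD estimate of glomerular filtration rate. The sketch below shows the commonly cited four-variable (IDMS-traceable) MDRD Study equation; it is an assumption that this is the exact variant the authors used, and the example values are hypothetical.

# Minimal sketch of the 4-variable MDRD Study equation used to flag eGFR < 60 mL/min/1.73 m^2.
# This is the commonly cited IDMS-traceable form; the study itself is not the source of this
# code, and the inputs below are hypothetical.
def egfr_mdrd(scr_mg_dl: float, age_years: float, female: bool, black: bool) -> float:
    egfr = 175.0 * (scr_mg_dl ** -1.154) * (age_years ** -0.203)
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

print(round(egfr_mdrd(1.4, 52, female=True, black=False), 1))  # roughly 39.5, i.e. below the 60 threshold
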
abstract_id: PUBMED:24827777
Advances in the pathogenesis of HIV-associated kidney diseases. Despite improved outcomes among persons living with HIV who are treated with antiretroviral therapy, they remain at increased risk for acute and chronic kidney diseases. Moreover, since HIV can infect renal epithelial cells, the kidney might serve as a viral reservoir that would need to be eradicated when attempting to achieve full virologic cure. In recent years, much progress has been made in elucidating the mechanism by which HIV infects renal epithelial cells and the viral and host factors that promote development of kidney disease. Polymorphisms in APOL1 confer markedly increased risk of HIV-associated nephropathy; however, the mechanism by which ApoL1 variants may promote kidney disease remains unclear. HIV-positive persons are at increased risk of acute kidney injury, which may be a result of a high burden of subclinical kidney disease and/or viral factors and frequent exposure to nephrotoxins. Despite the beneficial effect of antiretroviral therapy in preventing and treating HIVAN, and possibly other forms of kidney disease in persons living with HIV, some of these medications, including tenofovir, indinavir, and atazanavir can induce acute and/or chronic kidney injury via mitochondrial toxicity or intratubular crystallization. Further research is needed to better understand factors that contribute to acute and chronic kidney injury in HIV-positive patients and to develop more effective strategies to prevent and treat kidney disease in this vulnerable population.
abstract_id: PUBMED:16360231
Antiviral hepatitis and antiretroviral drug interactions. More and more HIV-infected patients are treated for viral hepatitis, increasing the potential for drug interactions. HEPATITIS C: The concomitant use of didanosine and ribavirin increases the risk of mitochondrial toxicity, responsible for pancreatitis and/or lactic acidosis. Lactic acidosis is characterized by a high mortality rate. Thus, didanosine, but also stavudine, should not be co-administered with ribavirin. Cases of hepatic decompensation have been reported in cirrhotic patients concomitantly receiving ribavirin and didanosine; this co-administration should therefore be contraindicated in patients with advanced liver fibrosis. Anemia is a frequent side effect of ribavirin. In patients with zidovudine-related anemia, this drug should be discontinued before prescribing ribavirin. Erythropoietin may help to improve the haemoglobin level. HEPATITIS B: Adefovir significantly decreases the plasma levels of saquinavir. Pancreatitis may occur with the co-administration of didanosine and tenofovir, so this combination should be avoided. Atazanavir concentrations are decreased when tenofovir is co-administered; atazanavir should therefore be boosted with ritonavir when combined with tenofovir. Atazanavir increases the concentrations of tenofovir, with the potential risk of increasing tenofovir-related adverse events, including renal disorders. The tenofovir area under the curve is increased if lopinavir-ritonavir is co-administered. The main interactions carrying a fatal risk are observed with didanosine when it is co-administered with ribavirin (hepatitis C) or with tenofovir (hepatitis B). Anemia is frequent, but usually moderate, when zidovudine is co-administered with ribavirin. Other interactions are usually easy to manage.
abstract_id: PUBMED:18680693
Toxicogenetics of antiretroviral treatment (II): neurotoxicity, hepatotoxicity, lactic acidosis, kidney damage, and other adverse effects of antiretroviral drugs Several pharmacogenetics studies have analyzed the influence of specific genetic polymorphisms on the toxicity of antiretroviral treatment. The present review describes some of the adverse effects of antiretroviral drugs in which a genetic predisposition may be involved: efavirenz-induced neurological toxicity, generally associated with the 516G>T polymorphism of liver enzyme cytochrome P450 2B6 (CYP2B6); hypersensitivity reactions to nevirapine, associated with specific alleles of major histocompatibility complex, mainly the HLA-DRB1*0101 allele, which, in combination with a high CD4 lymphocyte count, has been associated with systemic reactions and hepatitis in Caucasians, and the HLA-Cw8 allele, which is associated with hypersensitivity reactions in persons from the Italian island of Sardinia and from Japan; nevirapine-induced hepatotoxicity associated with the C>T polymorphism in position 3435T of the ABCB1 (MDR-1) gene codifying for glycoprotein P (lower risk); hyperbilirubinemia in patients exposed to atazanavir or indinavir carrying the UGT1A1*28 polymorphism; peripheral neuropathy with nucleoside analogues associated with haplogroup T of the mitochondrial genome (higher risk) and with the HFE C282Y genotype of the hemochromatosis gene (lower risk); the mutation in codon 964 (R964C) of the POLG gene that codifies the mitochondrial polymerase DNA gamma described in a Thai patient with lactic acidosis; the ABCC2 gene haplotypes associated with tenofovir-induced proximal tubulopathy, and the risk of pancreatitis in persons with mutations in the CFTR and SPINK-1 genes.
Answer: The evidence from the provided abstracts suggests that there is a correlation between high atazanavir (ATV) concentrations and the occurrence of metabolic and kidney disorders in HIV-infected patients, which raises the question of whether it is time to revise ATV dosages.
In a study investigating ATV plasma trough concentrations in relation to drug dosage and drug-related adverse events, it was found that a significant proportion of patients treated with the conventional dosage of ATV/r 300/100 mg had plasma concentrations exceeding the upper therapeutic threshold. This was associated with a higher severity of hyperbilirubinemia and significantly higher ATV concentrations in patients with dyslipidemia or nephrolithiasis compared to those without complications (PUBMED:25875091). This suggests that patients at high risk of ATV-related complications may benefit from therapeutic drug monitoring (TDM)-driven adjustments in ATV dosage to potentially reduce costs and toxicity.
Another study comparing the metabolic and renal effects of ATV/r with saquinavir/r in combination with tenofovir/emtricitabine found that both regimens had modest effects on lipids and little effect on glucose metabolism, while a slight decline in estimated glomerular filtration rate (eGFR) was observed in both arms during the first 24 weeks without progression thereafter (PUBMED:21819530). This indicates that while ATV/r has some impact on renal function, the effects may not be severe or progressive over the short term.
The pharmacokinetics of ATV are non-linear; it is metabolized by cytochrome P450 enzymes and strongly inhibits uridine diphosphate-glucuronosyltransferase 1A1, which accounts for the frequent rise in plasma bilirubin and for its pharmacological interactions (PUBMED:20116610). Additionally, HAART, including ATV, has been associated with nephrotoxic effects such as crystal-induced obstruction and proximal tubule damage (PUBMED:19776778). These findings further support the need to consider dosage adjustments to minimize the risk of renal toxicity.
In summary, the evidence suggests that high ATV concentrations are associated with an increased risk of metabolic and kidney disorders, and there is a potential benefit in revising ATV dosages based on individual patient risk factors and TDM to optimize therapeutic outcomes and minimize adverse effects. |
Instruction: Total aortic arch replacement using hypothermic circulatory arrest with antegrade selective cerebral perfusion: are there cerebral deficits other than frank stroke?
Abstracts:
abstract_id: PUBMED:22566263
Total aortic arch replacement using hypothermic circulatory arrest with antegrade selective cerebral perfusion: are there cerebral deficits other than frank stroke? Background: It is controversial whether cerebral deficits other than frank stroke develop after total aortic arch replacement using hypothermic circulatory arrest (HCA) with antegrade selective cerebral perfusion (SCP).
Objectives: We investigated neuropsychological functions in patients who received total aortic arch replacement using deep HCA with SCP.
Methods: Eleven patients who underwent elective total arch replacement using deep HCA with antegrade SCP were included. Cognitive functions of the patients were evaluated at baseline, and 3 weeks and 6 months after the aortic arch surgery.
Results: The performance of cognitive tests did not change 3 weeks after surgery, except for the attention/calculation task of the Mini-Mental State Examination (MMSE). Six months after surgery, the decline in score for the attention/calculation task in the MMSE had reversed and the score for this task as well as for all other tests had returned to baseline levels.
Conclusion: Long-lasting cognitive deficits other than frank stroke may not develop after total arch replacement surgery using deep HCA with SCP.
abstract_id: PUBMED:34317953
A new cannula for antegrade selective cerebral perfusion. Objective: Our aim was to perform antegrade selective cerebral perfusion with a different surgical technique using a new type of cannula.
Methods: This cannula has been designed to be introduced directly into the supra-aortic vessels using a standard guidewire (Seldinger) technique. The cannula can also be inserted from the ostium of a vessel if preferred. Furthermore, the device can be introduced before the institution of hypothermic circulatory arrest and before the aortic arch is opened.
Results: We have performed operations on 5 patients so far using this cannula. No stroke or spinal cord injuries were detected. To date, no significant stenosis of the cannulation sites has been noted, either intraoperatively or on computed tomography follow-up. At 2 years of follow-up, all patients were alive and free from new major neurological events.
Conclusions: Transarterial introduction of our cannula (AV Flow; MedEurope Srl, Bologna, Italy) using the Seldinger technique represents an alternative to the current well-established techniques. The major advantages we describe are complete cerebral protection throughout hypothermic circulatory arrest and easier arch vessel reimplantation or hemiarch operations.
abstract_id: PUBMED:30902473
Cerebral protection strategies in aortic arch surgery: A network meta-analysis. Objective: Cerebral protection for aortic arch surgery has been widely studied, but comparisons of all the available strategies have rarely been performed. We performed direct and indirect comparisons of antegrade cerebral perfusion, retrograde cerebral perfusion, and deep hypothermic circulatory arrest in a network meta-analysis.
Methods: After a systematic literature search, studies comparing any combination of antegrade cerebral perfusion, retrograde cerebral perfusion, and deep hypothermic circulatory arrest were included, and a frequentist network meta-analysis was performed using the generic inverse variance method. The primary outcomes were postoperative stroke and operative mortality. Secondary outcomes were postoperative transient neurologic deficits, myocardial infarction, respiratory complications, and renal failure.
Results: A total of 68 studies were included with a total of 26,968 patients. Compared with deep hypothermic circulatory arrest, both antegrade cerebral perfusion and retrograde cerebral perfusion were associated with significantly lower postoperative stroke and operative mortality rates: antegrade cerebral perfusion (odds ratio [OR], 0.62; 95% confidence interval [CI], 0.51-0.75; and OR, 0.63, 95% CI, 0.51-0.76, respectively) and retrograde cerebral perfusion (OR, 0.66; 95% CI, 0.54-0.82; and OR, 0.57; 95% CI, 0.45-0.71, respectively). Antegrade cerebral perfusion and retrograde cerebral perfusion were associated with similar incidence of primary outcomes. No difference among the 3 techniques was found in secondary outcomes. At meta-regression, circulatory arrest duration correlated with the neuroprotective effect of antegrade cerebral perfusion and retrograde cerebral perfusion compared with deep hypothermic circulatory arrest. Unilateral or bilateral antegrade cerebral perfusion and arrest temperature did not influence the results.
Conclusions: Antegrade cerebral perfusion and retrograde cerebral perfusion are associated with better postoperative outcomes compared with deep hypothermic circulatory arrest, and the relative benefit increases with the duration of the circulatory arrest. No differences between antegrade cerebral perfusion and retrograde cerebral perfusion were found for all the explored outcomes.
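As a rough illustration of the "generic inverse variance method" named in the methods of this abstract, the sketch below pools hypothetical study-level log odds ratios with fixed-effect inverse-variance weights. It shows only the basic pairwise pooling step, not the full frequentist network model used in the paper, and the example estimates are made up.

# Sketch of fixed-effect generic inverse-variance pooling of study log odds ratios --
# the basic building block behind the method named in this abstract. It does not
# reproduce the full network meta-analysis; the example estimates are hypothetical.
import math

def pool_inverse_variance(log_ors, ses):
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * lor for w, lor in zip(weights, log_ors)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    ci = (math.exp(pooled - 1.96 * pooled_se), math.exp(pooled + 1.96 * pooled_se))
    return math.exp(pooled), ci

# Three hypothetical ACP-vs-DHCA stroke comparisons: log(OR) and its standard error
or_pooled, ci = pool_inverse_variance([-0.51, -0.36, -0.62], [0.20, 0.25, 0.30])
print(round(or_pooled, 2), tuple(round(x, 2) for x in ci))
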
abstract_id: PUBMED:32360886
Total Arch Replacement with Hypothermic Circulatory Arrest, Antegrade Cerebral Perfusion and the Y-graft. This study examines postoperative morbidity and mortality and long-term survival after total arch replacement (TAR) using deep to moderate hypothermic circulatory arrest (HCA), antegrade cerebral perfusion (ACP), and the Y-graft. Seventy-five patients underwent TAR with the Y-graft. Deep to moderate HCA was initiated at 18-22°C. ACP was either initiated immediately (early ACP) or after the distal anastomosis was performed (late ACP). The arch vessels were then serially anastomosed to the individual limbs of the Y-graft. The median age was 66 years (range = 32-82). The etiology of aneurysmal dilatation included medial degeneration in 20 (27%) patients, chronic dissection in 25 (33%), acute dissection in 14 (19%), atherosclerosis in 9 (12%) and Marfan syndrome in 2 (3%). In-hospital mortality was 5%. Neurologic complications occurred in 8 (11%) patients; 2 (3%) had strokes and 6 (8%) had transient neurologic deficits (TND). Patients undergoing TAR with moderate hypothermia had a significantly higher incidence of new-onset renal insufficiency (3 [23%] vs 0 [0%], P < 0.001) and TND (3 [23%] vs 3 [5%], P = 0.028) than the profound and deep hypothermia cohort. Excluding the 1 patient who died intraoperatively, 89% (95% CI: 79-94%) were alive at 1 year, 78% at 5 years (95% CI: 66-86%), and 73% at 10 years (95% CI: 59-82%). The combination of deep to moderate HCA, ACP, and the Y-graft is a safe and reproducible technique. Further inquiry is needed to assess whether early ACP provides superior clinical outcomes.
abstract_id: PUBMED:34420792
Long-term outcomes of hemiarch replacement with hypothermic circulatory arrest and retrograde cerebral perfusion. Objective: This study sought to report outcomes of hemiarch replacement with hypothermic circulatory arrest and retrograde cerebral perfusion, and secondarily, to report outcomes of this operative approach by type of underlying aortic disease.
Methods: This was an observational study of aortic surgeries from 2010 to 2018. All patients who underwent hemiarch replacement with retrograde cerebral perfusion were included, whereas patients undergoing partial or total arch replacement or concomitant elephant trunk procedures were excluded. Patients were dichotomized into 2 groups by underlying aortic disease; that is, acute aortic dissection (AAD) or aneurysmal degeneration of the aorta. These groups were analyzed for differences in short-term postoperative outcomes, including stroke and operative mortality (Society of Thoracic Surgeons definition). Multivariable Cox analysis was performed to identify variables associated with long-term survival after hemiarch replacement.
Results: A total of 500 patients undergoing hemiarch replacement with hypothermic circulatory arrest plus retrograde cerebral perfusion were identified, of whom 53.0% had aneurysmal disease and 47.0% had AAD. For the entire cohort, operative mortality was 6.4%, whereas stroke occurred in 4.6% of patients. Comparing AAD with aneurysm, operative mortality and stroke rates were similar across each group. Five-year survival was 84.4% ± 0.02% for the entire hemiarch cohort, whereas 5-year survival was 88.0% ± 0.02% for the aneurysm subgroup and was 80.5% ± 0.03% for the AAD subgroup. On multivariable analysis, AAD was not associated with an increased hazard of death, compared with aneurysm (P = .790).
Conclusions: Morbidity and mortality after hemiarch replacement with hypothermic circulatory arrest plus retrograde cerebral perfusion are acceptably low, and this operative approach may be as advantageous for AAD as it is for aneurysm.
abstract_id: PUBMED:37629655
Association between Bilateral Selective Antegrade Cerebral Perfusion and Postoperative Ischemic Stroke in Patients with Emergency Surgery for Acute Type A Aortic Dissection-Single Centre Experience. Acute type A aortic dissection (ATAAD) is a surgical emergency with a mortality of 1-2% per hour. Since its first description over 200 years ago, surgical techniques for repairing a dissected aorta have evolved, and with the introduction of hypothermic circulatory arrest and cerebral perfusion, complex techniques for replacing the entire aortic arch became possible. However, postoperative neurological complications contribute significantly to mortality in this group of patients. The aim of this study was to determine the association between different bilateral selective antegrade cerebral perfusion (ACP) times and the incidence of postoperative ischemic stroke in patients undergoing emergency surgery for ATAAD. Patients with documented hemorrhagic or ischemic stroke or clinical signs of stroke or neurological dysfunction prior to surgery, patients who died on the operating table or within 48 h after surgery, patients whose postoperative neurological status could not be assessed, and patients with incomplete medical records were excluded from this study. The diagnosis of postoperative stroke was made using head computed tomography (CT) imaging when clinical suspicion was raised by a neurologist in the immediate postoperative period. For selective bilateral antegrade cerebral perfusion, we used two balloon-tipped cannulas inserted under direct vision into the innominate artery and the left common carotid artery. Each cannula was connected to a separate pump with an independent pressure line. Near-infrared spectroscopy was used in all cases for cerebral oxygenation monitoring. Circulatory arrest was initiated after reaching a target core temperature of 25-28 °C. In total, 129 patients were included in this study. The incidence of postoperative ischemic stroke documented on head CT was 24.8% (31 patients), and postoperative death occurred in 20.9% (27 patients). The most common surgical technique performed was supravalvular ascending aorta and hemiarch replacement with a Dacron graft, in 69.8% (90 patients). The mean cardiopulmonary bypass time was 210 +/- 56.874 min, the mean aortic cross-clamp time was 114.775 +/- 34.602 min, and the mean cerebral perfusion time was 37.837 +/- 18.243 min. Using logistic regression, selective ACP of more than 40 min was independently associated with postoperative ischemic stroke (OR = 3.589; 95% CI = 1.418-9.085; p = 0.007). Considering the high incidence of postoperative stroke in our study population, we concluded that bilateral selective ACP should be used with caution, especially in patients with a severely calcified ascending aorta and/or aortic arch and supra-aortic vessels. All efforts should be made to minimize the duration of circulatory arrest when using bilateral selective ACP, with a target of less than 30 min, in hypothermia at a body temperature of 25-28 °C.
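The key estimate in this abstract (OR = 3.589 for ACP longer than 40 min) comes from a logistic regression model. The sketch below shows, with an entirely hypothetical data frame and covariates, how such an adjusted odds ratio is obtained by exponentiating the fitted coefficient; it is not the authors' analysis code.

# Sketch of the logistic-regression step described in this abstract: fit a model for
# postoperative stroke and exponentiate the coefficient of "ACP > 40 min" to get an
# adjusted odds ratio. The data frame and covariates here are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "stroke":      [0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0],
    "acp_over_40": [0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0],
    "age":         [55, 67, 48, 72, 60, 63, 70, 52, 66, 58, 71, 49],
})
X = sm.add_constant(df[["acp_over_40", "age"]])
fit = sm.Logit(df["stroke"], X).fit(disp=0)
odds_ratios = np.exp(fit.params)           # odds ratio per covariate
conf_int = np.exp(fit.conf_int())          # 95% CI on the OR scale
print(odds_ratios["acp_over_40"], conf_int.loc["acp_over_40"].values)
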
abstract_id: PUBMED:30155418
Neuro-protection in open arch surgery. Although antegrade cerebral perfusion (ACP) is the predominant method of protecting the brain in patients undergoing total arch replacement, both deep hypothermic circulatory arrest and ACP provide excellent and comparable clinical outcomes with regard to mortality, stroke, and temporary neurological deficit rates.
abstract_id: PUBMED:29780723
Cerebral perfusion issues in type A aortic dissection. Stroke events are very common in acute type A aortic dissection. Cerebral malperfusion may be manifest at presentation, owing to prolonged arch vessel hypoperfusion, or may develop after surgery because of inadequate cerebral protection during arch repair. To reduce this detrimental complication, several adjuncts can be adopted for cerebral protection, such as direct antegrade or retrograde cerebral perfusion (RCP) and the use of periods of deep to moderate hypothermic circulatory arrest; however, these are often insufficient because preoperative malperfusion may already have caused irreversible ischemic damage. The aim of the current review article is to analyze the principal series reporting neurological injuries in type A aortic dissection, to focus on outcomes according to the type of surgical management, and to identify possible predictors so that this complication can be better managed.
abstract_id: PUBMED:9725391
Experience with antegrade bihemispheric cerebral perfusion in aortic arch operations. Background: Various techniques have been used for cerebral protection in aortic arch operations. Antegrade cerebral perfusion has lost popularity to hypothermic circulatory arrest, which avoids the so-called cluttered operative field. Hypothermic circulatory arrest, however, has its own problems of coagulopathy, time constraints, and prolongation of cardiopulmonary bypass time.
Methods: Since June 1986 we have used antegrade bihemispheric cerebral perfusion with moderate hypothermia in 20 patients with aortic arch disease. Twelve patients had aneurysm, 7 had dissection, and 1 had traumatic tear. Five patients had had previous sternotomy for ascending aortic replacement. In addition to arch reconstruction, 7 patients had aortic valve replacement or repair, 2 patients had Bentall procedure, and 3 had selective innominate reconstruction. The mean cerebral perfusion time was 51+/-29 minutes. In 7 patients the cerebral perfusion time was between 60 and 120 minutes.
Results: There was no in-hospital or 30-day mortality. Blood product requirements were significantly lower with moderate hypothermia. One patient (5%) suffered a cerebrovascular accident. None of the 7 patients with cerebral perfusion times of 60 to 120 minutes had any neurologic deficits. These results are superior to those reported for hypothermic circulatory arrest with or without retrograde cerebral perfusion.
Conclusions: Antegrade bihemispheric cerebral perfusion is an optimal adjunct for cerebral protection during aortic arch operations.
abstract_id: PUBMED:25107725
New frontiers in aortic therapy: focus on deep hypothermic circulatory arrest. There is currently a paradigm shift in the conduct of adult aortic arch repair. Although deep hypothermic circulatory arrest has been the classic perfusion platform for adult aortic arch repair, recent developments have challenged this aortic arch paradigm. There has been a gradual clinical drift towards moderate, and even mild, hypothermic circulatory arrest combined with antegrade cerebral perfusion. This paradigm shift appears to be associated with equivalent clinical outcomes, and in certain settings, with improved outcomes. The advent of endovascular therapy has challenged even further the concept that circulatory arrest is required for adult aortic arch repair. These dramatic advances have resulted in the emergence of an international aortic arch surgery study group that aims to advance this dynamic field through consensus statements, meta-analysis, clinical database analysis, prospective registries, and randomized controlled trials.
Answer: The investigation into whether cerebral deficits other than frank stroke develop after total aortic arch replacement using hypothermic circulatory arrest (HCA) with antegrade selective cerebral perfusion (SCP) has yielded some insights. A study that included eleven patients who underwent elective total arch replacement using deep HCA with antegrade SCP found that cognitive functions evaluated at baseline, 3 weeks, and 6 months after surgery did not change significantly, except for the attention/calculation task of the Mini-Mental State Examination (MMSE). However, the decline in score for this task observed at 3 weeks post-surgery had reversed by 6 months, and scores for this and all other tests had returned to baseline levels. This suggests that long-lasting cognitive deficits other than frank stroke may not develop after total arch replacement surgery using deep HCA with SCP (PUBMED:22566263).
Additionally, other studies have compared different cerebral protection strategies in aortic arch surgery. A network meta-analysis that included 68 studies with a total of 26,968 patients found that both antegrade cerebral perfusion and retrograde cerebral perfusion were associated with significantly lower postoperative stroke and operative mortality rates compared with deep hypothermic circulatory arrest. The relative benefit of these techniques increased with the duration of the circulatory arrest, but no differences between antegrade cerebral perfusion and retrograde cerebral perfusion were found for all explored outcomes (PUBMED:30902473).
Furthermore, a study examining postoperative morbidity and mortality and long-term survival after total arch replacement (TAR) using deep to moderate HCA, ACP, and the Y-graft reported that neurologic complications occurred in 11% of patients, with 3% having strokes and 8% having transient neurologic deficits (PUBMED:32360886).
In summary, while there is a risk of cerebral deficits following total aortic arch replacement using HCA with SCP, the evidence suggests that long-lasting cognitive deficits other than frank stroke may not be a common outcome, and the use of antegrade cerebral perfusion techniques may be associated with better postoperative outcomes compared to deep hypothermic circulatory arrest. |
Instruction: Patterns of mortality in Indigenous adults in the Northern Territory, 1998-2003: are people living in more remote areas worse off?
Abstracts:
abstract_id: PUBMED:19296811
Patterns of mortality in Indigenous adults in the Northern Territory, 1998-2003: are people living in more remote areas worse off? Objective: To quantify Indigenous mortality in the Northern Territory by remoteness of residence.
Design, Setting And Participants: Australian Bureau of Statistics mortality data were used to compare rates of death from chronic disease in the NT Indigenous population with rates in the general Australian population over the period 1998-2003. Rates were evaluated by categories of remoteness based on the Accessibility/Remoteness Index of Australia: outer regional areas (ORAs), remote areas (RAs) and very remote areas (VRAs).
Main Outcome Measures: Mortality from cardiovascular disease, diabetes and renal disease; standardised mortality ratios (SMRs); percentage change in annual death rates; changes in mortality between 1998-2000 and 2001-2003.
Results: In 1998-2000, SMRs for all-cause mortality were 285% in ORAs, 875% in RAs and 214% in VRAs. In 2001-2003, corresponding SMRs were 325%, 731% and 208%. For the period 1998-2003, percentage changes in annual all-cause mortality were 4.4% (95% CI, -2.2%, 11.5%) in ORAs, -5.3% (95% CI, -9.6%, -0.8%) in RAs, and 1.1% (95% CI, -7.2%, 11.3%) in VRAs. In 2001-2003, compared with 1998-2000, changes in the number of Indigenous deaths were +35 in ORAs, -37 in RAs and +32 in VRAs. Similar patterns were observed for cardiovascular mortality.
Conclusions: Compared with mortality in the general Australian population, Indigenous mortality was up to nine times higher in RAs, three times higher in ORAs and two times higher in VRAs. The fact that rates were lowest in VRAs runs contrary to claims that increasing remoteness is associated with poorer health status. Despite the high death rate in RAs, there was a downward trend in mortality in RAs over the study period. This was partly attributable to a fall in the absolute number of deaths.
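The standardised mortality ratios quoted in this abstract (e.g. 875% in RAs) follow the usual indirect-standardisation arithmetic: observed deaths divided by the deaths expected if the reference (general Australian) age-specific rates applied to the local population, expressed as a percentage. The sketch below illustrates that calculation with invented age bands and counts, not the study's actual data.

# Sketch of the indirect-standardisation arithmetic behind an SMR such as "875%":
# observed deaths divided by the deaths expected if the reference population's
# age-specific rates applied to the local population, expressed as a percentage.
# Age bands, rates and counts below are invented for illustration.
reference_rates = {"25-44": 0.002, "45-64": 0.008, "65+": 0.040}   # deaths per person-year
population      = {"25-44": 3000,  "45-64": 1200,  "65+": 300}     # person-years at risk
observed_deaths = 150

expected = sum(reference_rates[a] * population[a] for a in population)
smr_percent = 100 * observed_deaths / expected
print(round(expected, 1), round(smr_percent))   # expected ~27.6 -> SMR ~543%
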
abstract_id: PUBMED:35954785
Built Environment Features and Cardiometabolic Mortality and Morbidity in Remote Indigenous Communities in the Northern Territory, Australia. Indigenous Australians experience poorer health than non-Indigenous Australians, with cardiometabolic diseases (CMD) being the leading causes of morbidity and mortality. Built environment (BE) features are known to shape cardiometabolic health in urban contexts, yet little research has assessed such relationships for remote-dwelling Indigenous Australians. This study assessed associations between BE features and CMD-related morbidity and mortality in a large sample of remote Indigenous Australian communities in the Northern Territory (NT). CMD-related morbidity and mortality data were extracted from NT government health databases for 120 remote Indigenous Australian communities for the period 1 January 2010 to 31 December 2015. BE features were extracted from Serviced Land Availability Programme (SLAP) maps. Associations were estimated using negative binomial regression analysis. Univariable analysis revealed protective effects on all-cause mortality for the BE features of Education, Health, Disused Buildings, and Oval, and on CMD-related emergency department admissions for the BE feature Accommodation. Incidence rate ratios (IRRs) were greater, however, for the BE features Infrastructure Transport and Infrastructure Shelter. Geographic Isolation was associated with elevated mortality-related IRRs. Multivariable regression did not yield consistent associations between BE features and CMD outcomes, other than negative relationships for Indigenous Location-level median age and Geographic Isolation. This study indicates that relationships between BE features and health outcomes in urban populations do not extend to remote Indigenous Australian communities. This may reflect an overwhelming impact of broader social inequity, limited correspondence of BE measures with remote-dwelling Indigenous contexts, or a 'tipping point' of collective BE influences affecting health more than singular BE features.
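The associations in this abstract are expressed as incidence rate ratios from negative binomial regression. The sketch below shows one plausible way such a community-level count model could be set up with a population offset, so that exponentiated coefficients are IRRs; the data, the single covariate and the dispersion value are all hypothetical, and this is not the authors' code.

# Sketch of a negative binomial regression for community-level count outcomes, with
# population as an exposure offset so coefficients exponentiate to incidence rate
# ratios (IRRs). The toy data and covariate are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "cmd_events": [12, 30, 7, 22, 15, 41, 9, 18],
    "population": [450, 900, 300, 700, 500, 1100, 350, 600],
    "has_oval":   [1, 0, 1, 0, 1, 0, 1, 0],   # example built-environment feature
})
X = sm.add_constant(df[["has_oval"]])
model = sm.GLM(
    df["cmd_events"], X,
    family=sm.families.NegativeBinomial(alpha=1.0),
    offset=np.log(df["population"]),
)
fit = model.fit()
print(np.exp(fit.params["has_oval"]))   # IRR for communities with an oval
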
abstract_id: PUBMED:25902766
The comparative cost of food and beverages at remote Indigenous communities, Northern Territory, Australia. Objective: To determine the average price difference between foods and beverages in remote Indigenous community stores and capital city supermarkets and explore differences across products.
Methods: A cross-sectional survey compared prices derived from point-of-sale data in 20 remote Northern Territory stores with supermarkets in capital cities of the Northern Territory and South Australia for groceries commonly purchased in remote stores. Average price differences for products, supply categories and food groups were examined.
Results: The 443 products examined represented 63% of food and beverage expenditure in remote stores. Remote products were, on average, 60% and 68% more expensive than advertised prices for Darwin and Adelaide supermarkets, respectively. The average price difference for fresh products was half that of packaged groceries for Darwin supermarkets and more than 50% for food groups that contributed most to purchasing.
Conclusions: Strategies employed by manufacturers and supermarkets, such as promotional pricing, and supermarkets' generic products lead to lower prices. These opportunities are not equally available to remote customers and are a major driver of price disparity.
Implications: Food affordability for already disadvantaged residents of remote communities could be improved by policies targeted at manufacturers, wholesalers and/or major supermarket chains.
abstract_id: PUBMED:31991842
Built Environments and Cardiometabolic Morbidity and Mortality in Remote Indigenous Communities in the Northern Territory, Australia. The health of Indigenous Australians is dramatically poorer than that of the non-Indigenous population. Amelioration of these differences has proven difficult. In part, this is attributable to a conceptualisation which approaches health disparities from the perspective of individual-level health behaviours, less so the environmental conditions that shape collective health behaviours. This ecological study investigated associations between the built environment and cardiometabolic mortality and morbidity in 123 remote Indigenous communities representing 104 Indigenous locations (ILOC) as defined by the Australian Bureau of Statistics. The presence of infrastructure and/or community buildings was used to create a cumulative exposure score (CES). Records of cardiometabolic-related deaths and health service interactions for the period 2010-2015 were sourced from government department records. A quasi-Poisson regression model was used to assess the associations between built environment "healthfulness" (CES, dichotomised) and cardiometabolic-related outcomes. Low relative to high CES was associated with greater rates of cardiometabolic-related morbidity for two of three morbidity measures (relative risk (RR) 2.41-2.54). Cardiometabolic-related mortality was markedly greater (RR 4.56, 95% confidence interval (CI), 1.74-11.93) for low-CES ILOCs. A lesser extent of "healthful" building types and infrastructure is associated with greater cardiometabolic-related morbidity and mortality in remote Indigenous locations. Attention to environments stands to improve remote Indigenous health.
abstract_id: PUBMED:24034417
Health inequity in the Northern Territory, Australia. Introduction: Understanding health inequity is necessary for addressing the disparities in health outcomes in many populations, including the health gap between Indigenous and non-Indigenous Australians. This report investigates the links between Indigenous health outcomes and socioeconomic disadvantage in the Northern Territory of Australia (NT).
Methods: Data sources included deaths and public hospital admissions between 2005 and 2007, and Socio-Economic Indexes for Areas from the 2006 Census. Age-sex standardisation, standardised rate ratios, the concentration index and Poisson regression models were used for statistical analysis.
Results: There was a strong inverse association between socioeconomic status (SES) and both mortality and morbidity rates. Mortality and morbidity rates in the low SES group were approximately twice those in the medium SES group, which were, in turn, 50% higher than those in the high SES group. The gradient was present for most disease categories for both deaths and hospital admissions. Residents in remote and very remote areas experienced higher mortality and hospital morbidity than non-remote areas. Approximately 25-30% of the NT Indigenous health disparity may be explained by socioeconomic disadvantage.
Conclusions: Socioeconomic disadvantage is a shared common denominator for the main causes of deaths and principal diagnoses of hospitalisations for the NT population. Closing the gap in health outcomes between Indigenous and non-Indigenous populations will require improving the socioeconomic conditions of Indigenous Australians.
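Of the methods listed in this abstract, the concentration index is the least self-explanatory. The sketch below computes it with the standard covariance formula C = (2 / mean(h)) x cov(h, r), where h is the health outcome and r the fractional socioeconomic rank; the five example areas are hypothetical, and a negative value indicates that the burden is concentrated among more disadvantaged areas.

# Sketch of the concentration index named in the methods, using the standard covariance
# formula C = (2 / mean(h)) * cov(h, r), where h is the health outcome (e.g. an area's
# hospitalisation rate) and r is the area's fractional rank in the SES distribution.
# The example areas below are hypothetical.
import numpy as np

ses_score  = np.array([1, 2, 3, 4, 5], dtype=float)           # low -> high SES areas
admissions = np.array([320, 260, 200, 150, 120], dtype=float)  # per 10,000 population

order = np.argsort(ses_score)
frac_rank = (np.arange(len(ses_score)) + 0.5) / len(ses_score)
h = admissions[order]
c_index = 2.0 * np.cov(h, frac_rank, bias=True)[0, 1] / h.mean()
print(round(c_index, 3))   # negative: higher admissions in lower-SES areas
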
abstract_id: PUBMED:28459122
Characteristics of trauma mortality in the Northern Territory, Australia. Background: While factors including remoteness, alcohol consumption, age and Indigenous ethnicity are well-documented associations of trauma mortality, less is known of trauma seasonality. This is particularly relevant to Australia's Northern Territory, with its tropical regions experiencing a climate of wet (hot and humid) and dry (warm) seasons annually. The aim of this study was to therefore, examine the characteristics of trauma mortality in the Top End, Northern Territory, Australia.
Methods: A retrospective review of the National Coroners Information System (NCIS) database from 1 January 2003 to 31 December 2007 analysed four-hundred and sixteen traumatic deaths where the trauma event and death occurred within the Top End of the Northern Territory.
Results: The annual traumatic death rate for the Top End was 58.7 per 100 000, with variation between regions (accessible 38.1 and remote 119.1 per 100 000, respectively). Overall, alcohol was involved in 56.5% of cases. The three most frequent mechanisms of death were suicide, transport-related injury and assault, together accounting for 81.5% of deaths. These mechanisms of death showed seasonal influence: transport-related deaths were 2.5 times more likely to occur in the dry than the wet season (p < 0.001), whereas assault-related deaths were 3.3 times more likely to occur during the wet season (p = 0.005), and suicide was 1.6 times more likely to occur during the wet season (p = 0.022). Transport-related deaths were 2.2 times more likely in remote and very remote settings than in accessible or moderately accessible regions (p < 0.003), whereas death by suicide was less likely to occur in remote and very remote regions than in accessible or moderately accessible areas (p = 0.012).
Conclusion: Excessively high rates of traumatic death in the Top End of the Northern Territory were evident, with contrasting seasonal and regional profiles. Based upon the data of this investigation, existing programmes to minimise trauma in the Northern Territory ought to be evaluated for seasonal and regional specificity.
abstract_id: PUBMED:23952967
Renal transplantation in indigenous Australians of the Northern Territory: closing the gap. Chronic kidney disease causes high morbidity and mortality among Indigenous Australians of the Northern Territory (NT). Studies have shown chronic kidney disease rates 4-10 times higher in indigenous than in non-indigenous Australians and prevalent dialysis rates of 700-1200 per million population. For most patients with end-stage renal disease, renal transplantation provides the optimal treatment: it reduces morbidity and mortality, and improves survival and quality of life. Graft and patient survival rates of over 80% at 5 years, depending on the donor source (deceased vs living donor), are expected worldwide. However, this is not the case in Indigenous Australians of the NT, where graft and patient survival are both around 50% at 5 years, suggesting death with a functioning graft as the most common cause of graft loss. Transplantation would provide the best treatment option for indigenous people, most of whom live in remote (18%) and very remote (63%) communities. Many have to relocate from their communities to urban or regional centres for dialysis. Available options to avoid relocation include peritoneal dialysis, home haemodialysis and community health centre dialysis, but acceptance rates for these are low; hence renal transplantation would provide the best option. There is evidence of identified barriers to renal transplantation for indigenous people of the NT. This review explores published data on why rates of renal transplantation in indigenous people of the NT are low and the reasons for poor outcomes, highlighting possible areas for improvement.
abstract_id: PUBMED:38225723
Bronchiectasis among Indigenous adults in the Top End of the Northern Territory, 2011-2020: a retrospective cohort study. Objectives: To assess the prevalence of bronchiectasis among Aboriginal and Torres Strait Islander (Indigenous) adults in the Top End of the Northern Territory, and mortality among Indigenous adults with bronchiectasis.
Study Design: Retrospective cohort study.
Setting, Participants: Aboriginal and Torres Strait Islander adults (18 years or older) living in the Top End Health Service region of the NT in whom bronchiectasis was confirmed by chest computed tomography (CT) during 1 January 2011 - 31 December 2020.
Main Outcome Measures: Prevalence of bronchiectasis, and all-cause mortality among Indigenous adults with CT-confirmed bronchiectasis - overall, by sex, and by health district - based on 2011 population numbers (census data).
Results: A total of 23 722 Indigenous adults lived in the Top End Health Service region in 2011; during 2011-2020, 459 people received chest CT-confirmed diagnoses of bronchiectasis. Their median age was 47.5 years (interquartile range [IQR], 39.9-56.8 years), 254 were women (55.3%), and 425 lived in areas classified as remote (93.0%). The estimated prevalence of bronchiectasis was 19.4 per 1000 residents (20.6 per 1000 women; 18.0 per 1000 men). The age-adjusted prevalence of bronchiectasis was 5.0 (95% CI, 1.4-8.5) cases per 1000 people in the Darwin Urban health area, and 18-36 cases per 1000 people in the three non-urban health areas. By 30 April 2023, 195 people with bronchiectasis had died (42.5%), at a median age of 60.3 years (IQR, 50.3-68.9 years).
Conclusion: The prevalence of bronchiectasis among Indigenous adults in the Top End of the NT is high and differs by health district, as does all-cause mortality among adults with bronchiectasis. The socio-demographic and other factors that contribute to the high prevalence of bronchiectasis among Indigenous Australians should be investigated so that interventions for reducing its burden can be developed.
abstract_id: PUBMED:22827433
Measuring what matters in delivering services to remote-dwelling Indigenous mothers and infants in the Northern Territory, Australia. Problem: In the Northern Territory, 64% of Indigenous births are to remote-dwelling mothers. Delivering high-quality health care in remote areas is challenging, but service improvements, informed by participative action research, are under way. Evaluation of these initiatives requires appropriate indicators. Few of the many existing maternal and infant health indicators are specifically framed for the remote context or exemplify an Indigenous consumer perspective. We aimed to identify an indicator framework with appropriate indicators to demonstrate improvements in health outcomes, determinants of health and health system performance for remote-dwelling mothers and infants from pregnancy to first birthday.
Design: We reviewed existing indicators; invited input from experts; investigated existing administrative data collections and examined findings from a record audit, ethnographic work and the evaluation of the Darwin Midwifery Group Practice.
Setting: Northern Territory.
Process: About 660 potentially relevant indicators were identified. We adapted the Aboriginal and Torres Strait Islander Health Performance Framework and populated the resulting framework with chosen indicators. We chose the indicators best able to monitor the impact of changes to remote service delivery by eliminating duplicated or irrelevant indicators using expert opinion, triangulating data and identifying key issues for remote maternal and infant health service improvements.
Lessons Learnt: We propose 31 indicators to monitor service delivery to remote-dwelling Indigenous mothers and infants. Our inclusive indicator framework covers the period from pregnancy to the first year of life and includes existing indicators, but also introduces novel ones. We also attempt to highlight an Indigenous consumer perspective.
abstract_id: PUBMED:32609424
Treatment and outcomes for indigenous and non-indigenous lung cancer patients in the Top End of the Northern Territory. Background: Lung cancer is the most common cause of cancer-related mortality for both Indigenous and non-Indigenous Australians, and the death rate of lung cancer in Indigenous Australians is increasing.
Aims: To provide a comprehensive description of patterns of lung cancer presentation, diagnosis, treatment and outcomes in Indigenous and non-Indigenous Australians in the Top End of the Northern Territory.
Methods: Retrospective cohort study of adult patients with a new diagnosis of lung cancer in the Top End between January 2010 and December 2014. Unadjusted survival probabilities by indigenous status were calculated. The primary end-point was all-cause mortality.
Results: Despite receiving similar diagnostic procedures and treatment, Indigenous Australians with lung cancer have poorer 1- and 5-year survival (25.0% and 9.4% respectively), when compared to non-Indigenous Australians included in the study (42.0% and 16.2% respectively). Indigenous lung cancer patients were more likely to be female (51.6% of indigenous patients were female, compared to 30.5% non-indigenous), be current smokers (61.3% vs 36.9%), have more comorbidities (73.6% vs 52.7%, 24.2% vs 5.3% and 30.8% vs 14.2% for respiratory disease, renal insufficiency and diabetes mellitus respectively), and live in more socio-economically disadvantaged (66.7% vs 14.2%) and very remote areas (66.1% vs 6.8%). They were also more likely to die at home, compared to their non-indigenous counterparts (64.3% vs 26.7%).
Conclusions: Indigenous patients from the Top End diagnosed with lung cancer were more likely to have poorer survival outcomes when compared to non-indigenous people. Potential reasons for the discrepancy in survival need to be addressed urgently.
Answer: The study examining patterns of mortality in Indigenous adults in the Northern Territory (NT) between 1998 and 2003 found that Indigenous mortality varied by remoteness of residence. The standardised mortality ratios (SMRs) indicated that mortality was up to nine times higher in remote areas (RAs), three times higher in outer regional areas (ORAs), and two times higher in very remote areas (VRAs) compared with the general Australian population. Interestingly, the study revealed that the mortality rates were lowest in VRAs, which contradicts the common belief that increasing remoteness is associated with poorer health status. Despite the high death rate in RAs, there was a downward trend in mortality in these areas over the study period, partly due to a fall in the absolute number of deaths (PUBMED:19296811).
This finding suggests that while Indigenous adults living in remote areas do experience higher mortality rates compared to the general population, those in the most remote areas (VRAs) were not necessarily worse off than those in less remote areas (RAs and ORAs). The reasons behind these patterns are not fully explained in the abstract, but the results indicate a complex relationship between health outcomes and remoteness that may not be linear or straightforward. |
Instruction: Surgery of the aortic root: should we go for the valve-sparing root reconstruction or the composite graft-valve replacement is still the first choice of treatment for these patients?
Abstracts:
abstract_id: PUBMED:37245627
Valve-sparing root replacement versus composite valve graft root replacement: Analysis of more than 1500 patients from 2 aortic centers. Objectives: The long-term outcomes comparing valve-sparing root replacement, composite valve graft with bioprosthesis, and mechanical prosthesis have yet to be explored. We investigated the long-term survival and reintervention rates after 1 of 3 major aortic root replacements in patients with tricuspid aortic valves and patients with bicuspid aortic valves.
Methods: A total of 1507 patients underwent valve-sparing root replacement (n = 700), composite valve graft with bioprosthesis (n = 703), or composite valve graft with mechanical prosthesis (n = 104) between 2004 and 2021 in 2 aortic centers, excluding those with dissection, endocarditis, stenosis, or prior aortic valve surgery. End points included mortality over time and cumulative incidence of aortic valve/proximal aorta reintervention. Multivariable Cox regression compared adjusted 12-year survival. Fine and Gray competing risk regression compared the risk and cumulative incidence of reintervention. Propensity score-matched subgroup analysis balanced the 2 major groups (composite valve graft with bioprosthesis and valve-sparing root replacement), and landmark analysis isolated outcomes beginning 4 years postoperatively.
Results: On multivariable analysis, both composite valve graft with bioprosthesis (hazard ratio, 1.91, P = .001) and composite valve graft with mechanical prosthesis (hazard ratio, 2.62, P = .005) showed increased 12-year mortality risk versus valve-sparing root replacement. After propensity score matching, valve-sparing root replacement displayed improved 12-year survival versus composite valve graft with bioprosthesis (87.9% vs 78.8%, P = .033). Adjusted 12-year reintervention risk in patients receiving composite valve graft with bioprosthesis or composite valve graft with mechanical prosthesis versus valve-sparing root replacement was similar (composite valve graft with bioprosthesis subdistribution hazard ratio, 1.49, P = .170) (composite valve graft with mechanical prosthesis subdistribution hazard ratio, 0.28, P = .110), with a cumulative incidence of 7% in valve-sparing root replacement, 17% in composite valve graft with bioprosthesis, and 2% in composite valve graft with mechanical prosthesis (P = .420). Landmark analysis at 4 years showed an increased incidence of late reintervention in composite valve graft with bioprosthesis versus valve-sparing root replacement (P = .008).
Conclusions: Valve-sparing root replacement, composite valve graft with mechanical prosthesis, and composite valve graft with bioprosthesis demonstrated excellent 12-year survival, with valve-sparing root replacement associated with better survival. All 3 groups have low incidence of reintervention, with valve-sparing root replacement showing decreased late postoperative need for reintervention compared with composite valve graft with bioprosthesis.
abstract_id: PUBMED:26313725
Surgery of the aortic root: should we go for the valve-sparing root reconstruction or the composite graft-valve replacement is still the first choice of treatment for these patients? Objective: To compare the results of the root reconstruction with the aortic valve-sparing operation versus composite graft-valve replacement.
Methods: From January 2002 to October 2013, 324 patients underwent aortic root reconstruction: 263 had composite graft-valve replacement and 61 had an aortic valve-sparing operation (43 reimplantation and 18 remodeling). Twenty-six percent of the patients were NYHA functional class III and IV; 9.6% had Marfan syndrome, and 12% had a bicuspid aortic valve. Aneurysms predominated over dissections (81% vs. 19%), with 7% being acute dissections. Follow-up was complete for all patients, with a median follow-up of 902 days for those undergoing composite graft-valve replacement and 1492 days for those undergoing the aortic valve-sparing operation.
Results: In-hospital mortality was 6.7% and 4.9%, respectively for composite graft-valve replacement and aortic valve-sparing operation (ns). During the late follow-up period, there was 0% moderate and 15.4% severe aortic regurgitation, and NYHA functional class I and II were 89.4% and 94%, respectively for composite graft-valve replacement and aortic valve-sparing operation (ns). Root reconstruction with aortic valve-sparing operation showed lower late mortality (P=0.001) and lower bleeding complications (P=0.006). There was no difference for thromboembolism, endocarditis, and need of reoperation.
Conclusion: Aortic root reconstruction with preservation of the valve should be the preferred operation, as it was associated with lower late mortality and better survival free of bleeding events.
abstract_id: PUBMED:37480983
Valve-sparing aortic root replacement versus composite valve graft with bioprosthesis in patients under age 50. Background: Although the unique risks of implanting a prosthetic valve after aortic valve (AV) surgery in young patients are well established, studies of aortic root replacement (ARR) are lacking. We investigated long-term outcomes after valve-sparing root replacement (VSRR) versus the use of a composite valve graft with bioprosthesis (b-CVG) in patients age <50 years.
Methods: A total of 543 patients age <50 years underwent VSRR (n = 335) or b-CVG (n = 208) between 2004 and 2021 from 2 aortic centers, excluding those with dissection or endocarditis. Endpoints included mortality over time, reoperative aortic valve replacement (AVR), and development of greater than moderate aortic insufficiency (AI) or aortic stenosis (AS). Fine and Gray competing risk regression was used to compare the risk of reintervention. Propensity score matching (PSM) balanced patient comorbidities, and landmark analysis isolated outcomes beginning 4 years postoperatively.
Results: Compared with VSRR, b-CVG was associated with lower 12-year survival (88.6% vs 92.9%; P = .036) and a higher rate of AV reintervention (37.6% vs 12.0%; P = .018). After PSM, survival was similar in the 2 arms (93.4% for b-CVG vs 93.0% for VSRR; P = .72). However, both Fine and Gray multivariable risk regression and PSM showed that b-CVG was independently associated with AV reintervention at >4 years postoperatively (Fine and Gray: subdistribution hazard ratio, 4.3 [95% confidence interval, 1.8-10.2; P = .001]; PSM: 35.7% for b-CVG versus 14.3% for VSRR; P = .024]). PSM rates of greater than moderate AI/AS at 10 years were more than 2-fold greater in the b-CVG arm compared with the VSRR arm (37.1% vs 15.9%; P = .571).
Conclusions: b-CVG in young patients is associated with early valvular degeneration, with increasing rates of reoperative AVR occurring even within 10 years. In contrast, VSRR is durable with excellent survival. In eligible young patients, every effort should be made to retain the native AV.
abstract_id: PUBMED:29270369
Systematic review and meta-analysis of surgical outcomes in Marfan patients undergoing aortic root surgery by composite-valve graft or valve sparing root replacement. Background: A major, life-limiting feature of Marfan syndrome (MFS) is the presence of aneurysmal disease. Cardiovascular intervention has dramatically improved the life expectancy of Marfan patients. Traditionally, the management of aortic root disease has been undertaken with composite-valve graft replacing the aortic valve and proximal aorta; more recently, valve sparing procedures have been developed to avoid the need for anticoagulation. This meta-analysis assesses the important surgical outcomes of the two surgical techniques.
Methods: A systematic review and meta-analysis of 23 studies reporting the outcomes of aortic root surgery in Marfan patients with data extracted for outcomes of early and late mortality, thromboembolic events, late bleeding complications and surgical reintervention rates.
Results: The outcomes of 2,976 Marfan patients undergoing aortic root surgery were analysed, 1,624 patients were treated with composite valve graft (CVG) and 1,352 patients were treated with valve sparing root replacement (VSRR). When compared against CVG, VSRR was associated with reduced risk of thromboembolism (OR =0.32; 95% CI, 0.16-0.62, P=0.0008), late hemorrhagic complications (OR =0.18; 95% CI, 0.07-0.45; P=0.0003) and endocarditis (OR =0.27; 95% CI, 0.10-0.68; P=0.006). Importantly there was no significant difference in reintervention rates between VSRR and CVG (OR =0.89; 95% CI, 0.35-2.24; P=0.80).
Conclusions: There is an increasing body of evidence that VSRR can be reliably performed in Marfan patients, resulting in a durable repair with no increased risk of re-operation compared to CVG, thus avoiding the need for systemic anticoagulation in selected patients.
abstract_id: PUBMED:30952541
Valve-sparing root replacement and composite valve graft replacement in patients with aortic regurgitation: From the Japan Cardiovascular Surgery Database. Objectives: The advantage of valve-sparing root replacement (VSRR) over aortic root replacement with a composite valve graft (CVG) remains unclear. We compared these 2 procedures with regard to early outcomes with propensity score matching using the Japan Cardiovascular Surgery Database.
Methods: Of 5303 patients from the Japan Cardiovascular Surgery Database who had undergone aortic root replacement in 2008 to 2017, emergent/urgent or redo cases and those with infective endocarditis or aortic stenosis were excluded (included n = 3841). Two propensity score-matched groups treated with VSRR or CVG replacement (n = 1164 each) were established.
Results: Overall, VSRR was more frequently performed for younger patients with Marfan syndrome with lower operative risk and aortic regurgitation grade compared with CVG replacement. After matching, a weaker but similar trend still existed in baseline characteristics. Although more concomitant procedures were performed in the CVG group, myocardial ischemia and cardiopulmonary bypass time was significantly longer in the VSRR group (median, 193 and 245 minutes) than the CVG group (172 and 223 minutes, both P < .01). The CVG group was associated with a significantly greater incidence of postoperative stroke (2.5% vs 1.1%, P = .01) and prolonged ventilation >72 hours (7.0% vs 4.6%, P = .02). In-hospital mortality rates were significantly greater in the CVG group (1.8%) than the VSRR group (0.8%, P = .02).
Conclusions: In overall Japanese institutions, VSRR was more frequently performed for patients at low risk and was associated with better morbidity and mortality rates than CVG replacement. After matching, VSRR was also associated with better morbidity and mortality rates despite longer procedure time.
abstract_id: PUBMED:33586247
Transcatheter aortic valve replacement after valve-sparing aortic root surgery. The use of transcatheter aortic valves for aortic regurgitation presents unique challenges. Although studies describe their successful off-label use, there is a paucity of literature on transcatheter aortic valve replacement after valve-sparing aortic root surgery. We present a patient with severe aortic regurgitation following valve-sparing aortic root replacement that was treated with an oversized transcatheter aortic valve.
abstract_id: PUBMED:24743005
Valve-sparing aortic root replacement†. Objectives: To evaluate our results of valve-sparing aortic root replacement and associated (multiple) valve repair.
Methods: From September 2003 to September 2013, 97 patients had valve-sparing aortic root replacement procedures. Patient records and preoperative, postoperative and recent echocardiograms were reviewed. Median age was 40.3 (range: 13.4-68.6) years and 67 (69.1%) were male. Seven (7.2%) patients were younger than 18 years, the youngest being 13.4 years. Fifty-four (55.7%) had Marfan syndrome, 2 (2.1%) other fibrous tissue diseases, 15 (15.5%) bicuspid aortic valve and 3 (3.1%) had earlier Fallot repair. The reimplantation technique was used in all, with a straight vascular prosthesis in 11 (26-34 mm) and the Valsalva prosthesis in 86 (26-32 mm). Concomitant aortic valve repair was performed in 43 (44.3%), mitral valve repair in 10 (10.3%), tricuspid valve repair in 5 (5.2%) and aortic arch replacement in 3 (3.1%).
Results: Mean follow-up was 4.2 ± 2.4 years. Follow-up was complete in all. One 14-year old patient died 1.3 years post-surgery presumably of ventricular arrhythmia. One patient underwent reoperation for aneurysm of the proximal right coronary artery after 4.9 years and 4 patients required aortic valve replacement, 3 of which because of endocarditis after 0.1, 0.8 and 1.3 years and 1 because of cusp prolapse after 3.8 years. No thrombo-embolic complications occurred. Mortality, root reoperation and aortic regurgitation were absent in 88.0 ± 0.5% at 5-year follow-up.
Conclusions: Results of valve-sparing root replacement are good, even in association with a high incidence of concomitant valve repair. Valve-sparing aortic root replacement can be performed at a very young age as long as an adult size prosthesis can be implanted.
abstract_id: PUBMED:34279037
Valve-sparing aortic root replacement in adult patients with congenital heart disease. Objectives: Aortic root dilatation is frequently observed in patients with congenital heart defects (CHD), but has received little attention in terms of developing a best practice approach for treatment. In this study, we analysed our experience with aortic valve-sparing root replacement in patients following previous operations to repair CHD.
Methods: In this study, we included 7 patients with a history of previous surgery for CHD who underwent aortic valve-sparing operations. The underlying initial defects were tetralogy of Fallot (n = 3), transposition of great arteries (n = 2), coarctation of the aorta (n = 1), and pulmonary atresia with ventricle septum defect (n = 1). The patients' age ranged from 20 to 40 years (mean age 31 ± 6 years).
Results: David reimplantation was performed in 6 patients and a Yacoub remodelling procedure was performed in 1 patient. Four patients underwent simultaneous pulmonary valve replacement. The mean interval between the corrective procedure for CHD and the aortic valve-sparing surgery was 26 ± 3 years. There was no operative or late mortality. The patient with transposition of great arteries following an arterial switch operation was re-operated 25 months after the valve-sparing procedure due to severe aortic regurgitation. In all other patients, the aortic valve regurgitation was mild or negligible at the latest follow-up (mean 8.7 years, range 2.1-15.1 years).
Conclusions: Valve-sparing aortic root replacement resulted in good aortic valve function during the first decade of observation in 6 of 7 patients. This approach can offer a viable alternative to root replacement with mechanical or biological prostheses in selected patients following CHD repair.
abstract_id: PUBMED:29963383
Biological solutions to aortic root replacement: valve-sparing versus bioprosthetic conduit. Composite valve graft implantation described by Bentall and De Bono is a well-documented technique of aortic root replacement used for a large spectrum of pathologic conditions involving the aortic valve and the ascending aorta. While mechanical valves were initially used, biological prostheses were later introduced in order to avoid long-term anticoagulation and its related complications. The increasing age of patients who undergo aortic root surgery, and data supporting the use of a biological aortic valve in the younger population, have significantly increased the need for a composite biological valved conduit. However, parallel to the increased use of biological valve in the context of a Bentall operation, aortic valve-sparing (AVS) operation have also been performed in a growing number of patients. Sarsam and David described the remodeling and the reimplantation procedures more than 25 years ago with the aim of sparing otherwise normal aortic valves in the presence of a root aneurysm. Important achievements in this discipline have occurred over the past decade including development and refinement of valve preserving aortic root replacement techniques, development of a classification system for aortic insufficiency, surgical approaches to cusp disease with different cusp anatomy. Both procedures can now provide excellent root reconstruction and adequate clinical results in terms of late valve durability. The AVS technique offers several advantages over the Bentall procedure, such as no need for oral anticoagulation and lifestyle adjustments. AVS operations have become established alternatives to Bentall procedures for patients with aortic root pathology. However, data comparing the safety and durability of these approaches are lacking.
abstract_id: PUBMED:30765631
Valve Sparing Aortic Root Replacement The preservation and repair of the aortic valve, since its valve annulus and leaflets have anatomically complicated 3-dimensional structures, is technically demanding with challenging aspects, and has not been a general surgical treatment, yet. However, in the recent trend of sparing and repair of the native valves in the cardiac surgical field, for aortic root diseases in relatively younger patients, valve sparing aortic root replacement has been attempted with the favorable outcome. In terms of its indication, such technically demanding surgeries had previously been indicated predominantly for patients without any valve deformities and significant aortic valve regurgitation, however in recent years, through the expanding indication with technical advances, they have been performed even for cases with some valve deformities including regurgitant bicuspid aortic valves. In this section, an up-to-data review would be described on valve sparing aortic root replacement, including its current indication, surgical techniques, and recent topics for discussion.
Answer: The choice between valve-sparing root replacement (VSRR) and composite graft valve replacement (CVG) for aortic root surgery depends on various factors, including patient characteristics, the underlying condition of the aortic valve, and the long-term outcomes associated with each procedure.
Several studies have compared the outcomes of VSRR and CVG. A study with over 1500 patients from two aortic centers found that VSRR was associated with better 12-year survival compared to both bioprosthesis and mechanical prosthesis composite valve grafts. The incidence of reintervention was low across all groups, but VSRR showed a decreased late postoperative need for reintervention compared with composite valve graft with bioprosthesis (PUBMED:37245627). Another study focusing on patients under age 50 reported higher unadjusted 12-year survival and a lower rate of reoperative aortic valve replacement with VSRR than with a composite valve graft with bioprosthesis; after propensity score matching, survival was similar in the two arms, but the bioprosthetic conduit remained independently associated with late valve reintervention and valvular degeneration (PUBMED:37480983).
A systematic review and meta-analysis of surgical outcomes in Marfan patients undergoing aortic root surgery found that VSRR was associated with reduced risk of thromboembolism, late hemorrhagic complications, and endocarditis compared to CVG, with no significant difference in reintervention rates (PUBMED:29270369). Similarly, a study from the Japan Cardiovascular Surgery Database indicated that VSRR was associated with better morbidity and mortality rates than CVG replacement, even after propensity score matching (PUBMED:30952541).
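To put these effect sizes in context, the adjusted hazard ratios of 1.91 and 2.62 reported in the two-center cohort mean that, at any point over the 12-year follow-up, mortality risk was roughly 1.9-fold higher with a bioprosthetic composite graft and roughly 2.6-fold higher with a mechanical composite graft than with VSRR, while the meta-analytic odds ratio of 0.32 for thromboembolism corresponds to roughly a two-thirds reduction in the odds of a thromboembolic event after VSRR compared with CVG (PUBMED:37245627; PUBMED:29270369).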
However, it is important to note that the decision between VSRR and CVG should be individualized based on the patient's specific condition and the surgeon's expertise. For instance, in patients with congenital heart disease, VSRR can offer a viable alternative to root replacement with mechanical or biological prostheses in selected patients following CHD repair (PUBMED:34279037). Additionally, the technical demands and the anatomical considerations of the aortic valve and root should be taken into account (PUBMED:30765631).
In conclusion, while both VSRR and CVG have their merits, current evidence suggests that VSRR may offer better long-term survival, lower rates of reintervention, and fewer complications related to thromboembolism and bleeding, making it a preferred option in suitable patients. However, the choice of procedure should be tailored to the individual patient's needs, the underlying pathology, and the surgeon's experience and expertise.
Instruction: Can cyclosporine blood level be reduced to half after heart transplantation?
Abstracts:
abstract_id: PUBMED:20430207
Can cyclosporine blood level be reduced to half after heart transplantation? Background: Cyclosporine (CsA) is widely used after heart transplantation. The purpose of this prospective randomized study was to evaluate the safety and efficacy of reduction of CsA blood level to one-half of the traditional blood concentration under a regimen of everolimus (EVL), CsA, and steroid.
Materials And Methods: This prospective, 6-month, randomized, open-label study included adult (aged 18 to 65 years) recipients of a primary heart transplant with serum creatinine ≤2.8 mg/dL. Among 52 patients who underwent heart transplantation from December 2004 to March 2006, we excluded those who were hepatitis B or C carriers, who received organs from donors >60 years old, had a cold ischemia time >6 hours, or had plasma renin activity ≥25%. All patients received CsA (C2 blood level 1000-1400 ng/mL), EVL (C0 target 3-8 ng/mL), and corticosteroids to day 60, before random entry into one of 2 groups: SE (C2 blood level 800-1200 ng/mL on days 60-149 and 600-1000 ng/mL on days 150-180) or RE, with CsA reduced by one-half after 3 months (C2 400-600 ng/mL on days 90-149 and 300-500 ng/mL on days 150-180).
Results: The 25 recipients eligible for this study included 13 patients in the SE group and 12 in the RE group. There was no operative mortality in either group, and no death or graft loss was noted within 6 months in either group. Mean serum creatinine at month 6 tended to be lower in the RE cohort (1.23 ± 0.44 mg/dL versus 1.55 ± 0.85 mg/dL; P=.093). Biopsy-proven acute rejection ≥ grade 3A was observed in only 1 patient (7.7%), who was in the SE group. There were no acute rejection episodes associated with hemodynamic compromise. The incidences of adverse events were similar in each group.
Conclusions: Concentration-controlled EVL (C0 target 3-8 ng/mL) in combination with reduced CsA exposure of one-half the usual concentration achieved good efficacy and safety over 6 months. The renal function at 6 months among the RE group showed a trend toward improvement, suggesting a benefit of halving the target CsA blood level after heart transplantation.
abstract_id: PUBMED:6389991
Targeted blood levels of cyclosporin for cardiac transplantation. Forty-nine patients have undergone cardiac transplantation since July, 1982, and have been treated with maintenance cyclosporin and low-dose prednisone, 15 to 20 mg. Cyclosporin dose has been targeted to a whole-blood level of 1,000 ng/ml as measured by radioimmune assay. The actuarial survival rate in this group of patients has been 79% at 12 months and 71% at 21 months. Histologic rejection has occurred at all blood levels of cyclosporin, as has significant nephrotoxicity. The hepatic toxicity encountered has been more a clinical nuisance than significant problem. The administered dose of cyclosporin required to reach a target of 1,000 ng/ml has varied between 2 and 30 mg/kg/day. The average perioperative and late serum creatinine levels were 1.2 and 1.49 mg/dl and occurred with cyclosporin levels of 1,078 and 1,068 ng/ml, respectively. Late cyclosporin toxicity has persisted despite reduction in the dose of cyclosporin below the targeted 1,000 ng/ml. Some method of blood level monitoring is necessary in patients receiving cyclosporin immune suppression to assure adequacy of the administered dose. The 1,000 ng/ml target has provided adequate immune suppression. Significant nephrotoxicity has not correlated with the blood level measured.
abstract_id: PUBMED:33744197
Impact of intrapatient blood level variability of calcineurin inhibitors on heart transplant outcomes. Introduction And Objectives: Intrapatient blood level variability (IPV) of calcineurin inhibitors has been associated with poor outcomes in solid-organ transplant, but data for heart transplant are scarce. Our purpose was to ascertain the clinical impact of IPV in a multi-institutional cohort of heart transplant recipients.
Methods: We retrospectively studied patients aged ≥18 years, with a first heart transplant performed between 2000 and 2014 and surviving≥ 1 year. IPV was assessed by the coefficient of variation of trough levels from posttransplant months 4 to 12. A composite of rejection or mortality/graft loss or rejection and all-cause mortality/graft loss between years 1 to 5 posttransplant were analyzed by Cox regression analysis.
Results: The study group consisted of 1581 recipients (median age, 56 years; women, 21%). Cyclosporine immediate-release tacrolimus and prolonged-release tacrolimus were used in 790, 527 and 264 patients, respectively. On multivariable analysis, coefficient of variation> 27.8% showed a nonsignificant trend to association with 5-year rejection-free survival (HR, 1.298; 95%CI, 0.993-1.695; P=.056) and with 5-year mortality (HR, 1.387; 95%CI, 0.979-1.963; P=.065). Association with rejection became significant on analysis of only those patients without rejection episodes during the first year posttransplant (HR, 1.609; 95%CI, 1.129-2.295; P=.011). The tacrolimus-based formulation had less IPV than cyclosporine and better results with less influence of IPV.
Conclusions: IPV of calcineurin inhibitors is only marginally associated with mid-term outcomes after heart transplant, particularly with the tacrolimus-based immunosuppression, although it could play a role in the most stable recipients.
abstract_id: PUBMED:16364863
The effect of beta-blocker use on cyclosporine level in cardiac transplant recipients. Background: Beta-blockers are frequently used after cardiac transplantation for blood pressure control. There is no well-known interaction between beta-blockers and cyclosporine A (CsA). However, recent reports have suggested that carvedilol, but not metoprolol, modulates P-glycoprotein (P-gp), a membrane protein that regulates CsA absorption. We evaluated the effects of carvedilol and metoprolol on CsA level when initiated in cardiac transplant recipients.
Methods: Using our cardiac transplant database, we identified patients who were started on either carvedilol or metoprolol for blood pressure control. We then compared their CsA doses and levels before and within 2 weeks after the initiation of beta-blocker therapy.
Results: We found 20 patients taking metoprolol and 12 patients taking carvedilol. With initiation of metoprolol, CsA level decreased in 12 patients and increased in 8 patients. The mean CsA level before and after metoprolol initiation was 236 ng/ml and 253 ng/ml, respectively (p = 0.50). In an attempt to maintain a therapeutic CsA level, the mean CsA dose was not significantly adjusted (from a mean of 293 mg/day to a mean of 294 mg/day; p = 0.92). In the Carvedilol Group, CsA level increased in 10 of 12 patients. The mean CsA level before the initiation of carvedilol was 257 ng/ml. The mean CsA level after carvedilol initiation was 380 ng/ml (p = 0.009). In an attempt to maintain a therapeutic CsA level, the mean CsA dose was reduced by 10%, from a mean of 319 mg/day to a mean of 288 mg/day (p = 0.004).
Conclusion: Carvedilol, but not metoprolol, was associated with a significant increase in CsA levels after initiation in cardiac transplant recipients. Although carvedilol and CsA do not interact at the level of cytochrome P450 system, it appears that carvedilol influences CsA levels through its effects on P-gp. An average reduction of 10% is necessary on the CsA dose upon initiation of carvedilol, and close follow-up of the level is essential.
abstract_id: PUBMED:36722558
An idiosyncratic reaction of unilateral common peroneal nerve palsy associated with below desired therapeutic range of tacrolimus level in a patient postheart transplantation. Tacrolimus (TAC) is a very effective medication in routine use after solid organ transplantation. The potential, but infrequently reported neurological adverse effect of TAC is peripheral neuropathy (PN). This has rarely been reported in heart transplant patients. To the best of our knowledge, the data regarding mononeuropathy of common peroneal nerve presented with foot drop due to low whole blood trough TAC level are very limited in the early days postheart transplantation. An idiosyncratic reaction might be suspected in the early postoperative period, when the whole blood trough levels of TAC fall below or within the desired therapeutic range associated with any adverse events after ruling out other causes. We report a 21-year-old patient, who underwent heart transplantation after a suitable donor was identified, and presented with a new-onset right side foot drop on the 10th postoperative day. According to the WHO-Uppsala Monitoring Center causality assessment scale, the likely culprit agent is TAC. Rapid and progressive improvement of foot drop occurred after stopping it and changed over to cyclosporine.
abstract_id: PUBMED:9272414
A comparison of EMIT and FPIA methods for the detection of cyclosporin A blood levels: does impaired liver function make a difference? Objective: Apparent cyclosporin A (CSA) blood levels, as determined by fluorescence polarization immunoassay (FPIA) and enzyme-multiplied immunoassay technique (EMIT), were compared in CSA-treated patients with various degrees of liver dysfunction.
Methods: FPIA and EMIT were performed in parallel according to test manufacturer instructions in blood from kidney (n = 82), liver (n = 96) and heart transplant (n = 20) patients.
Results: The precision of both techniques was greatest in patients with the highest blood levels, and at each blood level greater for the FPIA than for the EMIT. Apparent CSA blood levels, as determined by EMIT, were typically approximately 70% of those determined by FPIA, indicating greater cross-reaction of the antibody in the FPIA with CSA metabolites. However, the ratio of values determined with EMIT and FPIA was very similar in kidney, liver and heart transplant patients. Among liver transplant patients it was also very similar in those without major alterations of hepatic function and in those with impaired excretory (increased bilirubin and gamma GT) or synthetic (i.e., reduced thromboplastin time) function. Extended storage of blood samples for up to 10 days did not affect apparent CSA blood level estimates by EMIT in a clinically relevant manner.
Conclusions: We conclude that the greater specificity of the antibody in the EMIT for the CSA parent compound does not translate into a clinically relevant advantage for CSA monitoring.
abstract_id: PUBMED:3140966
Treatment with cyclosporin and risks of graft rejection in male kidney and heart transplant recipients with non-O blood. In a consecutive series of 146 kidney transplant recipients treated with cyclosporin A a strong correlation between matching for the HLA-A, HLA-B, and HLA-DR loci specificities and outcome of the grafts was observed in male recipients with non-O blood groups. Such a beneficial effect of matching was not found in female patients or male patients with blood group O. In these patients survival of the grafts at one year was good irrespective of the number of HLA-A, B, and DR mismatches. Also in 47 male heart transplant recipients immune responsiveness against mismatched HLA antigens was related to blood group. A significantly higher incidence of rejection episodes was observed in male patients with non-O blood groups (n = 32) than in those with blood group O (n = 15). Matching for HLA-DR reduced the number of acute rejection episodes in male patients with non-O blood. These findings may help explain the controversial reports about the importance of HLA matching in organ transplantation. Furthermore, as most candidates for heart transplantation are male and not of blood group O, the higher incidence of graft rejection in these patients underscores the need for an exchange strategy of donor hearts.
abstract_id: PUBMED:3541314
Pretransplant conditioning with donor-specific transfusions using heated blood and cyclosporine. Preservation of the transfusion effect in the absence of sensitization. Previous studies from our laboratory showed that pretransplant conditioning with fresh donor-specific blood (DST) combined with cyclosporine (CsA) resulted in long-term prolongation of ACI heterotopic cardiac allografts in LEW recipients treated with subtherapeutic doses of CsA. The concomitant administration of CsA profoundly reduced but did not eliminate the DST-induced sensitization. The purpose of the present study was to investigate in the ACI-to-LEW cardiac allograft model whether heat-treatment of the blood would further reduce the sensitizing potential of DST while maintaining their benefits in our protocol. Fresh heparinized ACI blood was heated at 45 degrees C for 60 min. Then 1.5 ml was administered i.v. to LEW rats on day -8 with respect to grafting (day 0). Controls received heat-treated BUF blood. Donor heat-treated blood (HT-DST), unlike fresh blood, did not induce a humoral cytotoxic response and resulted in the prolongation of cardiac allograft survival (13.2 +/- 2.7 vs. 7.2 +/- 1.0; P less than 0.01). Treatment of HT-DST recipients with postoperative subtherapeutic doses of CsA (2.5 mg/kg/day x 30) extended graft survival (46.6 +/- 22.0 vs. 7.7 +/- 2.0 days; P less than 0.01). The combined pretransplant administration of HT-DST and CsA followed by posttransplant subtherapeutic doses of CsA led to long-term prolongation of cardiac grafts (122.0 +/- 73.0 vs. 31.7 +/- 22.0 days; P less than 0.01). These studies demonstrate that heat-treatment of allogeneic blood eliminates the humoral responses to DST and actually enhances their beneficial effects in terms of graft survival. Such effects can be dramatically increased by CsA. The possible mechanism of these phenomena are discussed.
abstract_id: PUBMED:3916516
Management of cyclosporine toxicity by reduced dosage and azathioprine. While cyclosporine immunosuppression has improved the results of heart transplantation, nephrotoxicity and hypertension occurred in a large percentage of surviving patients. The potential irreversibility of these toxicities was noted in patients chronically exposed to cyclosporine. The immunosuppressive protocol was modified in those patients with a serum creatinine greater than 2.5 mg/100 mL. Azathioprine was added to the immunosuppressive regimen, and the dose of cyclosporine was steadily decreased until the creatinine was lowered. The combination of low-dose cyclosporine and azathioprine provided effective immunosuppression and was associated with a significant decrease in serum creatinine level, in systolic and diastolic blood pressures. Although it is premature to assess the long-term results of this immunosuppressive protocol, the early results are encouraging.
abstract_id: PUBMED:16678029
Early renal benefit of rapamycin combined with reduced calcineurin inhibitor dose in pediatric heart transplantation patients. Background: Calcineurin inhibitors such as cyclosporine are effective in preventing rejection in recipients of solid organ transplants. Unfortunately, the prolonged use of calcineurin inhibitors may result in progressive renal injury.
Methods: We studied the renal function of 15 pediatric heart transplant recipients who were taking calcineurin inhibitors. Their renal function was studied before and after rapamycin was introduced to their immunosuppression regimen. With the introduction of rapamycin, the patients were given a lower dose of calcineurin inhibitors, and the calcineurin inhibitor was discontinued in 5 patients.
Results: Renal function improved significantly within 30 days after these changes in the calcineurin inhibitor dose were instituted. Mean blood urea nitrogen and mean serum creatinine decreased, and mean creatinine clearance increased. The patients' mean blood urea nitrogen was 27.1 ± 12.4 mg/dl pre-rapamycin and decreased to 18.6 ± 11.1 mg/dl post-rapamycin (p = 0.014). Similarly, creatinine decreased from 1.0 ± 0.5 mg/dl to 0.8 ± 0.3 mg/dl (p = 0.019). Creatinine clearance increased from 88 ± 28 ml/min/1.73 m² to 105 ± 27 ml/min/1.73 m² (p = 0.008). The patients' lipid levels did not change after they were prescribed rapamycin. Biopsy-negative rejection developed in 2 patients.
Conclusions: The introduction of rapamycin to the immunosuppressive regimen of patients taking calcineurin inhibitors, with a concomitant reduction of the calcineurin inhibitor dose, may improve renal function within 30 days, without a significant increase in rejection.
Answer: Yes, cyclosporine (CsA) blood levels can be reduced to half after heart transplantation under certain conditions. A prospective randomized study evaluated the safety and efficacy of reducing CsA blood levels to one-half of the traditional concentration when used in combination with everolimus (EVL) and steroids. The study included adult recipients of a primary heart transplant and showed that over a 6-month period, there was no operative mortality, death, or graft loss in either the standard exposure (SE) group or the reduced exposure (RE) group. The RE group, which had CsA reduced by one-half after 3 months, showed a trend toward lower mean serum creatinine at month 6, suggesting improved renal function. Only one patient in the SE group experienced biopsy-proven acute rejection of grade 3A or higher, and there were no acute rejection episodes associated with hemodynamic compromise. The incidences of adverse events were similar in both groups. The study concluded that concentration-controlled EVL in combination with reduced CsA exposure achieved good efficacy and safety, indicating that halving the target CsA blood level after heart transplantation could be beneficial (PUBMED:20430207).
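Concretely, the reduction was applied to the cyclosporine blood-level targets rather than to a fixed dose: the standard-exposure arm aimed for C2 levels of 800-1200 ng/mL on days 60-149 and 600-1000 ng/mL on days 150-180, whereas the reduced-exposure arm aimed for 400-600 ng/mL on days 90-149 and 300-500 ng/mL on days 150-180, roughly half the standard targets (PUBMED:20430207).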
However, it is important to note that CsA levels must be carefully monitored, as intrapatient blood level variability (IPV) of calcineurin inhibitors like CsA has been associated with poor outcomes in solid-organ transplants, including heart transplants. While IPV is only marginally associated with mid-term outcomes after heart transplant, particularly with tacrolimus-based immunosuppression, it could play a role in the most stable recipients (PUBMED:33744197).
Additionally, other studies have shown that the introduction of rapamycin with a concomitant reduction of the CsA dose may improve renal function within 30 days without a significant increase in rejection (PUBMED:16678029).
In summary, under a regimen that includes everolimus and steroids, it is possible to safely reduce CsA blood levels to half after heart transplantation, but careful monitoring and management are crucial to ensure patient safety and graft survival.
Instruction: Can positron emission tomography/computed tomography with the dual tracers fluorine-18 fluoroestradiol and fluorodeoxyglucose predict neoadjuvant chemotherapy response of breast cancer?
Abstracts:
abstract_id: PUBMED:24205151
Can positron emission tomography/computed tomography with the dual tracers fluorine-18 fluoroestradiol and fluorodeoxyglucose predict neoadjuvant chemotherapy response of breast cancer?--A pilot study. Objective: To assess the clinical value of dual tracers Positron emission tomography/computed tomography (PET/CT) (18)F-fluoroestradiol ((18)F-FES) and (18)F-fluorodeoxyglucose ((18)F-FDG) in predicting neoadjuvant chemotherapy response (NAC) of breast cancer.
Methods: Eighteen consecutive patients with newly diagnosed, non-inflammatory, stage II and III breast cancer undergoing NAC were included. Before chemotherapy, they underwent both (18)F-FES and (18)F-FDG PET/CT scans. Surgery was performed after three to six cycles of chemotherapy. Tumor response was graded and divided into two groups: the responders and non-responders. We used the maximum standardized uptake value (SUVmax) to qualify each primary lesion.
Results: Pathologic analysis revealed 10 patients were responders while the other 8 patients were non-responders. There was no statistical difference of SUVmax-FDG and tumor size between these two groups (P>0.05). On the contrary, SUVmax-FES was lower in responders (1.75±0.66 versus 4.42±1.14; U=5, P=0.002); and SUVmax-FES/FDG also showed great value in predicting outcome (0.16±0.06 versus 0.54±0.22; U=5, P=0.002).
Conclusions: Our study showed (18)F-FES PET/CT might be feasible to predict response of NAC. However, whether the use of dual tracers (18)F-FES and (18)F-FDG has complementary value should be further studied.
abstract_id: PUBMED:23714689
Fluorine-18 fluorodeoxyglucose positron emission tomography-computed tomography in monitoring the response of breast cancer to neoadjuvant chemotherapy: a meta-analysis. Introduction: To evaluate the diagnostic performance of fluorine-18 fluorodeoxyglucose positron emission tomography (FDG-PET) in monitoring the response of breast cancers to neoadjuvant chemotherapy.
Methods: Articles published in medical and oncologic journals between January 2000 and June 2012 were identified by systematic MEDLINE, Cochrane Database for Systematic Reviews, and EMBASE, and by manual searches of the references listed in original and review articles. Quality of the included studies was assessed by using the quality assessment of diagnosis accuracy studies score tool. Meta-DiSc statistical software was used to calculate the summary sensitivity and specificity, positive predictive and negative predictive values, and the summary receiver operating characteristics curve (SROC).
Results: Fifteen studies with 745 patients were included in the study after meeting the inclusion criteria. The pooled sensitivity and specificity of FDG-PET or PET/CT were 80.5% (95% CI, 75.9%-84.5%) and 78.8% (95% CI, 74.1%-83.0%), respectively, and the positive predictive and negative predictive values were 79.8% and 79.5%, respectively. After 1 and 2 courses of chemotherapy, the pooled sensitivity and false-positive rate were 78.2% (95% CI, 73.8%-82.5%) and 11.2%, respectively; and 82.4% (95% CI, 77.4%-86.1%) and 19.3%, respectively.
Conclusions: Analysis of the findings suggests that FDG-PET has moderately high sensitivity and specificity in early detection of responders from nonresponders, and can be applied in the evaluation of breast cancer response to neoadjuvant chemotherapy in patients with breast cancer.
abstract_id: PUBMED:28377466
Complete Metabolic Response on Interim 18F-Fluorodeoxyglucose Positron Emission Tomography/Computed Tomography to Predict Long-Term Survival in Patients with Breast Cancer Undergoing Neoadjuvant Chemotherapy. Background: This study aims to investigate the prognostic role of complete metabolic response (CMR) on interim 18F-fluorodeoxyglucose positron emission tomography/computed tomography (PET/CT) in patients with breast cancer (BC) receiving neoadjuvant chemotherapy (NAC) according to tumor subtypes and PET timing.
Patients And Methods: Eighty-six consecutive patients with stage II/III BC who received PET/CT during or following NAC were included. Time-dependent receiver operating characteristic analysis and Kaplan-Meier analysis were used to determine correlation between metabolic parameters and survival outcomes.
Results: The median follow-up duration was 71 months. Maximum standardized uptake value (SUVmax) on an interim PET/CT independently correlated with survival by multivariate analysis (overall survival [OS]: hazard ratio: 1.139, 95% confidence interval: 1.058-1.226, p = .001). By taking PET timing into account, best association of SUVmax with survival was obtained on PET after two to three cycles of NAC (area under the curve [AUC]: 0.941 at 1 year after initiation of NAC) and PET after four to five (AUC: 0.871 at 4 years), while PET after six to eight cycles of NAC had less prognostic value. CMR was obtained in 62% of patients (23/37) with estrogen receptor-positive (ER+)/human epidermal growth factor receptor 2-negative (HER2-) BC, in 48% (12/25) triple-negative BC (TNBC), and in 75% (18/24) HER2-positive (HER2+) tumors. Patients with CMR on an early-mid PET had 5-year OS rates of 92% for ER+/HER2- tumors and 80% for TNBC, respectively. Among HER2+ subtype, 89% patients (16/18) with CMR had no relapse.
Conclusion: CMR indicated a significantly better outcome in BC and may serve as a favorable imaging prognosticator. The Oncologist 2017;22:526-534 IMPLICATIONS FOR PRACTICE: This study shows a significantly better outcome for breast cancer (BC) patients who achieved complete metabolic response (CMR) on 18F-fluorodeoxyglucose emission tomography/computed tomography (PET/CT) during neoadjuvant chemotherapy, especially for hormone receptor-positive tumors and triple negative BC. Moreover, PET/CT performed during an early- or mid-course neoadjuvant therapy is more predictive for long-term survival outcome than a late PET/CT. These findings support that CMR may serve as a favorable imaging prognosticator for BC and has potential for application to daily clinical practice.
abstract_id: PUBMED:36629643
The effect of positron emission tomography/computed tomography in axillary surgery approach after neoadjuvant treatment in breast cancer. Objective: The aim of this study was to determine the role of positron emission tomography/computed tomography in the decision to perform axillary surgery by comparing positron emission tomography/computed tomography findings with pathology consistency after neoadjuvant chemotherapy.
Methods: Patients who were diagnosed for T1-4, cN1/2 breast cancer receiving neoadjuvant chemotherapy in our clinic between January 2016 and February 2021 were evaluated. Clinical and radiological responses, axillary surgery, and histopathological results after neoadjuvant chemotherapy were evaluated.
Results: Axillary involvement was not detected in positron emission tomography/computed tomography after neoadjuvant chemotherapy in 140 (60.6%) of 231 node-positive patients. In total, 88 (62.8%) of these patients underwent sentinel lymph node biopsy, and axillary lymph node dissection was performed in 29 (33%) of these patients upon detection of 1 or 2 positive lymph nodes. The other 52 (37.1%) patients underwent direct axillary lymph node dissection, and no metastatic lymph nodes were detected in 33 (63.4%) patients. No metastatic lymph node was found pathologically in a total of 92 patients without involvement in positron emission tomography/computed tomography, and the negative predictive value was calculated as 65.7%. Axillary lymph node dissection was performed in 91 (39.4%) patients with axillary involvement in positron emission tomography/computed tomography after neoadjuvant chemotherapy. Metastatic lymph nodes were found pathologically in 83 of these patients, and the positive predictive value was calculated as 91.2%.
Conclusion: Positron emission tomography/computed tomography was found to be useful in the evaluation of clinical response, but it was not sufficient enough to predict a complete pathological response. When planning axillary surgery, axillary lymph node dissection should not be decided only with a positive positron emission tomography/computed tomography. Other radiological images should also be evaluated, and a positive sentinel lymph node biopsy should be the determinant of axillary lymph node dissection.
abstract_id: PUBMED:23787040
Can fluorine-18 fluoroestradiol positron emission tomography-computed tomography demonstrate the heterogeneity of breast cancer in vivo? Aim: Our study was to investigate the heterogeneity of estrogen receptor (ER) expression among tumor sites by using fluorine-18 ((18)F) fluoroestradiol (FES) positron-emission tomography-computed tomography (PET-CT) imaging.
Methods: Thirty-two breast cancer patients underwent both (18)F-FES and (18)F fluorodeoxyglucose (FDG) PET-CTs from June 2010 to December 2011 in our center (mean age, 53 years; range, 27-77 years). We used the maximum standardized uptake value to quantify ER expression and a cutoff value of 1.5 to dichotomize results into ER(+) and ER(-). The difference of heterogeneity between the initial patients and patients with recurrent or metastatic disease after treatments was assessed by using the χ(2) test. Also, the (18)F-FES uptake was compared with the (18)F-FDG uptake by use of Spearman correlation coefficients.
Results: A total number of 237 lesions in 32 patients were detected. Among them, most lesions (64.1% [152/237]) were bone metastasis. A striking 33.4-fold difference in (18)F-FES uptake was observed among different patients (maximum standardized uptake value range, 0.5 to approximately 16.7), and a 8.2-fold difference was observed among lesions within the same individual (1.0 to approximately 8.2). As for (18)F-FDG uptake, the difference was 11.6-fold (1.3 to approximately 15.1) and 9.9-fold (1.4 to approximately 13.8), respectively. In 28.1% (9/32) of the patients, both (18)F-FES(+) and (18)F-FES(-) metastases were present, which suggests partial discordant ER expression. After treatments, 37.5% (9/24) patients with recurrent or metastatic breast cancer showed heterogeneity, whereas no untreated patient was detected to exist discordant ER expression (χ(2), 4.174; P < .05). In addition, the (18)F-FES uptake showed a weak correlation with the (18)F-FDG uptake (ρ = 0.248; P < .05).
Conclusion: (18)F-FES and (18)F-FDG uptake varied greatly both within and among patients. (18)F-FES PET-CT demonstrated a conspicuous number of patients with the heterogeneity of ER expression.
abstract_id: PUBMED:24841218
Early assessment with 18F-fluorodeoxyglucose positron emission tomography/computed tomography can help predict the outcome of neoadjuvant chemotherapy in triple negative breast cancer. Background: In patients with triple-negative breast cancer (TNBC), pathology complete response (pCR) to neoadjuvant chemotherapy (NAC) is associated with improved prognosis. This prospective study was designed and powered to investigate the ability of interim (18)F-fluorodeoxyglucose positron emission tomography/computed tomography ((18)FDG-PET/CT) to predict pathology outcomes to NAC early during treatment.
Patients And Methods: Consecutive TNBC women underwent (18)FDG-PET/CT at baseline and after two courses of NAC. Maximum standardised uptake value (SUV(max)) in the primary tumour and lymph nodes at each examination and the evolution (ΔSUV(max)) between the two scans were measured. NAC was continued irrespective of PET results. Correlations between PET parameters and pathology response, and between PET parameters and event-free survival (EFS), were examined.
Results: Fifty patients without distant metastases were enroled. At completion of NAC, surgery showed pCR in 19 patients, while 31 had residual tumour. Mean follow-up was 30.3 months. Thirteen patients, all with residual tumour, experienced relapse. Of all assessed clinical, biological and PET parameters, ΔSUV(max) in the primary tumour was the most predictive of pathology results (p<0.0001; Mann-Whitney-U test) and EFS (p=0.02; log rank test). A threshold of 42% decrease in SUV was identified because it offered the best accuracy in predicting EFS. There were 32 metabolic responders (⩾ 42% decrease in SUV(max)) and 18 non-responders. Within responders, the pCR rate was 59% and the 3-year EFS 77.5%. In non-responders, the pCR rate was 0% and the 3-year EFS 47.1%.
Conclusion: Interim (18)FDG can early predict the inefficacy of NAC in TNBC patients. It shows promise as a potential contributory biomarker in these patients.
abstract_id: PUBMED:32297448
Subtype-Guided 18 F-FDG PET/CT in Tailoring Axillary Surgery Among Patients with Node-Positive Breast Cancer Treated with Neoadjuvant Chemotherapy: A Feasibility Study. Background: The purpose of this study was to investigate the value of 18 [F]-fluorodeoxyglucose (18 F-FDG) positron emission tomography/computed tomography (PET/CT) in tailoring axillary surgery by predicting nodal response among patients with node-positive breast cancer after neoadjuvant chemotherapy (NAC).
Methods: One hundred thirty-three patients with breast cancer with biopsy-confirmed nodal metastasis were prospectively enrolled. 18 F-FDG PET/CT scan was performed before NAC (a second one after two cycles with baseline maximum standardized uptake value [SUVmax ] ≥2.5), and a subset of patients underwent targeted axillary dissection (TAD). All the patients underwent axillary lymph node dissection (ALND). The accuracy was calculated by a comparison with the final pathologic results.
Results: With the cutoff value of 2.5 for baseline SUVmax and 78.4% for change in SUVmax , sequential 18 F-FDG PET/CT scans demonstrated a sensitivity of 79.0% and specificity of 71.4% in predicting axillary pathologic complete response with an area under curve (AUC) of 0.75 (95% confidence interval, 0.65-0.84). Explorative subgroup analyses indicated little value for estrogen receptor (ER)-negative, human epidermal growth factor receptor 2 (HER2)-positive patients (AUC, 0.55; sensitivity, 56.5%; specificity, 50.0%). Application of 18 F-FDG PET/CT could spare 19 patients from supplementary ALNDs and reduce one of three false-negative cases in TAD among the remaining patients without ER-negative/HER2-positive subtype.
Conclusion: Application of the subtype-guided 18 F-FDG PET/CT could accurately predict nodal response and aid in tailoring axillary surgery among patients with node-positive breast cancer after NAC, which includes identifying candidates appropriate for TAD or directly proceeding to ALND. This approach might help to avoid false-negative events in TAD.
Implications For Practice: This feasibility study showed that 18 [F]-fluorodeoxyglucose (18 F-FDG) positron emission tomography/computed tomography (PET/CT) could accurately predict nodal response after neoadjuvant chemotherapy (NAC) among patients with breast cancer with initial nodal metastasis except in estrogen receptor-negative, human epidermal growth factor receptor 2-positive subtype. Furthermore, the incorporation of 18 F-FDG PET/CT can tailor subsequent axillary surgery by identifying patients with residual nodal disease, thus sparing those patients supplementary axillary lymph node dissection. Finally, we have proposed a possibly feasible flowchart involving 18 F-FDG PET/CT that might be applied in post-NAC axillary evaluation.
abstract_id: PUBMED:33354172
Role of 18F-fluorodeoxyglucose positron emission tomography/computed tomography in the evaluation of breast carcinoma: Indications and pitfalls with illustrative case examples. Whole-body 18F-fluorodeoxyglucose positron emission tomography (PET) has been used extensively in the last decade for the primary staging and restaging and to assess response to therapy in these patients. We aim to discuss the diagnostic performance of PET/computed tomography in the initial staging of breast carcinoma including the locally advanced disease and to illustrate its role in restaging the disease and in the assessment of response to therapy, particularly after the neoadjuvant chemotherapy. Causes of common pitfalls during image interpretations will be also discussed.
abstract_id: PUBMED:33082686
Rare Case of Docetaxel-Induced Myositis Detected on Fluorodeoxyglucose Positron Emission Tomography/Computed Tomography in a Patient with Carcinoma Breast. We report an extremely rare case of acute inflammatory myopathy during combination chemotherapy with docetaxel and transtuzumab for metastatic breast carcinoma in a 44-year-old female patient. Despite the significant response in the follow-up fluorodeoxyglucose (FDG) positron emission tomography/computed tomography, of the underlying malignancy to the chemotherapeutic regimen, there was diffusely increased FDG uptake in the upper and lower limb muscles with associated painful, proximal muscle weakness. These symptoms regressed after the discontinuation of docetaxel and the administration of corticosteroids, suggesting it to be the drug-induced myositis.
abstract_id: PUBMED:21830087
18F-fluorodeoxyglucose positron emission tomography optimizes neoadjuvant chemotherapy for primary breast cancer to achieve pathological complete response. Background: To assess the usefulness of positron emission tomography combined with computed tomography using (18)F-fluorodeoxyglucose (FDG PET/CT) for optimizing chemotherapy during neoadjuvant chemotherapy for primary breast cancer.
Methods: One hundred and eight patients (110 tumors) with breast cancer (≥2 cm, stages II and III) received neoadjuvant chemotherapy consisting of an anthracycline-based regimen and taxane. The maximal value of the baseline standardized uptake value (SUV) and the change in SUV after four cycles of an anthracycline-based regimen relative to baseline SUV were assessed for predicting pathological complete response (pCR) after sequential taxane.
Results: Tumors with pCR had significantly higher baseline SUV (9.3 ± 3.7 SD) compared to those with non-pCR (7.2 ± 3.8 SD) (p = 0.02), but there was a considerable overlap between two groups. On PET scan after four cycles of chemotherapy, thirty-three patients (33.7%) with a 72.1% or greater reduction in SUV were considered as responders and the performance in predicting pCR had a sensitivity of 88.9% and specificity of 78.7%.
Conclusion: The baseline SUV could not be a useful indicator for predicting pCR due to the wide range in sensitivity. On the other hand, a relative change in SUV after completion of an anthracycline-based regimen could be useful for predicting pCR.
Answer: Yes, positron emission tomography/computed tomography (PET/CT) with the dual tracers fluorine-18 fluoroestradiol (18F-FES) and fluorodeoxyglucose (18F-FDG) can potentially predict neoadjuvant chemotherapy response of breast cancer. A pilot study indicated that 18F-FES PET/CT might be feasible for predicting the response of neoadjuvant chemotherapy (NAC). In this study, the maximum standardized uptake value (SUVmax) of 18F-FES was lower in responders compared to non-responders, suggesting its potential in predicting outcomes. However, the study also noted that further research is needed to determine whether the use of dual tracers 18F-FES and 18F-FDG has complementary value (PUBMED:24205151).
Additionally, other studies have shown that 18F-FDG PET/CT has moderately high sensitivity and specificity in early detection of responders from nonresponders and can be applied in the evaluation of breast cancer response to neoadjuvant chemotherapy (PUBMED:23714689). Furthermore, complete metabolic response (CMR) on interim 18F-FDG PET/CT has been associated with better long-term survival in patients with breast cancer undergoing NAC, especially for hormone receptor-positive tumors and triple-negative breast cancer (PUBMED:28377466).
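The interim-PET response in such studies is typically quantified as the relative fall in SUVmax, that is, ΔSUVmax = (baseline SUVmax − interim SUVmax) / baseline SUVmax; in triple-negative breast cancer, for example, a decrease of at least 42% after two cycles of chemotherapy best predicted outcome, with a pathologic complete response rate of 59% in metabolic responders versus 0% in non-responders (PUBMED:24841218).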
It is also important to note that while PET/CT is useful in evaluating clinical response, it may not be sufficient to predict a complete pathological response on its own, and other radiological images should be evaluated in conjunction with PET/CT findings when planning axillary surgery (PUBMED:36629643).
In summary, PET/CT with dual tracers 18F-FES and 18F-FDG shows promise in predicting neoadjuvant chemotherapy response in breast cancer, but further studies are required to fully establish its clinical value and to determine the best approach for its use in conjunction with other diagnostic tools.
Instruction: Do modern spectacles endanger surgeons?
Abstracts:
abstract_id: PUBMED:17435558
Do modern spectacles endanger surgeons? The Waikato Eye Protection Study. Background: Despite documented cases of infectious disease transmission to medical staff via conjunctival contamination and widespread recommendation of protective eyewear use during surgical procedures, a large number of surgeons rely on their prescription spectacles as sole eye protection. Modern fashion spectacles, being of increasingly slim design, may no longer be adequate in this role.
Methods: A survey was conducted among the surgeons at Waikato Hospital from December 7, 2004 to February 1, 2005, to assess current operating theater eyewear practices and attitudes. Those who wore prescription spectacles were asked to assume a standardized "operating position" from which anatomic measurements were obtained. These data were mathematically analyzed to determine the degree of palpebral fissure protection conferred by their spectacles.
Results: Of 71 surgical practitioners surveyed, 45.1% required prescription lenses for operating, the mean spectacle age being 2.45 years; 84.5% had experienced prior periorbital blood splashes; 2.8% had previously contracted an illness attributed to such an event; 78.8% of participants routinely used eye protection, but of the 27 requiring spectacles, 68.0% used these as their sole eye protection. Chief complaints about safety glasses and facial shields were fogging, poor comfort, inability to wear spectacles underneath, and unavailability. Our model predicted that 100%, 92.6%, 77.8%, and 0% of our population were protected by their spectacles laterally, medially, inferiorly, and superiorly, respectively.
Conclusions: Prescription spectacles of contemporary styling do not provide adequate protection against conjunctival blood splash injuries. Our model predicts the design adequacy of currently available purpose-designed protective eyewear, which should be used routinely.
abstract_id: PUBMED:11623545
Physicians and surgeons in Saragossa during the modern age. Number, social and family structure Documentation at the College of Physicians and Surgeons of Saragossa, scattered throughout many archives, made it possible to trace the evolution of the number of physicians and surgeons in the city of Saragossa in the Modern Age with regard to the number of inhabitants of this city. Also studied are the possible causes of increases or decreases in their numbers, and the proportions of physicians and surgeons to inhabitants are compared with figures from other Spanish regions. By studying a 1723 census, the social and family structures of the different health professions in Saragossa are analyzed. Comparisons of these figures to the structures of other professions made it possible to determine the different social level of each structure. The social level of physicians was the same as that of apothecaries, whereas it was higher than that of surgeons and veterinarians and lower than that of legal professionals, notaries and jurists.
abstract_id: PUBMED:25425093
Do recycled spectacles meet the refractive needs of a developing country? Purpose: The aim was to compare the power of spectacles donated to a recycled spectacle program to the custom-made spectacle refractive prescriptions dispensed in a developing country.
Methods: Two hundred consecutive prescriptions were audited in an optical dispensary in Timor-Leste, a developing nation. These refractions were compared against measurements of 2,075 wearable donated spectacles. We determined how many of the 200 prescriptions could be matched to a donated spectacle measurement, how many donated spectacles could be tried for each prescription and how long it would take to find the matched spectacles.
Results: There were 1,854 donated spectacles identified as being suitable for comparison with the 200 refractive prescriptions. Twenty-nine out of 200 prescriptions (14.5 per cent) were matched to at least one pair of donated spectacles.
Conclusion: Recycling all spectacles is not cost-effective in a developing country that has the ability to make custom-made spectacles and dispense ready-made spectacles.
abstract_id: PUBMED:24397254
Clinical outcomes following the dispensing of ready-made and recycled spectacles: a systematic literature review. Uncorrected refractive error is the leading cause of global visual impairment. Given resource constraints in developing countries, the gold standard method of refractive error correction, custom-made spectacles, is unlikely to be available for some time. Therefore, ready-made and recycled spectacles are in wide use in the developing world. To ensure that refractive error interventions are successful, it is important that only appropriate modes of refractive error correction are used. As a basis for policy development, a systematic literature review was conducted of interventional studies analysing visual function, patient satisfaction and continued use outcomes of ready-made and recycled spectacles dispensed to individuals in developing countries with refractive errors or presbyopia. PubMed and CINAHL were searched by MESH terms and keywords related to ready-made and recycled spectacle interventions, yielding 185 non-duplicated papers. After applying exclusion criteria, eight papers describing seven studies of clinical outcomes of dispensing ready-made spectacles were retained for analysis. The two randomised controlled trials and five non-experimental studies suggest that ready-made spectacles can provide sufficient visual function for a large portion of the world's population with refractive error, including those with astigmatism and/or anisometropia. The follow-up period for many of the studies was too short to confidently comment on patient satisfaction and continued-use outcomes. No studies were found that met inclusion criteria and discussed recycled spectacles. The literature also notes concerns about quality and cost effectiveness of recycled spectacles, as well as their tendency to increase developing countries' reliance on outside sources of help. In light of the findings, the dispensing of ready-made spectacles should be favoured over the dispensing of recycled spectacles in developing countries.
abstract_id: PUBMED:24448013
Working spectacles for sorting mail. Background: Sorting mail into racks is visually demanding work for postmen. This can result in backward inclination of the head, which is more pronounced for those who use progressive addition lenses.
Objective: To evaluate the effects of customized working spectacles on the physical workload of postmen.
Methods: Twelve male postmen sorted mail on two occasions: once using their private progressive spectacles and once using customized sorting spectacles with inverted progressive lenses. Postures and movements of the head, upper back, neck, and upper arms were measured by inclinometry. The muscular load of the trapezius was measured by surface electromyography.
Results: With the customized sorting spectacles, both the backward inclination of the head and backward flexion of the neck were reduced (3°), as well as the muscular load of the right upper trapezius, compared to sorting with private spectacles. However, with the sorting spectacles, there was a tendency for increased neck forward flexion, and increased sorting time.
Conclusion: The reduction in workload may reduce the risk of developing work-related musculoskeletal disorders, owing to the beneficial reduction in backward inclination of the head. However, the tendency for increased forward flexion of the neck may offset these positive effects, and the magnitude of any risk reduction is difficult to predict, especially since quantitative exposure-response data are lacking. Alternative working spectacles with inverted near progressive lenses ought to be evaluated; they should still reduce backward inclination of the head and might not cause any increase in forward flexion.
abstract_id: PUBMED:37349118
Which is superior for postural stability: contact lens or spectacles? Clinical Relevance: The visual system plays an important role in providing postural balance. Visual input must have good quality to ensure proper balance.
Background: The aim of this work is to compare the use of soft contact lenses and spectacles in terms of postural stability.
Methods: Patients who wore both soft spherical or toric contact lenses and spectacles were examined between February and July, 2021. A detailed ophthalmic examination, including contact lens evaluation, was performed. The aim was to fully correct the refractive error and to prescribe the most appropriate spectacle and contact lens correction. After 1 month of use, patients were subjected to the balance test. The balance tests were repeated using the Biodex Balance System (Biodex Inc. Shirley, New York, USA), first with contact lenses and then with spectacles, 15 minutes later. Static and dynamic postural stability indices were compared.
Results: Thirty patients were included in the study. The mean age of the study group was 31.33 ± 4.54 (26-40) years. All patients had myopic refractive errors (20 patients with myopia and 10 patients with myopia and astigmatism). The mean spherical equivalent was -2.95 ± 1.81 (-4.50-(-0.50)) D. Static stability index score was found to be statistically significantly better in tests with contact lenses (p = 0.004). Among the dynamic postural stability parameters, overall stability index and antero-posterior stability index (APSI) scores with contact lenses were better than with spectacles, but the difference was not statistically significant (p > 0.05 for both). Medio-lateral stability index (MLSI) score was better in tests with contact lenses (p < 0.001).
Conclusion: Contact lenses may provide better static and dynamic postural balance than spectacles in young patients with myopic refractive errors.
abstract_id: PUBMED:29770491
Customised spectacles using 3-D printing technology. Background: This study describes a novel method of customised spectacles prototyping and manufacturing using 3-D printing technology.
Methods: The procedure for manufacturing customised spectacles using 3-D printing technology in this study involved five steps: patient selection; using surface topography; 3-D printing of the phantom model; 3-D designing of the spectacles; and 3-D printing of the spectacles.
Results: The effective time required for 3-D printing of the spectacles was 14 hours. The spectacles weighed 7 g and cost AUD$160.00 to manufacture. The 3-D-printed spectacles fitted precisely onto the face and were considered to provide a superior outcome compared with conventional spectacles. Optical alignment, good comfort and acceptable cosmesis were achieved. One month after fitting, the 3-D-printed spectacles did not require further changes.
Conclusion: Customised 3-D-printed spectacles can be created and applied to patients with facial deformities. As a significant number of children with facial deformities require spectacle correction, it is essential to provide appropriate frames for this group of patients. The 3-D printing technique described herein may offer a novel and accurate option. It is also feasible to produce customised spectacles with this technique to maximise optical alignment and comfort in special conditions.
abstract_id: PUBMED:37376831
Assessment of optical quality of ready-made reading spectacles for presbyopic correction. Purpose: Many presbyopic patients in both developed and developing countries use ready-made reading spectacles for their near vision correction even though the quality of these spectacles cannot always be assured. This study assessed the optical quality of ready-made reading spectacles for presbyopic correction in comparison with relevant international standards.
Methods: A total of 105 ready-made reading spectacles with powers ranging from +1.50 to +3.50 dioptres (D) in +0.50 D steps were randomly procured from open markets in Ghana and assessed for their optical quality, including induced prisms and safety markings. These assessments were done in line with the International Organization for Standardization (ISO 16034:2002 [BS EN 14139:2010]) as well as the standards used in low-resource countries.
Results: All lenses (100%) had significant induced horizontal prism that exceeded the tolerance levels stipulated by the ISO standards, while 30% had vertical prism greater than the specified tolerances. The highest prevalence of induced vertical prism was seen in the +2.50 and +3.50 D lenses (48% and 43%, respectively). When compared with less conservative standards, as suggested for use in low-resource countries, the prevalence of induced horizontal and vertical prism reduced to 88% and 14%, respectively. While only 15% of spectacles had a labelled centration distance, none had any safety markings per the ISO standards.
Conclusion: The high prevalence of ready-made reading spectacles in Ghana that fail to meet optical quality standards indicates the need for more robust, rigorous and standardised protocols for assessing their optical quality before they are sold on the market. This will alleviate unwanted side effects including asthenopia associated with their use. There is also the need to intensify public health awareness on the use of ready-made reading spectacles, especially by patients with significant refractive errors and ocular pathologies.
abstract_id: PUBMED:27649581
Mechanical properties of protective spectacles fitted with corrective lenses. The majority of commercially available corrective spectacles used by workers do not provide effective eye protection against mechanical hazards in the workplace. One of the risks commonly occurring during work is hitting the head on some protruding elements, such as components of machines, buildings or tree branches in a forest. Because of the considerable weight of the human head and the speed of movement during impact, this type of accident may be very serious. This article presents a method of testing the mechanical strength of corrective lenses, simulating the results of an impact of the head on elements of workplaces. The results of tests of commercially available materials used for the construction of corrective and protective spectacles are also presented and discussed.
abstract_id: PUBMED:36203913
Pattern of Ametropia, Presbyopia, and Barriers to the Uptake of Spectacles in Adult Patients Attending a General Hospital in Kaduna State. Aim: The aim of the study was to assess the pattern of ametropia, presbyopia, and the barriers to the uptake of spectacles among adult patients attending Hajiya Gambo Sawaba General Hospital (HGSGH), Zaria, Nigeria.
Materials And Methods: This study was a descriptive hospital-based study. The study population was selected by systematic sampling over a 3-month period (July-September, 2016). A questionnaire was administered for sociodemographic data, patients' history, and refraction. Patients with visual acuity less than 6/12 or who required at least +1.00DS to read N8 were given spectacle prescriptions and assessed after 2 months to confirm if they purchased the spectacles.
Results: Two hundred and nine patients were assessed: 124 (59.3%) females and 85 (40.7%) males. The age ranged from 18 to 75 years, with a mean of 45.5 years. One hundred and seventy-five (83.7%) had ametropia, 100 (47.8%) had presbyopia, and 66 (31.6%) had both. Astigmatism was the most common ametropia, 118 (67.4%), followed by hypermetropic astigmatism 33 (18.9%) and myopia 24 (13.7%). Only 97 (46.6%) patients purchased their spectacles, and 92 of them were using their spectacles. One hundred and twelve (52.2%) patients did not buy their spectacles, with cost being the most common barrier to the uptake of spectacles.
Conclusion: There is a high burden of ametropia and presbyopia among patients attending HGSGH, Zaria. Cost of spectacles and 'no felt need' were found to affect spectacle uptake. Health information and availability of low-cost spectacles are important strategies in reducing the burden of uncorrected ametropia and presbyopia in Kaduna State.
Answer: Modern spectacles, particularly those with contemporary slim styling, do not provide adequate protection against conjunctival blood splash injuries for surgeons. The Waikato Eye Protection Study found that while 78.8% of participants routinely used eye protection, 68.0% of the surgeons who required prescription spectacles used them as their sole eye protection. The study's geometric model predicted that spectacles protected 100% of wearers laterally, 92.6% medially, and 77.8% inferiorly, but 0% superiorly. This indicates that prescription spectacles, especially those with modern slim designs, are not sufficient to protect surgeons from infectious disease transmission through conjunctival contamination during surgical procedures. Surgeons are advised to use purpose-designed protective eyewear that offers more comprehensive coverage (PUBMED:17435558).
Instruction: Is there an alternative explanation to post-myocardial infarction emergence of mitral regurgitation?
Abstracts:
abstract_id: PUBMED:24383379
Is there an alternative explanation to post-myocardial infarction emergence of mitral regurgitation? A CMR-LGE observational study. Background And Aim Of The Study: Post-myocardial infarction (MI) mitral regurgitation (MR) is thought to be due to a passive, rather than active, remodeling of the mitral valve apparatus and its relationship with other cardiac structures that contribute to MR. Standard contrast-enhanced magnetic resonance (CMR) late gadolinium enhancement (LGE) may be sensitive to non-myocardial pathology involving the mitral valve leaflets. It was hypothesized that the presence of mitral valve enhancement (MVE) on LGE imaging in post-MI patients would be associated with an increased incidence of MR.
Methods: The presence or absence of MVE was noted in patients presenting for CMR with MI and non-MI indications requiring LGE. A chi-square analysis was performed for non-contiguous variables; SPSS (Chicago) software was utilized for the statistical analysis.
Results: Eighty-seven patients (54 males, 33 females) underwent LGE-CMR studies utilizing a 1.5 T GE scanner with MultiHance gadolinium contrast administration. LGE+ (present) was noted in 68 patients, and LGE- (absent) in 19 patients. Post-MI patterns of LGE+ were noted in 51 patients and LGE- in 36 patients; MVE+ was noted in 39 patients and MVE- in 48; and MR+ was present in 67 patients and absent (MR-) in 20 patients. MVE was observed chiefly in post-MI patients (33/51; 65%) and infrequently in non-post-MI patients (6/36; 17%; chi2 = 17.8, p < 0.001, power = 0.995). Further, MR was present more frequently in patients with MVE (36/39; 92%) compared to patients without MVE (31/48; 65%; chi2 = 7.8, p = 0.005, power = 0.814).
Conclusion: MVE is present in a large number of post-MI patients but rarely in non-post-MI patients. Post-MI patients with, rather than without, MVE are far more likely to have MR. These observations suggest a specific but as-yet unknown reactive process that may contribute to mitral leaflet remodeling in post-MI patients, potentially contributing to an increased incidence of MR in post-MI patients.
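The chi-square statistics in the Results above come from standard 2x2 contingency-table tests. As an illustrative re-computation (not the authors' original SPSS analysis), the published counts for mitral valve enhancement, 33 of 51 post-MI patients versus 6 of 36 non-post-MI patients, reproduce the reported value of about 17.8 with SciPy's Yates-corrected test:

```python
# Re-computes the chi-square for MVE by post-MI status from the counts in the
# Results (33/51 post-MI vs 6/36 non-post-MI). Illustrative only; the authors
# report using SPSS for their analysis.
from scipy.stats import chi2_contingency

table = [
    [33, 51 - 33],  # post-MI:     MVE+, MVE-
    [6, 36 - 6],    # non-post-MI: MVE+, MVE-
]
chi2, p, dof, expected = chi2_contingency(table)  # Yates correction is the default for 2x2 tables
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")  # approximately 17.8, p < 0.001
```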
abstract_id: PUBMED:20301988
Bilateral papillary muscle infarction in a chagasic patient. We report a case of a 59-year-old patient diagnosed with chagasic cardiomyopathy, who manifested sudden heart failure while hospitalized, evolving to death due to cardiogenic and septic shock. Anatomical-pathological studies revealed infarction of the papillary muscles together with histological changes compatible with 48 to 72 hours of evolution. Pulmonary edema was considered the cause of death, probably related to mitral regurgitation of ischemic nature. The cause of the papillary muscle infarction was not elucidated by study of the coronary tree, which presented no signs of recent thrombosis. Explanation for the papillary muscle infarction in this patient may be related to the presence of alterations in microcirculation represented by vasodilation and the consequent phenomenon of "stealing" of blood flow in this territory to the detriment of other areas, or due to the fact that the papillary muscles may represent convergence zones of two distinct coronary circulations.
abstract_id: PUBMED:33061141
Ischemic mitral regurgitation: the way ahead is a step back. Ischemic mitral regurgitation (IMR) has a profound negative effect on survival of patients following myocardial infarction. It occurs when the closing forces are overpowered by the tethering forces as a consequence of ventricular remodeling. Surgeons sought to correct moderate and severe IMR by mitral annuloplasty. Though short-term results were encouraging, survival beyond 2 years was not. Higher recurrence rates were also noted with severe mitral regurgitation (MR). Parameters defining the severity of IMR were initially formulated in 2003 and were revised in 2014, enabling intervention in moderate MR. With the lack of positive medium- and long-term evidence, the 2017 guidelines raised the bar, discouraging intervention in moderate IMR. Current guidelines have taken a conservative stance, advocating repair only for severe MR and very symptomatic patients. Until fresh evidence emerges, surgical enthusiasm for repair of IMR has to be restrained.
abstract_id: PUBMED:17669756
Papillary muscle elevation: an alternative subvalvular procedure for selective relocation of displaced posterior papillary muscle in posteroinferior infarction. Several subvalvular procedures have been developed for relocating one or both displaced papillary muscles. We describe an original procedure--papillary muscle elevation--in which we relocated the posterior papillary muscle selectively, through a small inferior ventriculotomy, and reduced the coaptation depth from 5 mm to zero. Our procedure can be considered for cases of posteroinferior infarction, which is a frequent cause of ischemic mitral regurgitation.
abstract_id: PUBMED:11759322
Extracorporeal circulation with danaparoid sodium for valve replacement in thrombocytopenia induced by type II heparin A type II heparin-induced thrombocytopenia (HIT) was diagnosed in a 64-year-old woman at day 20 of intravenous unfractionated heparin (UFH) therapy, given after myocardial infarction treated by angioplasty and intracoronary stent. The infarction was complicated by a mitral insufficiency that led to a mitral valve replacement. Cardiopulmonary bypass was successfully performed with sodium danaparoid (Orgaran), as an alternative to UFH, without thrombotic or haemorrhagic complications and the follow-up was uneventful.
abstract_id: PUBMED:401541
The role of vasodilator therapy in heart failure. This article has attempted to summarize the current status of the therapeutic use of vasodilator drugs in acute and chronic heart failure. It is apparent from the increasing number of publications in this area that this alternative to more standard forms of therapy is likely to find a permanent and important place in the management of patients with heart disease. It should also be apparent that ideal drugs for the therapy of chronic heart failure are not yet available. Nevertheless, it is probable that such drugs will emerge and become at least as important as the routine use of digitalis in such patients.
abstract_id: PUBMED:35733709
Operative strategies for acute mitral regurgitation as a mechanical complication of myocardial infarction. Severe mitral regurgitation secondary to papillary muscle rupture is one of the mechanical complications after an acute myocardial infarction. Surgical strategies represent the cornerstone of treatment in this disease; in addition to surgical valve replacement, approaches involving surgical valve repair have been reported over time in different clinical scenarios to restore valve competency, improve cardiac function and reduce mechanical prosthesis-related risks. Moreover, in recent years, percutaneous trans-catheter procedures have emerged as an important alternative in high risk or inoperable patients.
abstract_id: PUBMED:33847321
Percutaneous mitral valve repair in acute mitral regurgitation. Acute mitral regurgitation is a life-threatening pathology. Nowadays, percutaneous mitral valve repair with the MitraClip device offers, in selected patients, a safe and effective therapeutic alternative to open surgery. Here, we report the case of an 82-year-old woman with a lateral ST-elevation myocardial infarction causing severe acute mitral regurgitation, who was treated with an urgent MitraClip procedure. We also discuss the echocardiographic assessment of acute mitral regurgitation and review the available literature and possible management of this complex scenario.
abstract_id: PUBMED:36380806
Post-infarct mitral insufficiency: when to resort to reparative surgery, when to the mitral clip. Severe mitral insufficiency is a feared complication that develops in 10-12% of patients with myocardial infarction and ST elevation. It results from the rupture of the papillary muscle or is secondary to an acute remodelling of the left ventricle in its entirety or of the infarcted region. The mortality of patients with acute clinical onset reaches 50%. The ideal treatment of acute mitral insufficiency is controversial and still a source of debate. Most of these patients are at high surgical risk or inoperable; therefore, they are treated conservatively and have a poor prognosis. In these candidates, percutaneous edge-to-edge repair can be considered an alternative.
abstract_id: PUBMED:30280061
Use of MitraClip for Postmyocardial Infarction Mitral Regurgitation Secondary to Papillary Muscle Dysfunction. Papillary muscle rupture after acute myocardial infarction (AMI) is a dreadful complication and it is associated with five percent of deaths following AMI. Surgery is the recommended treatment of choice; however, it is usually deferred due to the high risk of mortality. MitraClip implantation using a transcatheter approach is an alternative option for patients with severe mitral regurgitation (MR) following AMI or those with high operative risk. We report a case of a 68-year-old male patient who developed severe MR secondary to AMI and underwent successful mitral valve repair using the MitraClip device.
Answer: Yes, there is an alternative explanation to the post-myocardial infarction emergence of mitral regurgitation (MR). A study utilizing contrast-enhanced magnetic resonance (CMR) late gadolinium enhancement (LGE) imaging found that mitral valve enhancement (MVE) was present in a significant number of post-myocardial infarction (MI) patients and was associated with an increased incidence of MR. This suggests a specific reactive process that may contribute to mitral leaflet remodeling in post-MI patients, potentially leading to MR (PUBMED:24383379).
Additionally, in a case of a chagasic patient, bilateral papillary muscle infarction was observed without signs of recent thrombosis in the coronary tree. This suggests that alterations in microcirculation or the convergence zones of two distinct coronary circulations could be an explanation for papillary muscle infarction, which could lead to ischemic MR (PUBMED:20301988).
Moreover, ischemic mitral regurgitation (IMR) can occur when the closing forces of the mitral valve are overpowered by the tethering forces as a consequence of ventricular remodeling post-MI. This understanding has led to various surgical and procedural strategies to address the condition, including subvalvular procedures like papillary muscle elevation (PUBMED:17669756), valve replacement in the presence of heparin-induced thrombocytopenia (PUBMED:11759322), and percutaneous interventions such as the MitraClip device (PUBMED:33847321, PUBMED:36380806, PUBMED:30280061).
These findings indicate that post-MI MR can emerge not only due to passive remodeling of the mitral valve apparatus but also due to active pathological processes affecting the mitral valve leaflets, papillary muscles, and the subvalvular apparatus, as well as changes in the microcirculation and ventricular remodeling.
Instruction: Heterotaxy syndrome with functional single ventricle: does prenatal diagnosis improve survival?
Abstracts:
abstract_id: PUBMED:17062216
Heterotaxy syndrome with functional single ventricle: does prenatal diagnosis improve survival? Background: Despite improved outcome for many single ventricle lesions, staged reconstruction for heterotaxy syndrome with a functional single ventricle continues to have a high mortality. Prenatal identification of heterotaxy syndrome may improve long-term survival.
Methods: Our database was reviewed from January 1996 to December 2004 for patients with heterotaxy syndrome. Assessment was made for prenatal diagnosis and echocardiographic characteristics of heterotaxy syndrome. We sought to assess the accuracy of fetal echocardiography in the diagnosis of heterotaxy syndrome and determine whether prenatal diagnosis and other risk factors have an impact on survival in patients with heterotaxy syndrome.
Results: Of 81 patients that met criteria, 43 (53%) had prenatal diagnosis. Prenatal diagnosis had high specificity and positive predictive value for all findings but had low sensitivity for anomalous pulmonary veins. Among the 70 patients born alive, survival was 60% with median follow-up of 51.4 months (range, 6.5 to 109.7 months). Prenatal diagnosis did not improve survival (p = 0.09). None of the 11 patients with complete heart block (CHB) survived past 3 months of age. Two patients underwent heart transplantation as their first intervention and have survived. CHB and anomalous pulmonary venous connection were associated with shorter duration of survival.
Conclusions: Prenatal diagnosis of heterotaxy syndrome does not improve survival in patients who undergo single ventricle reconstruction. The most potent risk factors for poor outcome (CHB, anomalous pulmonary veins) are likely not impacted by identification in utero. In light of the poor outcome, cardiac transplantation as an initial therapy may be a viable option for some patients.
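Survival comparisons like the one above (prenatal versus postnatal diagnosis, p = 0.09) are typically made with Kaplan-Meier estimates and a log-rank test. The sketch below shows the general approach with the lifelines library; the durations and event indicators are synthetic placeholders, since individual-level follow-up data are not published in the abstract:

```python
# Kaplan-Meier / log-rank sketch with lifelines. Synthetic placeholder data;
# not the study's individual-level follow-up times.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
t_prenatal = rng.exponential(scale=80, size=40)    # follow-up in months
e_prenatal = rng.integers(0, 2, size=40)           # 1 = death observed
t_postnatal = rng.exponential(scale=60, size=30)
e_postnatal = rng.integers(0, 2, size=30)

kmf = KaplanMeierFitter()
kmf.fit(t_prenatal, event_observed=e_prenatal, label="prenatal diagnosis")
print(kmf.survival_function_at_times([12, 36, 60]))

result = logrank_test(t_prenatal, t_postnatal,
                      event_observed_A=e_prenatal, event_observed_B=e_postnatal)
print(f"log-rank p = {result.p_value:.3f}")
```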
abstract_id: PUBMED:29945508
Single Ventricle and Total Anomalous Pulmonary Venous Connection: Implications of Prenatal Diagnosis. Background: Single ventricle (SV) patients with total anomalous pulmonary venous connection (TAPVC) are at high risk. Given the limited published data available, we examined outcomes and the implications of a prenatal diagnosis of SV/TAPVC.
Methods: A single-center, retrospective review was performed in neonates with SV/TAPVC from 1998 to 2014, identified through institutional databases. Patient demographic, perioperative, and follow-up data were collected.
Results: Thirty-four eligible infants with SV/TAPVC were identified (mean birth weight: 3.0 kg). The TAPVC types were supracardiac (59%), infracardiac (21%), mixed (12%), and cardiac (9%). Heterotaxy syndrome was present in 25 (74%) infants. A prenatal diagnosis of SV was made in 26 (76%) infants, with TAPVC identified in 12 (35%). Seventeen (50%) had obstructed TAPVC within the first 48 hours of life; 7 of these patients had obstructed TAPVC identified prenatally. There were two preoperative deaths. Overall survival for the cohort was 65% at 1 year and 50% at 3 years. Survival in the obstructed group was significantly worse compared to the unobstructed group (47% vs 81% at 1 year; 27% vs 73% at 3 years, P = .01). Obstructed TAPVC and a prenatal prediction of obstructed TAPVC were significantly associated with postoperative mortality ( P = .01 and .03, respectively).
Conclusions: Patients with SV/TAPVC remain a high-risk group, with obstructed TAPVC a significant risk factor for mortality. Prenatal diagnosis of TAPVC in SV patients is challenging, but given those with obstructed TAPVC are especially at high risk, improved prenatal diagnostic techniques in this group may enhance counseling/delivery planning.
abstract_id: PUBMED:2023434
Double-inlet ventricle presenting in infancy. I. Survival without definitive repair. Survival before definitive operations was studied in 191 infants with double-inlet ventricle presenting before 1 year of age (1973 to 1988, median follow-up 8.5 years). The morphologic spectrum was broad, with a great prevalence of associated lesions. The actuarial survival rate before definitive repair was 57% at 1 year, 43% at 5 years, and 42% at 10 years, worse than prior reports because of the younger age at entry into our series. Analysis of univariate risk factors established that right atrial isomerism (18% of the group, relative risk 2.9), common atrioventricular orifice (42%, 2.0), pulmonary atresia (20%, 3.4), obstruction of the systemic outflow tract (18%, 2.5), and extracardiac anomalous pulmonary venous connection (13%, 3.1) were strongly associated with poorer survival. Pulmonary stenosis (40%, 0.35), balanced pulmonary blood flow (9%, 0.40), and presentation at an older age (3%, 0.42 to 0.18) were beneficial (p less than 0.05 to 0.0001). Multivariate analysis allowed the creation of patient-specific curves for prediction of survival for different anatomic and physiologic variants of double-inlet ventricle. A simple additive index was then derived from the multivariate Cox coefficients to enable stratification of risk for these morphologic subgroups of patients and so assist in the making of clinical decisions in infancy.
abstract_id: PUBMED:34366230
Improved heart transplant survival for children with congenital heart disease and heterotaxy syndrome in the current era: An analysis from the pediatric heart transplant society. Background: Challenges exist with heterotaxy due to the complexity of heart disease, abnormal venous connections, and infection risks. This study aims to understand heart transplant outcomes for children with heterotaxy.
Methods: All children with congenital heart disease listed for transplant from 1993 to 2018 were included. Those with and without heterotaxy were compared. Waitlist outcomes and survival post-listing and transplant were analyzed. Post-transplant risk factors were identified using multiphase parametric hazard modeling.
Results: There were 4814 children listed, of whom 196 (4%) had heterotaxy. Heterotaxy candidates were older (5.8 ± 5.7 vs 4.2 ± 5.5 years, p < 0.01), listed at a lower urgency status (29.8% vs 18.4%, p < 0.01), more commonly single ventricle physiology (71.3% vs 59.2%, p < 0.01), and less often supported by mechanical ventilation (22% vs 29.1%, p < 0.05) or extracorporeal membrane oxygenation (3.6% vs 7.5%, p < 0.05). There were no differences in waitlist outcomes of transplant, death, or removal. Overall, post-transplant survival was worse for children with heterotaxy: one-year survival 77.2% vs 85.1%, with and without heterotaxy, respectively. Heterotaxy was an independent predictor for early mortality in the earliest era (1993-2004), HR 2.09, CI 1.16-3.75, p = 0.014. When stratified by era, survival improved with time. Heterotaxy patients had a lower freedom from infection and from severe rejection, but no difference in vasculopathy or malignancy.
Conclusions: Mortality risk associated with heterotaxy is mitigated in the recent transplant era. Early referral may improve waitlist outcomes for heterotaxy patients who otherwise have a lower status at listing. Lower freedom from both infection and severe rejection after transplant in heterotaxy highlights the challenges of balancing immune suppression.
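The hazard ratio and confidence interval reported above (HR 2.09, CI 1.16-3.75) follow the usual relationship between a Cox model coefficient and its standard error: HR = exp(beta) and the 95% CI is exp(beta +/- 1.96*SE). A small worked check, in which the standard error is back-calculated from the published interval purely for illustration:

```python
# Back-calculates the log hazard ratio and its standard error from the
# published HR and 95% CI (2.09, 1.16-3.75), illustrating the
# exp(beta +/- 1.96*SE) relationship. The SE itself is not reported.
import math

hr, lo, hi = 2.09, 1.16, 3.75
beta = math.log(hr)
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # CI width on the log scale
print(f"beta = {beta:.3f}, SE = {se:.3f}")
print(f"reconstructed 95% CI: {math.exp(beta - 1.96 * se):.2f} "
      f"to {math.exp(beta + 1.96 * se):.2f}")
```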
abstract_id: PUBMED:21560840
Medium and long-term outcomes of Fontan operation. Background: The Fontan operation had been proposed as the final palliative surgery in the patients with single ventricle physiology. Even though modifications of the operation were developed to improve outcomes, long-term complications remain significant with time. The present study reviewed long-term survival rate, morbidities associated with time, and risk factors during the follow-up period after Fontan operation.
Material And Method: A retrospective study was conducted. Every patient who underwent the Fontan operation at Siriraj Hospital between January 1987 and December 2007 and had available data was included in the present study. The data was collected until the most recent follow-up in December 2008. Demographic data, diagnosis, echocardiographic data, cardiac catheterization data, surgical data, type of modified Fontan procedure, and perioperative data were collected. The follow-up clinical data, cardiac investigation data, complications, and management were also collected and analyzed.
Results: Survival rates were 88.7%, 85.3%, and 83.8% at 1 year, 5 years, and 10 years, respectively. The median follow-up time was 4.75 years (0-17.45). The 10-year survival rates for tricuspid atresia, single ventricle, and heterotaxy syndrome were 94.5%, 79%, and 83.3%, respectively, and were not significantly different (p = 0.09). The 10-year survival rates of patients who underwent lateral tunnel, extracardiac conduit, and atriopulmonary connection were 80.7%, 88%, and 84.3%, respectively. A mean pulmonary artery pressure of more than 18 mmHg was the only factor that affected the survival rate after Fontan surgery (p = 0.008). The incidence of postoperative arrhythmia was 7.9%. Age at operation, diagnosis, type of operation, fenestration, and systemic EDP or PVR before operation did not significantly affect the survival rate. Diagnosis and type of surgery did not affect long-term outcomes regarding arrhythmia, re-intervention, systemic atrioventricular valve regurgitation, or systemic ventricular dysfunction. Patients had a good survival rate after the Fontan operation.
Conclusion: Cardiac diagnosis did not significantly affect medium- and long-term survival, freedom from arrhythmia, re-intervention, or systemic atrioventricular valve regurgitation in post-Fontan patients. The type of Fontan operation did not affect the long-term survival rate or long-term complications. A mean pulmonary artery pressure of more than 18 mmHg was the only risk factor for reduced survival.
abstract_id: PUBMED:9464611
Long-term follow-up of surgical patients with single-ventricle physiology: prognostic anatomical determinants. Although univentricular or biventricular repair is not always possible for hearts with single-ventricle physiology, it has been proven to produce a satisfactory outcome. It is hypothesized that patient prognosis can be predicted by anatomical diagnosis at presentation. Between 1961 and 1995, 158 patients with single-ventricle physiology, including tricuspid atresia and hypoplastic left heart syndrome, were referred to the authors' institute, and underwent 260 surgical interventions. Follow-up was 99% complete. Patient survival and anatomical risk factors were examined by the Kaplan-Meier method and multivariate analysis, including the Cox proportional hazards model. Mean (s.e.m.) actuarial survival rates at 1, 5, 10 and 20 years following birth were 70.3(3.6)%, 56.3(4.0)%, 48.8(4.2)%, and 40.9(4.7)%, respectively. Definitive palliation was attempted in 38 patients (univentricular in 35 and biventricular in three). Multivariate analysis identified systemic ventricular outflow tract obstruction, mitral atresia, situs ambiguus, and pulmonary vein drainage tract obstruction as independent prognostic factors for overall survival. Visceral heterotaxy was the only independent risk factor for lack of application or failure (death or take-down within 30 days of operation) of univentricular or biventricular repair. In conclusion, these anatomical factors in hearts with single-ventricle physiology affect long-term mortality, despite multiple and heterogeneous surgical efforts.
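The multivariate risk-factor results in this abstract come from Cox proportional hazards modelling. The sketch below shows how such a model is typically fitted with the lifelines library; the data frame is synthetic and the covariate names only mirror the kinds of anatomical factors discussed, so this is not the study's actual analysis:

```python
# Sketch of a multivariate Cox proportional hazards fit with lifelines.
# Synthetic data; covariate names are illustrative, not the study's dataset.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 150
df = pd.DataFrame({
    "years_followup": rng.exponential(scale=8, size=n),
    "died": rng.integers(0, 2, size=n),
    "outflow_obstruction": rng.integers(0, 2, size=n),
    "mitral_atresia": rng.integers(0, 2, size=n),
    "situs_ambiguus": rng.integers(0, 2, size=n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_followup", event_col="died")
cph.print_summary()  # hazard ratios (exp(coef)) with 95% CIs for each covariate
```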
abstract_id: PUBMED:33239120
Unique foetal diagnosis of aorto-pulmonary collaterals in right atrial isomerism. Right atrial isomerism is associated with complex cardiac malformations, particularly single-ventricle lesions; right atrial isomerism is rarely associated with aorto-pulmonary collateral arteries. We report a foetal diagnosis of right atrial isomerism, with an unbalanced atrioventricular septal defect, pulmonary stenosis, total anomalous venous drainage, and significant aorto-pulmonary collaterals diagnosed at 22 weeks' gestation.
abstract_id: PUBMED:1380966
Factors influencing survival of patients with heterotaxy syndrome undergoing the Fontan procedure. Objectives: This study was undertaken to determine those factors that may influence survival in patients with heterotaxy syndrome undergoing the Fontan procedure.
Background: The Fontan procedure remains the preferred palliative procedure for patients with heterotaxy syndrome. Although the mortality rate has improved for patients without this syndrome undergoing the Fontan procedure, it remains high for patients with heterotaxy syndrome.
Methods: The medical records of 20 consecutive pediatric patients with asplenia (n = 12) and polysplenia (n = 8) who underwent the Fontan procedure between January 1, 1986 and December 31, 1990 were reviewed. Anatomic and hemodynamic data were collected, as well as data on types of surgical palliative procedures and on outcome of the Fontan procedure.
Results: There were two early and two late deaths for a total mortality rate of 20% in the patients with heterotaxy syndrome, as compared with 8.5% for the patients without this syndrome who underwent the Fontan procedure during the same time period. Factors that significantly increased the risk of the Fontan procedure in these patients were 1) preoperative findings of greater than mild atrioventricular valve regurgitation, 2) hypoplastic pulmonary arteries, and 3) mean pulmonary artery pressure greater than or equal to 15 mm Hg after 6 months of age.
Conclusions: This study suggests that the outcome of the Fontan procedure in patients with heterotaxy syndrome may be improved by early protection of the pulmonary vascular bed, despite the existence of other cardiac anomalies.
abstract_id: PUBMED:23175683
Long-term results of treatments for functional single ventricle associated with extracardiac type total anomalous pulmonary venous connection. Objectives: Surgical outcomes of patients with functional single ventricle have improved, though those for patients whose condition is complicated by extracardiac type total anomalous pulmonary venous connection (TAPVC) remain poor. We retrospectively reviewed our 21 years of surgical experiences with this challenging group.
Methods: From 1990 to 2010, 48 consecutive patients with functional single ventricle complicated by extracardiac TAPVC (26 males, 46 with right atrial isomerism) underwent initial surgical palliation at our centre. The median age and body weight at surgery were 69 days and 3.5 kg, respectively. The type of TAPVC was supracardiac in 31 patients, infracardiac in 14 and mixed type in 3. TAPVC was repaired in 25 patients before bidirectional Glenn (BDG) and 18 at BDG, while it remained in 3 patients. Since 2007, stent implantation for obstructive drainage veins for patients with preoperative pulmonary venous obstruction and sutureless marsupialization for relief of postoperative pulmonary venous stenosis (PVS) have been initiated. The mean follow-up period was 4.2 ± 5.1 years.
Results: The overall survival rates at 1, 3 and 5 years after the initial surgical intervention were 58.3%, 41.1% and 31.3%, respectively. Sixteen patients (33.3%) achieved the Fontan operation. Freedom from postoperative PVS at 1 and 3 years after repair was 68.7% and 63.4%, respectively. Univariate analysis found that infracardiac TAPVC (P = 0.036), coexisting major aortopulmonary collaterals (P = 0.017), and TAPVC repair before BDG (P = 0.036) all reduced survival, and multivariable analysis indicated repair of TAPVC before BDG as the only risk factor (P = 0.032). The occurrence of postoperative PVS did not reduce survival, but it had a significant negative impact on achieving the Fontan operation (P = 0.008). The cumulative survival rate did not improve across surgical eras.
Conclusions: Surgical outcomes of patients with functional single ventricle undergoing the repair of extracardiac TAPVC in the neonatal period due to obstruction of the venous drainage pathway remain poor. Stent implantation for obstructive drainage veins to delay the timing of surgical correction and sutureless marsupialization as relief of postoperative PVS are expected to improve the late outcomes; however, the effect is still limited.
abstract_id: PUBMED:23265439
Predictors of repair and outcome in prenatally diagnosed atrioventricular septal defects. Background: Atrioventricular septal defect (AVSD) is a common prenatal diagnosis with great heterogeneity. Few data guide counseling about outcomes and types of repair. The aim of this study was to describe predictors of survival and repair type in prenatally diagnosed atrioventricular septal defect.
Methods: A retrospective review of fetuses diagnosed with AVSD was conducted.
Results: Of 106 fetuses with AVSDs, outcome data were available in 88, 66 with situs solitus and 22 with heterotaxy. Overall mortality was 52%, with 66% surviving to birth, 58% to 1 month, and 47% to 1 year. There was increased mortality in patients with heterotaxy (P = .02). In patients with situs solitus, there was increased mortality in those with unbalanced AVSDs (P < .01). The presence of a chromosomal abnormality did not affect mortality (P = .34). In pregnancies with intent to continue (n = 60), 97% of patients survived to birth, 86% to 1 month, and 69% to 1 year. The presence of heterotaxy (P < .01) or other complex intracardiac disease (P < .01) was associated with single-ventricle repair. In patients with unbalanced AVSDs who underwent surgery, two-ventricle repair was uncommon (29%), but it was performed more often in those with restrictive ventricular septal defects (71%). In the 16 patients with unbalanced AVSDs who underwent multiple studies, 12 showed no change in left atrioventricular valve/right atrioventricular valve ratio from first to the final fetal echocardiographic study; in three patients, this ratio worsened, and in one patient, it improved.
Conclusions: Prenatal diagnosis of AVSD should prompt assessment for chromosomal abnormalities, additional cardiac anomalies, and heterotaxy syndrome. Heterotaxy, unbalanced AVSD, and the need for single-ventricle repair are associated with a poor prognosis. Left atrioventricular valve/right atrioventricular valve ratio early in gestation does not always predict this ratio at delivery.
Answer: The available abstracts suggest that prenatal diagnosis of heterotaxy syndrome with a functional single ventricle does not necessarily improve survival. Specifically, a study reviewing patients with heterotaxy syndrome from January 1996 to December 2004 found that prenatal diagnosis did not improve survival in patients who underwent single ventricle reconstruction (PUBMED:17062216). The study also noted that the most potent risk factors for poor outcome, such as complete heart block (CHB) and anomalous pulmonary venous connection, are likely not impacted by identification in utero.
Another study examining single ventricle (SV) patients with total anomalous pulmonary venous connection (TAPVC) found that these patients remain a high-risk group, and obstructed TAPVC is a significant risk factor for mortality. While prenatal diagnosis of TAPVC in SV patients is challenging, it may enhance counseling and delivery planning, especially since those with obstructed TAPVC are at particularly high risk (PUBMED:29945508).
In the context of heterotaxy syndrome and heart transplantation, a study from the Pediatric Heart Transplant Society indicated that while mortality risk associated with heterotaxy is mitigated in the recent transplant era, early referral may improve waitlist outcomes for heterotaxy patients who otherwise have a lower status at listing (PUBMED:34366230).
Overall, the evidence suggests that while prenatal diagnosis may not directly improve survival rates, it can be valuable for planning and managing expectations. It is also important to note that the presence of specific risk factors, such as obstructed TAPVC, can significantly influence outcomes, and prenatal diagnosis may help in identifying such risk factors.
Instruction: Does blood pressure differ between users and non-users of hormone replacement therapy?
Abstracts:
abstract_id: PUBMED:12361193
Does blood pressure differ between users and non-users of hormone replacement therapy? The Women's Health In the Lund Area (WHILA) Study. Objective: To examine whether blood pressure over 24 h differed between postmenopausal women receiving and not receiving hormone replacement therapy.
Methods: One group of postmenopausal women receiving hormone replacement (n = 32) and one group of postmenopausal women not receiving hormone replacement (n = 32) underwent non-invasive 24-h ambulatory blood pressure monitoring. They were randomly selected from the first 2000 women screened in an ongoing project in Lund, Sweden. The study was designed to detect a difference of 5 mmHg in diastolic blood pressure over 24 h with a power of 80% and 5% significance (two-tailed test).
Results: The hormone replacement women had a mean (SD) office blood pressure of 128/76 (12/8) mmHg and the non-hormone replacement 126/78 (16/8) mmHg. Mean ambulatory blood pressure over 24 h, day and night, in the hormone replacement group was 121/72 (11/7), 126/76 (12/8), 111/64 (11/7) mmHg. The corresponding values in the non-hormone replacement group were 118/72 (12/7), 124/77 (12/7), and 107/64 (13/7) (p > 0.40 for diastolic blood pressure and p > 0.20 for systolic blood pressure). Mean heart rate over 24 h was 71 (7) and 73 (8) beats/min in the hormone and non-hormone replacement groups, respectively.
Conclusion: There was no difference in blood pressure or heart rate between the hormone replacement and non-hormone replacement postmenopausal women, either over 24 h or during the day or night. Hormone replacement in postmenopausal women seems not to have an influence on blood pressure, but of course we are aware that this is a cross-sectional study, which has its limitations.
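The group size of n = 32 is consistent with the stated design target of detecting a 5 mmHg difference in 24-h diastolic pressure with 80% power at a two-sided 5% significance level. A quick sketch of that sample-size calculation using the normal-approximation formula is shown below; the within-group SD of 7 mmHg is our assumption, chosen to be consistent with the diastolic SDs reported in the Results rather than taken from the abstract:

```python
# Two-sample sample-size calculation (normal approximation) for the stated
# design: detect a 5 mmHg difference, 80% power, alpha = 0.05 two-sided.
# The assumed SD of 7 mmHg is an assumption consistent with the reported
# diastolic SDs; it is not stated explicitly in the abstract.
from math import ceil
from scipy.stats import norm

delta = 5.0        # smallest difference to detect (mmHg)
sigma = 7.0        # assumed within-group SD (mmHg)
alpha, power = 0.05, 0.80

z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)
n_per_group = ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)
print(f"required n per group: about {n_per_group}")  # roughly 31, close to the 32 enrolled
```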
abstract_id: PUBMED:16638035
Postmenopausal hormone therapy: who now takes it and do they differ from non-users? Background: Considerable changes in hormone therapy use have taken place in the last few years.
Aims: To determine current usage of postmenopausal hormone therapy and assess the trend and rate of change in hormone therapy usage over the last 13 years. Additionally, to assess differences between current users and non-users for health-related and risk factor variables.
Methods: Questions regarding hormone therapy use have been included in an annual face to face population health survey of South Australians eight times since 1991. In 2004, additional questions on health status and quality of life were included.
Results: In 2004, current use of hormone therapy was 15.4, 19.8 and 31.2% in all women over 40, 50 and 50-59 years, respectively. Ever use of hormone therapy among all women over 50 years was 46.5% with a mean duration of use of 7.46 years. Hormone therapy users did not differ from non-users in chronic disease indicators, body mass index, complementary medicine or therapist use, other health service use, socioeconomic status or quality of life. Increased hormone therapy use was associated with higher income, better educated, employed and married women in their sixth decade. Current use has varied over the years, with an increase to 2000, but a drop in 2003 and 2004.
Conclusion: Apart from menopausal symptoms, there is no evidence to support differences between users and non-users in terms of quality of life or health characteristics, suggesting a need for more appropriate selection of women for hormone therapy.
abstract_id: PUBMED:16007297
Quality of life in users and non-users of hormone replacement therapy Objective: To compare the quality of life in postmenopausal women who were users and non-users of hormone replacement therapy (HRT).
Methods: A cross-sectional study was conducted on postmenopausal women aged between 40 and 65 years, who had been menopausal for up to 15 years. Women considered HRT users were those who had undergone this type of treatment for at least six months. Non-users of HRT were those who had not received this type of treatment during the last six months. Two hundred and seven women were included in the study: 106 users and 101 non-users of HRT. Sociodemographic, clinical and behavioral characteristics were assessed. The Kupperman Menopausal Index was applied to rate the intensity of climacteric symptoms and the Medical Outcomes Study 36-item Short-Form Health Survey (SF-36) was applied to assess women's quality of life. For data analysis, a Student's t test, a chi-square analysis, a Fisher's exact test and a Mann-Whitney test were used.
Results: The mean age of HRT users was 52.6 +/- 4.9 years and the mean age of non-users of HRT was 54.3 +/- 4.7 years (p=0.01). There was a statistically significant difference regarding marital status (p=0.04). HRT users reported a lower frequency of moderate and severe climacteric symptoms (p=0.001). Of the eight quality of life domains evaluated, only vitality scored below 50 (45) in both groups. There were no differences between groups regarding the SF-36 components.
Conclusions: Postmenopausal women who were users and non-users of HRT presented a good quality of life. There were no differences between users and non-users of hormone therapy.
abstract_id: PUBMED:7650462
Hormone replacement therapy: characteristics of users and non-users in a British general practice cohort identified through computerised prescribing records. Study Objective: To assess the feasibility of recruiting a cohort of women, including long term users of postmenopausal hormone replacement therapy (HRT), through computerised general practice prescribing records, and to compare clinical and demographic characteristics of users and non-user controls.
Design: Cross sectional analysis of questionnaire data.
Setting: Subjects were recruited through 17 general practices in the Oxfordshire, south west Thames, and north west Thames regions that contributed to the VAMP Research Database.
Participants: A total of 2964 women aged 45-64 years were identified. Altogether 1482 were long term (> 1 year) users of HRT and 1482 were non-user controls: 1037 (70%) of the users and 819 (55.3%) of the controls agreed to participate and provided questionnaire data.
Main Results: Users of HRT were more likely to have undergone hysterectomy than controls. Most women with a history of hysterectomy used unopposed oestrogen, while those with intact uteri generally used a combination of oestrogen and a progestagen. Among women who had undergone hysterectomy, HRT users did not differ significantly from controls over a range of demographic and clinical characteristics but they were more likely to be past users of oral contraceptives. Among women with intact uteri, users were similar to controls in terms of reported clinical characteristics, but were of higher social class and were more likely to be past users of oral contraceptives and to have had a mammogram after the age of 50. Compared with the general population, all categories of women recruited to the study were of higher social class and exhibited more health conscious behaviours.
Conclusions: Electronic general practice prescribing records provide a feasible and efficient method for recruiting women to a cohort of HRT. Women who agreed to participate in this study were not representative of the general population, emphasising the importance of internal controls in such a study. Among participants, HRT users who had not undergone hysterectomy showed evidence of better health than non-users on some dimensions. In the whole sample, however, there were no appreciable differences in social class and self reported health indicators between users and controls.
abstract_id: PUBMED:11511137
Hormone replacement therapy and longitudinal changes in blood pressure in postmenopausal women. Background: The incidence of hypertension in postmenopausal women exceeds that in age-matched men. Longitudinal studies relating hormone replacement therapy (HRT) to blood pressure changes are sparse.
Objective: To investigate the association between HRT and longitudinal changes in blood pressure in postmenopausal women.
Design: Longitudinal observational study.
Setting: Community-dwelling volunteers.
Patients: 226 healthy, normotensive postmenopausal women from the Baltimore Longitudinal Study of Aging with a mean (+/-SD) age of 64 +/- 10 years were followed for 5.7 +/- 5.3 years. Seventy-seven women used both estrogen and progestin, and 149 used neither.
Measurements: Lifestyle variables, blood pressure, and traditional cardiovascular risk factors were measured at baseline and approximately every 2 years thereafter.
Results: Systolic blood pressure at baseline was similar in HRT users and nonusers (133.9 +/- 16.0 mm Hg vs. 132.4 +/- 14.8 mm Hg). Over time, average systolic blood pressure increased less in HRT users than nonusers, independent of other cardiovascular risk factors, physical activity, and alcohol use. For example, HRT users who were 55 years of age at their first Baltimore Longitudinal Study of Aging visit experienced a 7.6-mm Hg average increase in systolic blood pressure over 10 years; in contrast, the average increase in nonusers was 18.7 mm Hg. The lesser increase in systolic blood pressure in HRT users was more evident at older age. Diastolic blood pressure, which did not change significantly over time in either group, was not associated with HRT.
Conclusion: Postmenopausal women taking HRT have a smaller increase in systolic blood pressure over time than those not taking HRT. This difference is intensified at older ages.
abstract_id: PUBMED:11907927
Health status of hormone replacement therapy users and non-users as determined by the SF-36 quality-of-life dimension. Background: The objective of this study was to compare the health status of women who use and do not use hormone replacement therapy (HRT).
Method: The 1994 South Australian Health Omnibus Survey (a population health interview survey) was used to administer the short form-36 health survey questionnaire (SF-36) to users and non-users of HRT. A representative sample of 813 women aged 40 years and older were interviewed. The response rate of the survey was 72.4%. Eight health dimensions of the SF-36 were measured: physical functioning, social functioning, role limitations owing to emotional problems, role limitations owing to physical problems, mental health, vitality, pain and general health.
Results: The mean score for all eight health dimensions was in the bottom 50% of the population for HRT users while non-users were in the upper 50%. Users of HRT had significantly poorer scores for physical limitations, body pain, general health, vitality, social functioning and mental health.
Conclusion: Women who use HRT are less healthy than non-users when measured by a generic health status measure.
abstract_id: PUBMED:10678540
Blood pressure and menopausal transition: the Atherosclerosis Risk in Communities study (1987-95). Objective: Blood pressure changes during menopausal transition have not been studied previously using a biracial sample. We investigated whether menopausal transition was associated with change in blood pressure in African-American or white women.
Design, Setting And Participants: The prospective multicenter study, the Atherosclerosis Risk In Communities (ARIC) Study (1987-95) was utilized. Included were never-users of hormone replacement therapy (3,800 women, 44% of the original sample).
Main Outcome Measure: Changes in blood pressure were adjusted for baseline age and body mass index, baseline blood pressure, antihypertensive use, ARIC field center and weight change. The menopausal transition group was compared to the non-transition group, separately, by ethnicity.
Results: Women undergoing the menopausal transition did not differ significantly in regard to systolic blood pressure change [5.2, 95% confidence interval (CI) 4.0-6.4] from non-transitional women (4.6, 95% CI 4.0-5.2); adjustment for age, baseline systolic blood pressure and other factors did not alter this finding. Transitional women had significantly less diastolic blood pressure change (-0.5, 95% CI -1.1 to 0.2) than non-transitional women (-2.0, 95% CI -2.4 to -1.7, P= 0.000) but, after adjustment for other covariates, the result was not significant. African-American women had significantly (P= 0.003) higher systolic blood pressure change compared to white women, but this difference became non-significant (P= 0.21) after restricting the sample to women younger than 55 years of age. Interactions between menopausal transition and ethnicity were not significant, either in systolic blood pressure or diastolic blood pressure change.
Conclusion: Menopausal transition is not associated with significant blood pressure change in African-American or white women.
abstract_id: PUBMED:9277495
Systemic hemodynamic determinants of blood pressure in women: age, physical activity, and hormone replacement. We tested the hypothesis that the age-related changes in systemic hemodynamic determinants of arterial blood pressure in healthy women are related to physical activity and hormone replacement status. We studied 66 healthy, normotensive premenopausal (21-35 yr) and postmenopausal (50-72 yr) sedentary and endurance-trained women under supine resting conditions. Mean blood pressure was 7 mmHg higher in sedentary post- compared with premenopausal women, which was associated with an 11-mmHg higher systolic blood pressure, a 25% lower stroke volume and cardiac output, and a 50% higher systemic vascular resistance (all P < 0.05). Absolute (ml) levels of total blood volume did not differ across age, but resting oxygen consumption was approximately 35% lower in the postmenopausal women (P < 0.05). The elevations in mean and systolic blood pressures with age were similar in endurance-trained runners, but, in contrast to the sedentary women, the elevations were not associated with significant age-related differences in cardiac output, stroke volume, or oxygen consumption, and only a modest (15%) increase in systemic vascular resistance (P = 0.06). Postmenopausal swimmers demonstrated the same systemic hemodynamic profile as that of postmenopausal runners, indicating a nonspecific influence of the endurance-trained state. Blood pressure and its systemic hemodynamic determinants did not differ in postmenopausal users compared with those of nonusers of hormone replacement therapy. Resting oxygen consumption was the strongest physiological correlate of cardiac output in the overall population (r = 0.65, P < 0.001). We conclude that 1) the increases in arterial blood pressure at rest with age in healthy normotensive women are not obviously related to habitual physical activity status; 2) the systemic hemodynamic determinants of the age-related elevations in blood pressure are fundamentally different in sedentary vs. active women, possibly due, in part, to an absence of decline in resting oxygen consumption in the latter; and 3) systemic hemodynamics at rest are not different in healthy normotensive postmenopausal users vs. nonusers of estrogen-based hormone replacement.
abstract_id: PUBMED:31542834
Circulating estrogens and postmenopausal ovarian and endometrial cancer risk among current hormone users in the Women's Health Initiative Observational Study. Purpose: Menopausal hormone therapy (MHT) use induces alterations in circulating estrogens/estrogen metabolites, which may contribute to the altered risk of reproductive tract cancers among current users. Thus, the current study assessed associations between circulating estrogens/estrogen metabolites and ovarian and endometrial cancer risk among MHT users.
Methods: We conducted a nested case-control study among postmenopausal women using MHT at baseline in the Women's Health Initiative Observational Study (179 ovarian cancers, 396 controls; 230 endometrial cancers, 253 controls). Multivariable logistic regression was utilized to estimate odds ratios and 95% confidence intervals overall and by subtype.
Results: Estrogen/estrogen metabolite levels were not associated with overall or serous ovarian cancer risk, examined separately. However, unconjugated estradiol was positively associated with non-serous ovarian cancer risk [quintile 5 vs. quintile 1: 3.01 (1.17-7.73); p-trend = 0.03; p-het < 0.01]. Endometrial cancer risk was unrelated to estrogen/estrogen metabolite levels among women who took combined estrogen/progestin therapy (EPT).
Conclusions: These findings provide novel evidence that may support a heterogeneous hormonal etiology across ovarian cancer subtypes. Circulating estrogens did not influence endometrial cancer risk among women with EPT-induced high-estrogen levels. Larger studies are needed to delineate the relationship between ovarian/endometrial cancer subtypes and estrogen levels in the context of MHT use.
abstract_id: PUBMED:12468133
Hysteroscopic findings in postmenopausal abnormal uterine bleeding: a comparison between HRT users and non-users. Objectives: The aim of our study was to investigate hysteroscopic findings in a sample of 410 menopausal women (hormonal replacement therapy, HRT users n = 219 and HRT non-users n = 191) and to evaluate the relationship between the presence of intrauterine disease, the use of HRT and the presence of AUB.
Methods: Two hundred and nineteen women on HRT underwent standard office hysteroscopy by means of the Hamou hysteroscope (in 94 cases for abnormal uterine bleeding (AUB) and in 125 cases for periodic endometrium monitoring). One hundred and ninety-one women who had never received HRT were submitted to office hysteroscopy (154 for AUB and 37 for other reasons).
Results: Intrauterine diseases are more frequent in patients who do not use HRT (P = 0.02). Endometrial polyps were a frequent finding, present in 30% of the sample (23.7% of HRT users and 30.8% of HRT non-users). Myomas were present in 8.7% of all patients examined (6.8% of HRT users and 11% of HRT non-users). Irregular bleeding in menopause is often associated with endouterine abnormalities: in symptomatic patients the frequency of endouterine diseases was 41% while in asymptomatic patients it was 28% (P = 0.003). In patients taking HRT (n = 219) endouterine disease was demonstrated in 37% with AUB and in 26% without AUB (P = 0.07).
Conclusion: Benign intrauterine diseases (endometrial polyps and submucous myomas) are more frequent in postmenopausal women who do not use HRT. In patients taking HRT irregular bleeding is associated with intrauterine diseases; however, the absence of AUB does not exclude the presence of endometrial polyps or myomas.
Answer: The studies examining the relationship between blood pressure and hormone replacement therapy (HRT) use in postmenopausal women have yielded mixed results. According to the Women's Health In the Lund Area (WHILA) Study, there was no significant difference in blood pressure or heart rate between HRT users and non-users over 24 hours, during the day, or at night (PUBMED:12361193). Similarly, a study from the Baltimore Longitudinal Study of Aging found that postmenopausal women taking HRT had a smaller increase in systolic blood pressure over time compared to non-users, suggesting a potential protective effect of HRT on systolic blood pressure increase with age (PUBMED:11511137).
In contrast, the Atherosclerosis Risk in Communities study did not find a significant association between menopausal transition and changes in blood pressure among never-users of HRT, indicating that menopausal transition may not be associated with significant blood pressure changes (PUBMED:10678540). Additionally, a study on systemic hemodynamic determinants of blood pressure in women found no differences in systemic hemodynamics at rest between healthy normotensive postmenopausal users and nonusers of estrogen-based HRT (PUBMED:9277495).
Overall, the evidence does not consistently show a difference in blood pressure between HRT users and non-users. Some studies suggest no impact of HRT on blood pressure, while others indicate a potential protective effect against systolic blood pressure increase with age. However, it is important to note that individual studies have their limitations, and factors such as the type of HRT used, duration of therapy, and individual health profiles could influence outcomes. Therefore, more research may be needed to fully understand the relationship between HRT use and blood pressure changes. |
Instruction: Is a patient's type of substance dependence (alcohol, drug or both) associated with the quality of primary care they receive?
Abstracts:
abstract_id: PUBMED:23540818
Is a patient's type of substance dependence (alcohol, drug or both) associated with the quality of primary care they receive? Background: Primary care clinicians' attitudes may differ based on patients' substance dependence type (alcohol, other drugs or both).
Aim: The aim of this study was to evaluate whether substance dependence type is associated with primary care quality (PCQ).
Methods: We tested the association between substance dependence type and six PCQ scales of the Primary Care Assessment Survey (PCAS) in multivariable linear regression models. We studied alcohol- and/or drug-dependent patients followed prospectively who reported having a primary care clinician (PCC; n = 427) in a primary care setting.
Results: We used the Composite International Diagnostic Interview-Short Form to assess substance dependence type and we used the PCAS questionnaire to measure primary care quality. Dependence type was significantly associated with PCQ for all PCAS scales except whole-person knowledge. For the significant associations, subjects with drug dependence (alone or together with alcohol) had lower observed PCAS scores compared with those with alcohol dependence only, except for preventive counselling.
Conclusions: Drug dependence was associated with worse PCQ for most domains. Understanding the reasons for these differences and addressing them may help improve the quality of primary care for patients with addictions.
abstract_id: PUBMED:35277305
Patient-centered primary care and receipt of evidence-based alcohol-related care in the national Veterans Health Administration. Background: Health care systems are increasingly integrating screening and care for unhealthy alcohol use into primary care settings. However, gaps remain in receipt of evidence-based care after the detection of unhealthy alcohol use. Patient-centered primary care may be an important determinant of alcohol-related care receipt, but its role is underexamined.
Methods: We examined associations between previously developed, clinic-level measures of patient-centered care (indicative of medical home model implementation) and receipt of alcohol-related care in a national cohort of VA patients who screened positive for unhealthy alcohol use (defined by AUDIT-C alcohol screen of ≥5; n = 568,909) for whom brief intervention is recommended. We also assessed alcohol-related care in a subsample of these patients with a past-year alcohol use disorder (AUD) diagnosis (n = 144,511) for whom specialty addictions care and medications are recommended. The study used modified Poisson models to assess associations between measures of patient-centered care and individual-level receipt of recommended alcohol-related care. We presented prevalence ratios (PR) and marginal probabilities to illustrate relative and absolute differences, respectively, in outcomes associated with clinic-level measures.
Results: Compared to patients in the lowest-ranked clinics, patients were more likely to receive brief intervention in clinics with the highest rankings of self-management support (PR: 1.06; 95% CI: 1.10, 1.11), communication (PR: 1.08; 95% CI: 1.04, 1.12), access (PR: 1.11; 95% CI: 1.06, 1.17), and care coordination (PR: 1.09; 95% CI: 1.03, 1.15). The study also observed a greater likelihood of receiving AUD medications among those receiving care at clinics with higher ratings of comprehensiveness (PR: 1.35; 95% CI: 1.10, 1.66) and shared decision-making (PR: 1.35; 95% CI: 1.12, 1.61); higher clinic-level access ratings were associated with specialty addictions care (PR: 1.15; 95% CI: 1.00, 1.32). Patients in the clinics with the highest summary patient-centered care ratings, compared to the lowest, had higher likelihoods of receiving brief intervention (PR: 1.07; 95% CI: 1.03, 1.12) and medications (PR: 1.16; 95% CI: 1.00, 1.35). The study did not identify any other statistically significant findings.
Conclusions: This observational study found that dimensions of patient-centered care were associated with increased receipt of recommended alcohol-related care. Future studies should investigate strategies to improve patients' experience of alcohol-related care.
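As a rough illustration of the "modified Poisson" approach used above (PUBMED:35277305), the sketch below fits a Poisson regression with a robust sandwich variance estimator so that exponentiated coefficients can be read as prevalence ratios for a binary outcome. The column names and simulated data are hypothetical placeholders, not the VA cohort.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "brief_intervention": rng.integers(0, 2, n),   # 1 = received recommended care
    "high_access_clinic": rng.integers(0, 2, n),   # 1 = top-rated clinic (hypothetical)
    "age": rng.integers(25, 85, n),
})

# Poisson family on a binary outcome plus robust (HC1) standard errors:
# the "modified Poisson" trick for estimating prevalence ratios directly
model = smf.glm("brief_intervention ~ high_access_clinic + age",
                data=df, family=sm.families.Poisson())
fit = model.fit(cov_type="HC1")

pr = np.exp(fit.params["high_access_clinic"])
ci = np.exp(fit.conf_int().loc["high_access_clinic"])
print(f"PR = {pr:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```

Prevalence ratios are often preferred over odds ratios when the outcome is common, as it is here, because the OR then overstates the relative difference in risk.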
abstract_id: PUBMED:21863331
Alcohol screening and brief intervention among drug users in primary care: a discussion paper. Background: Problem alcohol use is common among problem drug users (PDU) and associated with adverse health outcomes. Primary care has an important role in the overall stepped approach to alcohol treatment, especially screening and brief intervention (SBI).
Aim: To discuss three themes that emerged from an exploration of the literature on SBI for problem alcohol use in drug users attending primary care.
Methods: Material for this discussion paper was gathered from three biomedical databases (PubMed, PsycINFO and Cochrane library), conference proceedings and online resources of professional organisations or national health agencies.
Results: Themes discussed in this paper are: (a) the potential of primary care for delivery of alcohol SBIs to PDUs, (b) screening methods and (c) application of brief interventions to PDUs.
Conclusions: Although SBI improves health outcomes associated with problem alcohol use in the general population, further research is needed among high-risk patient groups, especially PDUs.
abstract_id: PUBMED:17362216
Primary care quality and addiction severity: a prospective cohort study. Background: Alcohol and drug use disorders are chronic diseases that require ongoing management of physical, psychiatric, and social consequences. While specific addiction-focused interventions in primary care are efficacious, the influence of overall primary care quality (PCQ) on addiction outcomes has not been studied. The aim of this study was to prospectively examine if higher PCQ is associated with lower addiction severity among patients with substance use disorders.
Study Population: Subjects with alcohol, cocaine, and/or heroin use disorders who initiated primary care after being discharged from an urban residential detoxification program.
Measurements: We used the Primary Care Assessment Survey (PCAS), a well-validated, patient-completed survey that measures defining attributes of primary care named by the Institute of Medicine. Nine summary scales cover two broad areas of PCQ: the patient-physician relationship (communication, interpersonal treatment, thoroughness of the physical exam, whole-person knowledge, preventive counseling, and trust) and structural/organizational features of care (organizational access, financial access, and visit-based continuity). Each of the three addiction outcomes (alcohol addiction severity (ASI-alc), drug addiction severity (ASI-drug), and any drug or heavy alcohol use) was derived from the Addiction Severity Index and assessed 6-18 months after PCAS administration. Separate longitudinal regression models included a single PCAS scale as the main predictor variable as well as variables known to be associated with addiction outcomes.
Main Results: Eight of the nine PCAS scales were associated with lower alcohol addiction severity at follow-up (p<or=.05). Two measures of relationship quality (communication and whole- person knowledge of the patient) were associated with the largest decreases in ASI-alc (-0.06). More whole-person knowledge, organizational access, and visit-based continuity predicted lower drug addiction severity (ASI-drug: -0.02). Two PCAS scales (trust and whole-person knowledge of the patient) were associated with lower likelihood of subsequent substance use (adjusted odds ratio, [AOR]=0.76, 95 percent confidence interval [95% CI]=0.60, 0.96 and AOR=0.66, 95 percent CI=0.52, 0.85, respectively).
Conclusion: Core features of PCQ, particularly those reflecting the quality of the physician-patient relationship, were associated with positive addiction outcomes. Our findings suggest that the provision of patient-centered, comprehensive care from a primary care clinician may be an important treatment component for substance use disorders.
abstract_id: PUBMED:18259871
Improving care for the treatment of alcohol and drug disorders. The Network for the Improvement of Addiction Treatment (NIATx) teaches alcohol and drug treatment programs to apply process improvement strategies and make organizational changes that improve quality of care. Participating programs reduce days to admission, increase retention in care, and spread the application of process improvement within their treatment centers. More generally, NIATx provides a framework for addressing the Institute of Medicine's six dimensions of quality care (i.e., safe, effective, patient-centered, efficient, timely, and equitable) in treatments for alcohol, drug, and mental health disorders. NIATx and its extensions illustrate how the behavioral health field can respond to the demand for higher quality treatment services.
abstract_id: PUBMED:34823726
Screening for Unhealthy Alcohol and Drug Use in General Medicine Settings. Unhealthy alcohol and drug use are among the top 10 causes of preventable death in the United States, but they are infrequently identified and addressed in medical settings. Guidelines recommend screening adult primary care patients for alcohol and drug use, and routine screening should be a component of high-quality clinical care. Brief, validated screening tools accurately detect unhealthy alcohol and drug use, and their thoughtful implementation can facilitate adoption and optimize the quality of screening results. Recommendations for implementation include patient self-administered screening tools, integration with electronic health records, and screening during routine primary care visits.
abstract_id: PUBMED:23781538
Obstacles to alcohol and drug care - are Medicare Locals the answer? Background: Harms related to alcohol and drug use impose an enormous cost on the community, yet most patients with substance use disorders do not receive care from primary healthcare providers. The establishment of a system of large primary healthcare organisations (Medicare Locals) across Australia provides an opportunity to address this service gap.
Objective: This article considers barriers to delivering alcohol and drug interventions from primary healthcare settings, strategies for their resolution, and the ensuing benefits for patients.
Discussion: Help seeking for alcohol and drug problems is low. Stigmatisation can be countered by policy development, training and support to increase staff awareness and skills, and building relationships with specialist services. Co-location, outreach clinics, and collaborative models simplify access, tailor intensity of interventions, and improve patient satisfaction and health outcomes. Screening and brief intervention at intake, with appropriate training and support for nursing staff, can advance the delivery of timely and effective care.
abstract_id: PUBMED:27482999
Unhealthy alcohol use in primary care patients who screen positive for drug use. Background: Unhealthy alcohol use (UAU) is common among people who use other drugs; however, little information is available about UAU among patients who screen positive for drugs in primary care, where the clinical priority might be assumed to be drug use. This study aimed at describing the occurrence of UAU and its association with substance use-related outcomes in such patients.
Methods: This cohort study is a secondary analysis of data from a randomized trial of brief intervention for primary care patients screening positive for drug use. UAU was assessed at baseline; the main independent variable was any heavy drinking day in the past month. Outcomes including drug use characteristics and substance use-related consequences were assessed at baseline and 6 months later.
Results: Of 589 primary care patients with drug use, 48% had at least 1 past-month heavy drinking day. The self-identified main drug was marijuana for 64%, cocaine for 18%, and an opioid for 16%. Any heavy drinking at baseline was negatively associated with number of days use of the main drug at 6 months (incidence rate ratio [IRR] = 0.75, 95% confidence interval [CI]: 0.62-0.91), but positively associated with the use of more than 1 drug (IRR = 1.73, 95% CI: 1.17-2.55) and unsafe sex (odds ratio [OR] = 1.90, 95% CI: 1.21-2.98).
Conclusion: Unhealthy alcohol use is common among patients identified by screening in primary care as using other drugs. Unexpectedly, UAU was negatively associated with days of main drug use. But, as expected, it was positively associated with other drug use characteristics and substance use-related consequences. These findings suggest that attention should be given to alcohol use among primary care patients who use other drugs.
abstract_id: PUBMED:34014326
Comparison of Methods for Alcohol and Drug Screening in Primary Care Clinics. Importance: Guidelines recommend that adult patients receive screening for alcohol and drug use during primary care visits, but the adoption of screening in routine practice remains low. Clinics frequently struggle to choose a screening approach that is best suited to their resources, workflows, and patient populations.
Objective: To evaluate how to best implement electronic health record (EHR)-integrated screening for substance use by comparing commonly used screening methods and examining their association with implementation outcomes.
Design, Setting, And Participants: This article presents the outcomes of phases 3 and 4 of a 4-phase quality improvement, implementation feasibility study in which researchers worked with stakeholders at 6 primary care clinics in 2 large urban academic health care systems to define and implement their optimal screening approach. Site A was located in New York City and comprised 2 clinics, and site B was located in Boston, Massachusetts, and comprised 4 clinics. Clinics initiated screening between January 2017 and October 2018, and 93 114 patients were eligible for screening for alcohol and drug use. Data used in the analysis were collected between January 2017 and October 2019, and analysis was performed from July 13, 2018, to March 23, 2021.
Interventions: Clinics integrated validated screening questions and a brief counseling script into the EHR, with implementation supported by the use of clinical champions (ie, clinicians who advocate for change, motivate others, and use their expertise to facilitate the adoption of an intervention) and the training of clinic staff. Clinics varied in their screening approaches, including the type of visit targeted for screening (any visit vs annual examinations only), the mode of administration (staff-administered vs self-administered by the patient), and the extent to which they used practice facilitation and EHR usability testing.
Main Outcomes And Measures: Data from the EHRs were extracted quarterly for 12 months to measure implementation outcomes. The primary outcome was screening rate for alcohol and drug use. Secondary outcomes were the prevalence of unhealthy alcohol and drug use detected via screening, and clinician adoption of a brief counseling script.
Results: Patients of the 6 clinics had a mean (SD) age ranging from 48.9 (17.3) years at clinic B2 to 59.1 (16.7) years at clinic B3, were predominantly female (52.4% at clinic A1 to 64.6% at clinic A2), and were English speaking. Racial diversity varied by location. Of the 93,114 patients with primary care visits, 71.8% received screening for alcohol use, and 70.5% received screening for drug use. Screening at any visit (implemented at site A) in comparison with screening at annual examinations only (implemented at site B) was associated with higher screening rates for alcohol use (90.3%-94.7% vs 24.2%-72.0%, respectively) and drug use (89.6%-93.9% vs 24.6%-69.8%). The 5 clinics that used a self-administered screening approach had a higher detection rate for moderate- to high-risk alcohol use (14.7%-36.6%) compared with the 1 clinic that used a staff-administered screening approach (1.6%). The detection of moderate- to high-risk drug use was low across all clinics (0.5%-1.0%). Clinics with more robust practice facilitation and EHR usability testing had somewhat greater adoption of the counseling script for patients with moderate-high risk alcohol or drug use (1.4%-12.5% vs 0.1%-1.1%).
Conclusions And Relevance: In this quality improvement study, EHR-integrated screening was feasible to implement in all clinics and unhealthy alcohol use was detected more frequently when self-administered screening was used at any primary care visit. The detection of drug use was low at all clinics, as was clinician adoption of counseling. These findings can be used to inform the decision-making of health care systems that are seeking to implement screening for substance use.
Trial Registration: ClinicalTrials.gov Identifier: NCT02963948.
abstract_id: PUBMED:9549604
Tobacco, alcohol, and drug use in a primary care sample: 90-day prevalence and associated factors. Background: Primary care settings are an ideal system in which to identify and treat substance use disorders.
Objective: To ascertain the prevalence of tobacco, alcohol, and drug use in the office of 88 primary care clinicians by gender, age and ethnicity.
Method: 21,282 adults ages 18-65 completed a self-administered Health Screening Survey while participating in a trial for early alcohol treatment.
Results: The period prevalence of tobacco use was 27%. For alcohol: abstainers 40%, low risk drinkers 38%, at-risk drinkers 9%, problem drinkers 8%, and dependent drinkers 5%. Twenty percent of the sample reported using illicit drugs five or more times in their lifetime and 5% reported current illicit drug use. There were marked differences in alcohol use disorders by age and ethnicity. The majority of persons who smoked reported the desire to cut down or stop using tobacco.
Significance: This is the first report on the combined prevalence of tobacco, alcohol and drug disorders in a large sample of persons attending community-based non-academic primary care clinics. This report confirms the high prevalence of these problems and suggests that patients will accurately complete a self-administered screening test such as the Health Screening Survey. The office procedures developed for this study provide Managed Care Organizations with a system of care that can be used to screen all persons for tobacco, alcohol and drug use disorders.
Answer: Yes, a patient's type of substance dependence—whether it is alcohol, drug, or both—is associated with the quality of primary care they receive. A study found that drug dependence was associated with worse primary care quality (PCQ) for most domains of the Primary Care Assessment Survey (PCAS) except for preventive counseling. Patients with drug dependence (alone or together with alcohol) had lower observed PCAS scores compared to those with alcohol dependence only. This suggests that understanding and addressing the reasons for these differences may help improve the quality of primary care for patients with addictions (PUBMED:23540818).
Additionally, primary care quality, particularly aspects reflecting the quality of the physician-patient relationship, was associated with positive addiction outcomes. Core features of primary care quality, such as communication and whole-person knowledge of the patient, were linked to lower addiction severity among patients with substance use disorders (PUBMED:17362216).
Moreover, patient-centered primary care, indicative of medical home model implementation, was found to be associated with increased receipt of recommended alcohol-related care. Dimensions of patient-centered care, such as self-management support, communication, access, and care coordination, were linked to a higher likelihood of patients receiving brief intervention for unhealthy alcohol use. Higher ratings of comprehensiveness and shared decision-making were associated with a greater likelihood of receiving alcohol use disorder (AUD) medications (PUBMED:35277305).
These findings highlight the importance of the quality and patient-centeredness of primary care in managing substance dependence and suggest that improving these aspects of care could lead to better health outcomes for patients with substance use disorders. |
Instruction: The impact of incarceration on obesity: are prisoners with chronic diseases becoming overweight and obese during their confinement?
Abstracts:
abstract_id: PUBMED:33618770
Short-term impact of the COVID-19 confinement measures on health behaviours and weight gain among adults in Belgium. Background: In Belgium, confinement measures were introduced on the 13th of March 2020 to curb the spread of the coronavirus disease (COVID-19). These measures may affect health behaviours of the population such as eating habits, physical activity and alcohol consumption, which in turn can lead to weight gain resulting in overweight and obesity, increasing the risk of several chronic diseases, but also of severe COVID-19. The purpose of this study is to assess the impact of confinement measures on health behaviours and their associations with weight gain.
Methods: Data were derived from the second national COVID-19 health survey. Data were collected between the 16th and the 23rd of April 2020. The recruitment of participants was based on snowball sampling via Sciensano's website, invitations via e-mail and social media. The study sample includes participants aged 18 years and over with no missing data on the variables of interest (n = 28,029). The association between self-reported weight gain and health behaviour changes, adjusted for gender, age group and household composition was assessed through OR's (95% CI) calculated with logistic regression models, using post-stratification weights.
Results: Overall, 28.6% reported weight gain after 6 weeks of confinement. Higher odds of weight gain were observed among participants who increased or decreased their consumption of sugar-sweetened beverages (OR = 1.39 (1.15-1.68) and 1.29 (1.04-1.60), respectively), among those who increased their consumption of sweet or salty snacks (OR = 3.65 (3.27-4.07)), among those who became less physically active (OR = 1.91 (1.71-2.13)), and among those who increased their alcohol consumption (OR = 1.86 (1.66-2.08)).
Conclusions: The most important correlates of weight gain during confinement were an increased consumption of sweet or salty snacks and being less physically active. These findings confirm the impact of diet and exercise on short term weight gain and plead to take more action, in supporting people to achieve healthier behaviours in order to tackle overweight and obesity, especially during the COVID-19 pandemic.
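The odds ratios above (PUBMED:33618770) come from logistic regression models fitted with post-stratification weights. Below is a minimal sketch of that kind of analysis with invented variable names, weights and data; the weights are passed as frequency weights, which is only an approximation to a full design-based survey analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5000
df = pd.DataFrame({
    "weight_gain": rng.integers(0, 2, n),   # self-reported weight gain (hypothetical)
    "more_snacks": rng.integers(0, 2, n),
    "less_active": rng.integers(0, 2, n),
    "female": rng.integers(0, 2, n),
    "pweight": rng.uniform(0.5, 2.0, n),    # post-stratification weight (hypothetical)
})

X = sm.add_constant(df[["more_snacks", "less_active", "female"]])
fit = sm.GLM(df["weight_gain"], X,
             family=sm.families.Binomial(),
             freq_weights=df["pweight"]).fit(cov_type="HC1")  # robust SEs

ors = np.exp(fit.params.drop("const"))  # adjusted odds ratios
print(ors.round(2))
```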
abstract_id: PUBMED:25866674
The impact of incarceration on obesity: are prisoners with chronic diseases becoming overweight and obese during their confinement? Introduction: The association between incarceration and weight gain, along with the public health impact of former prisoners who are overweight or obese, warrants more investigation to understand the impact of prison life. Studies regarding incarceration's impact on obesity are too few to support assertions that prisons contribute to obesity and comorbid conditions. This study examined a statewide prison population over several years to determine weight gain.
Methods: Objective data for weight, height, and chronic diseases, along with demographics, were extracted from an electronic health record. These data were analyzed statistically to determine changes over time and between groups.
Results: As a total population, prisoners not only gained weight, but also reflected the distribution of BMIs for the state. There were differences within the population. Male prisoners gained significantly less weight than females. The population with chronic diseases gained less weight than the population without comorbid conditions. Prisoners with diabetes lost weight while hypertension's impact was negligible.
Conclusion: This study found that weight gain was a problem specific to females. However, this prison system appears to be providing effective chronic disease management, particularly for prisoners with diabetes and hypertension. Additional research is needed to understand the impact incarceration has on the female population.
abstract_id: PUBMED:34959806
Perceived Diet Quality, Eating Behaviour, and Lifestyle Changes in a Mexican Population with Internet Access during Confinement for the COVID-19 Pandemic: ESCAN-COVID19Mx Survey. Perceived changes in diet quality, emotional eating, physical activity, and lifestyle were evaluated in a group of Mexican adults before and during COVID-19 confinement. In this study, 8289 adults answered an online questionnaire between April and May 2020. Data about sociodemographic characteristics, self-reported weight and height, diet quality, emotional eating, physical activity, and lifestyle changes were collected. Before and after confinement, differences by sociodemographic characteristics were assessed with Wilcoxon, ANOVA, and linear regression analyses. Most participants were women (80%) between 18 and 38 years old (70%), with a low degree of marginalisation (82.8%) and a high educational level (84.2%); 53.1% had a normal weight and 31.4% were overweight. Half (46.8%) of the participants perceived a change in the quality of their diet. The Diet Quality Index (DQI) was higher during confinement (it improved by 3 points) in all groups, regardless of education level, marginalisation level, or place of residence (p < 0.001). Lifestyle changes were present among some of the participants: 6.1% stopped smoking, 12.1% stopped consuming alcohol, 53.3% slept later, 9% became more sedentary, 43% increased their screen time, and 81.6% increased their sitting and lying-down time. Mexicans with Internet access staying at home during COVID-19 confinement perceived positive changes in the quality of their diet, smoking, and alcohol consumption, but negative changes in the level of physical activity and sleep quality. These results emphasise the relevance of encouraging healthy lifestyle behaviours during and after times of crisis to prevent the risk of complications due to infectious and chronic diseases.
abstract_id: PUBMED:34240509
COVID-19 confinement and related well being measurement using the EQ-5D questionnaire: A survey among the Palestinian population. Purpose: This study aims to assess the effect of the COVID-19 confinement on the population wellbeing using the EQ-5D questionnaire.
Methods: After receiving the written permission from the EuroQol Research Foundation, an online-based survey was prepared and a total of 1380 participants were recruited via social media. The relationships of all the factors were studied as well as the scores of the EQ-5D including EQ-5D Index, Visual Analogue Scale (VAS), and each of the EQ-5D dimension. Linear regression for the Index and VAS and Logistic regression model was used to examine each dimension.
Results: The median EQ-5D Index and VAS scores were 0.65 (0.5-0.75) and 80 (60-90), respectively. The most frequently reported problem was anxiety/depression (67.3%), followed by usual activities (48.6%). The statistical analysis showed that the factors significantly associated with more reported problems in at least one EQ-5D dimension (P < .05) were: female gender, older age, being unmarried, low income, school-level education, living in refugee camps or villages, unemployment, having chronic diseases or pain, and obesity. It is important to note that participants who responded in November reported more problems than those who responded in December 2020. On the other hand, more problems were reported by participants who were infected, knew affected persons, did not have enough information, perceived a negative effect of confinement, and indicated having a high chance of infection (P < .05).
Conclusions: This work provides important evidence on the health status and wellbeing during the COVID-19 confinement in a sample of the Palestinian population, affecting almost all the aspects of the health state and wellbeing. This effect could be minimised by improving the COVID-19 preventive education and monitoring that can play an important role in all health and life aspects among the Palestinian population in facing this pandemic.
abstract_id: PUBMED:22875879
Migraine, weight gain and the risk of becoming overweight and obese: a prospective cohort study. Background: Some cross-sectional studies have suggested an association between migraine and increased body weight. However, prospective data on the association are lacking.
Methods: We conducted a prospective cohort study among 19,162 participants in the Women's Health Study who had a body mass index (BMI) of 18.5 to <25 kg/m2 at baseline. Migraine was self-reported by standardized questionnaires. Main outcome measures were incident overweight (BMI ≥ 25 kg/m2), incident obesity (BMI ≥ 30 kg/m2) and mean weight change. Age- and multivariable-adjusted hazard ratios (HRs) were calculated for the association between migraine and incident overweight and obesity. Differences in weight change were evaluated by analysis of covariance (ANCOVA).
Results: A total of 3,483 (18.2%) women reported any migraine history. After 12.9 years of follow-up, 7916 incident overweight and 730 incident obesity cases occurred. Migraineurs had multivariable-adjusted HRs (95% confidence interval) of 1.11 (1.05-1.17) for becoming overweight and 1.00 (0.83-1.19) for becoming obese. These associations remained stable after censoring for chronic diseases and were similar according to migraine aura status. Multivariable-adjusted mean weight change from baseline to the end of study was +4.7 kg for migraineurs and +4.4 kg for women without migraine (p = 0.02).
Conclusion: Results of this large prospective study of middle-aged women do not indicate a consistent association between migraine and incident overweight, obesity or relevant weight gain.
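The age- and multivariable-adjusted hazard ratios reported above (PUBMED:22875879) are the output of proportional-hazards models. A minimal sketch using the lifelines library on simulated data is shown below; the column names and values are invented, not the Women's Health Study data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 3000
df = pd.DataFrame({
    "migraine": rng.integers(0, 2, n),            # 1 = any migraine history (hypothetical)
    "age": rng.integers(45, 70, n),
    "followup_years": rng.uniform(1, 13, n),      # time at risk until event or censoring
    "became_overweight": rng.integers(0, 2, n),   # event indicator
})

# Cox proportional-hazards model: all remaining columns are used as covariates
cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="became_overweight")
cph.print_summary()   # the exp(coef) column gives the adjusted hazard ratios
```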
abstract_id: PUBMED:17721642
Quantifying the impact of obesity category on major chronic diseases in Canada. Adverse health effects differ with various levels of obesity, but limited national data existed previously for the Canadian population. We examined the associations of sociodemographic and behavioral factors with obesity levels in Canada, and measured the impact of each level on major chronic diseases. Data were extracted from the 2003 Canadian Community Health Survey. We grouped overweight/obese participants aged 18 years and over into four levels based on body mass index (BMI, kg/m2): overweight (25.0-29.9), class I obesity (30.0-34.9), class II obesity (35.0-39.9), and class III obesity (extreme/clinical obesity, BMI ≥ 40.0). We used logistic regression models to identify potential risk factors for the obesity levels and to estimate adjusted odds ratios (ORs) for major chronic diseases related to each level. We calculated population attributable risks (PARs) to help understand the impact of obesity levels on these chronic diseases. The overall prevalence of obesity was 16.2% in men and 14.6% in women, and the prevalence of obesity III was 1.0% in men and 1.4% in women. All levels of obesity increased with age, but then decreased in elderly participants. The prevalence of diabetes, hypertension, heart disease, arthritis, and asthma increased with increasing BMI level, and the highest values appeared in participants at the obesity III level. PAR was highest in the obesity III group for hypertension, followed by diabetes, and lowest for heart disease. When correlated with risk factors, fewer statistically significant ORs, compared with the normal weight category, appeared for the obesity II and III levels than for overweight and obesity I. ORs for the combination of low education level, infrequent exercise, and low household income rose significantly with BMI levels until the obesity II level, and at the obesity III level the OR remained at the same level as for obesity II, most significantly in women. These results suggest that the impact of obesity on Canadians' health should be studied and dealt with by obesity level. The greatest impact of clinical obesity was on hypertension and diabetes control in Canada.
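Population attributable risks of the kind reported above (PUBMED:17721642) are commonly computed with Levin's formula, PAR = p(RR - 1) / (1 + p(RR - 1)), where p is the exposure prevalence and RR the relative risk (odds ratios are often substituted when the outcome is uncommon). The sketch below uses invented numbers, not the survey's estimates.

```python
def population_attributable_risk(prevalence: float, relative_risk: float) -> float:
    """Levin's formula: the fraction of cases attributable to the exposure."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Hypothetical example: a 1.2% prevalence of class III obesity and a relative
# risk of 4 for hypertension would give a PAR of roughly 3.5%.
print(f"{population_attributable_risk(0.012, 4.0):.3f}")
```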
abstract_id: PUBMED:28453069
Double burden malnutrition during growth: is it becoming a reality in Colombia? Several reports in the last decade have described the coexistence of an accelerated increase in obesity with micronutrient deficiencies in developed countries, and this pattern is becoming evident in developing nations. This condition may be especially deleterious in children and adolescents, with consequences for metabolic risk and growth from early in life. This review describes the evidence on double burden malnutrition during the growth period, focused on eight nutrients (iron, zinc, calcium, vitamin D, vitamin A, sodium, folic acid and vitamin B12) and the biological mechanisms associating it with non-communicable disease across the life span. In Colombia, according to the last national health and nutrition surveys (2005 vs. 2010), there is an increase in the prevalence of obesity in all age groups; this is accompanied by alarming figures of zinc and vitamin A deficiency and anemia in children under 5 years. This reality of double burden malnutrition should be considered urgently on the public health agenda, implementing robust strategies adapted to the reality of the country and based on scientific evidence to prevent morbidity and mortality associated with this condition.
abstract_id: PUBMED:17286260
Are the circumpolar Inuit becoming obese? This paper reviews the ethnographic, historical, and recent epidemiological evidence of obesity among the Inuit/Eskimo in the circumpolar region. The Inuit are clearly at higher risk for obesity than other populations globally, if "universal" measures based on body mass index (BMI) and waist circumference and criteria such as those of WHO are used. Inuit women in particular have very high mean waist circumference levels in international comparisons. Given the limited trend data, BMI-defined obesity is more common today than even as recently as three decades ago. Inuit are not immune from the health hazards associated with obesity. However, the "dose-response" curves for the impact of obesity on metabolic indicators such as plasma lipids and blood pressure are lower than in other populations. Long-term, follow-up studies are needed to determine the metabolic consequences and disease risks of different categories of obesity. At least in one respect, the higher relative sitting height among Inuit, obesity measures based on BMI may not be appropriate for the Inuit. Ultimately, it is important to go beyond simple anthropometry to more accurate determination of body composition studies, and also localization of body fat using imaging techniques such as ultrasound and computed tomography. Internationally, there is increasing recognition of the need for ethnospecific obesity criteria. Notwithstanding the need for better quality epidemiological data, there is already an urgent need for action in the design and evaluation of community-based health interventions, if the emerging epidemic of obesity and other chronic diseases are to be averted.
abstract_id: PUBMED:37432239
Vitamin D Levels in the Pre- and Post-COVID-19 Pandemic Periods and Related Confinement at Pediatric Age. Coronavirus disease 2019 (COVID-19) restrictions have been correlated with vitamin D deficiency in children, but some uncertainties remain. We retrospectively studied vitamin 25-(OH) D blood levels in 2182 Italian children/adolescents hospitalized for various chronic diseases in the year before (n = 1052) and after (n = 1130) the nationwide lockdown. The type of underlying disease, gender, and mean age (91 ± 55 and 91 ± 61 months, respectively) of patients included in the two periods were comparable. Although mean levels were the same (p = 0.24), deficiency status affected a significantly higher number of subjects during the lockdown period than in the pre-COVID period (p = 0.03), particularly in summer (p = 0.02), and there was also a smoothing of seasonal variations in vitamin D levels. Particularly at risk were males (OR = 1.22; p = 0.03), the 1-5 year age group (OR = 1.57; p < 0.01) and the 6-12 year age group (OR = 1.30; p = 0.04). Infants appeared not to be affected (p = 1.00). In the post-COVID period, the risk of vitamin D deficiency was unchanged in disease-specific groups. However, the proportion of deficiency or severe deficiency differed significantly in the subgroup with endocrinopathy (higher; Chi-square p = 0.04), and with respiratory problems and obesity (lower; Chi-square p = 0.01 and p < 0.01, respectively). Conflicting/opposite literature results advocate for further studies to clearly indicate the need for supplementation during possible future periods of confinement.
abstract_id: PUBMED:23055232
The DYNAMO-HIA model: an efficient implementation of a risk factor/chronic disease Markov model for use in Health Impact Assessment (HIA). In Health Impact Assessment (HIA), or priority-setting for health policy, effects of risk factors (exposures) on health need to be modeled, such as with a Markov model, in which exposure influences mortality and disease incidence rates. Because many risk factors are related to a variety of chronic diseases, these Markov models potentially contain a large number of states (risk factor and disease combinations), providing a challenge both technically (keeping down execution time and memory use) and practically (estimating the model parameters and retaining transparency). To meet this challenge, we propose an approach that combines micro-simulation of the exposure information with macro-simulation of the diseases and survival. This approach allows users to simulate exposure in detail while avoiding the need for large simulated populations because of the relative rareness of chronic disease events. Further efficiency is gained by splitting the disease state space into smaller spaces, each of which contains a cluster of diseases that is independent of the other clusters. The challenge of feasible input data requirements is met by including parameter calculation routines, which use marginal population data to estimate the transitions between states. As an illustration, we present the recently developed model DYNAMO-HIA (DYNAMIC MODEL for Health Impact Assessment) that implements this approach.
Answer: The study presented in abstract PUBMED:25866674 investigated the impact of incarceration on obesity, particularly focusing on whether prisoners with chronic diseases are becoming overweight and obese during their confinement. The study utilized objective data for weight, height, and chronic diseases, along with demographics, extracted from an electronic health record. The analysis aimed to determine changes in weight over time and between different groups within a statewide prison population.
The results indicated that, as a total population, prisoners gained weight and reflected the distribution of BMIs for the state. However, there were differences within the population. Male prisoners gained significantly less weight than females. Interestingly, the population with chronic diseases gained less weight than the population without comorbid conditions. Specifically, prisoners with diabetes lost weight, while the impact of hypertension on weight gain was negligible.
The study concluded that weight gain was a particular problem for female prisoners. However, the prison system appeared to be providing effective chronic disease management, especially for prisoners with diabetes and hypertension. The study suggested that additional research is needed to understand the impact of incarceration on the female population and to explore the effectiveness of chronic disease management in prisons. |
Instruction: Should function and bother be measured and reported separately for prostate cancer quality-of-life domains?
Abstracts:
abstract_id: PUBMED:16979720
Should function and bother be measured and reported separately for prostate cancer quality-of-life domains? Objectives: To evaluate the psychometric properties of the three domains of bowel, urinary, and sexual function as they were measured in the Prostate Cancer Outcomes Study and to examine their use in different research and practice settings. Leading prostate cancer health-related quality-of-life questionnaires include questions that measure patients' bowel, urinary, and sexual function and their perceived annoyance (or bother) caused by limited functioning. Published results are mixed on whether function and bother should be reported independently or combined into a single domain.
Methods: Statistical tools from classical measurement theory and factor analytic methods were used to evaluate the psychometric properties of the Prostate Cancer Outcomes Study disease-specific scales. The findings from studies of other prostate cancer outcomes scales and clinical input were included to formulate the conclusions.
Results: Factor analysis results uncovered a multidimensional structure within two of the three domains. The urinary domain consisted of items measuring two factors: incontinence and urinary obstructive symptoms. Sexual dysfunction consisted of two dimensions: interest in sexual activity and erectile function.
Conclusions: These empirical results suggest that bowel dysfunction and urinary incontinence can each be combined with measures of bother to produce overall measures of function; however, evidence was present for the need for separate measures of sexual function, sexual interest, and perceived bother with sexual function. For informing patient-doctor communications, function and bother on all three domains should be reported separately, because treatment decisions or symptom management may vary depending on a patient's perceived concern about their health-related quality-of-life.
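The factor-analytic step described above (PUBMED:16979720) can be illustrated with a small exploratory factor analysis. The simulated item responses below are placeholders, and a real analysis of ordinal questionnaire items would typically also examine rotations, polychoric correlations and model fit.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(4)
n = 500
# Two hypothetical latent traits generating four items
incontinence = rng.normal(size=n)
obstruction = rng.normal(size=n)
items = np.column_stack([
    incontinence + rng.normal(scale=0.5, size=n),   # leakage item
    incontinence + rng.normal(scale=0.5, size=n),   # pad-use item
    obstruction + rng.normal(scale=0.5, size=n),    # weak-stream item
    obstruction + rng.normal(scale=0.5, size=n),    # incomplete-emptying item
])

fa = FactorAnalysis(n_components=2, random_state=0).fit(items)
print(np.round(fa.components_, 2))   # loadings: rows = factors, columns = items
```

A clean two-factor loading pattern of this kind is the sort of evidence used above to argue that the urinary items measure incontinence and obstructive symptoms separately.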
abstract_id: PUBMED:20184571
Changes in specific domains of sexual function and sexual bother after radical prostatectomy. Objective: To quantitatively assess the effect of radical prostatectomy (RP) on the specific domains that comprise overall sexual function (SF), focusing on the relationships among these domains and overall SF, and to identify predictors for recovery of SF over time, as a decline in SF and sexual bother (SB) are known potential complications of treatment for prostate cancer.
Patients And Methods: Within the Cancer of the Prostate Strategic Urologic Research Endeavor database, we identified men diagnosed between 1995 and 2001 with localized prostate cancer treated with RP. SF and SB outcomes, measured using the University of California Los Angeles Prostate Cancer Index, were assessed at 6-month intervals for 4 years after RP.
Results: In all, 620 men met the study criteria; at 6 months after RP, overall and all the specific domains of SF declined, with improvement in most specific domains by 2 years after RP. The greatest declines were in the ability to achieve erections, high-quality erections, and frequent erections; these domains were also most strongly correlated with overall SF. Sexual desire was relatively preserved, and there was a weak correlation between overall SF and sexual desire after RP, when there was the greatest discrepancy between sexual desire and other domains of function. SB showed continued improvement over time to 4 years but was not well correlated with any measurements of SF assessed. Younger age, college education, sexual aid and medication use, the absence of comorbid conditions, and nerve-sparing surgery were predictive of significant recovery of function in several specific domains of SF.
Conclusions: RP affects specific domains of SF to differing degrees. Compromised erectile function is most commonly reported among these specific domains and seems to play a more dominant role in determining overall SF, but notably none of the domains of function were closely linked to SB. Because education is protective in the perception of bother, appropriate counselling and the setting of expectations for outcomes in overall and specific domains of SF might lead to improved quality of life after treatment for prostate cancer.
abstract_id: PUBMED:25059094
Sexual bother in men with advanced prostate cancer undergoing androgen deprivation therapy. Introduction: Men with advanced prostate cancer (APC) undergoing androgen deprivation therapy (ADT) often experience distressing sexual side effects. Sexual bother is an important component of adjustment. Factors associated with increased bother are not well understood.
Aims: This study sought to describe sexual dysfunction and bother in APC patients undergoing ADT, identify socio-demographic and health/disease-related characteristics related to sexual bother, and evaluate associations between sexual bother and psychosocial well-being and quality of life (QOL).
Methods: Baseline data of a larger psychosocial intervention study was used. Pearson's correlation and independent samples t-test tested bivariate relations. Multivariate regression analysis evaluated relations between sexual bother and psychosocial and QOL outcomes.
Main Outcome Measures: The Expanded Prostate Cancer Index Composite sexual function and bother subscales, Center for Epidemiologic Studies Depression Scale, Functional Assessment of Cancer Therapy--General, and Dyadic Adjustment Scale were the main outcome measures.
Results: Participants (N = 80) were 70 years old (standard deviation [SD] = 9.6) and reported 18.7 months (SD = 17.3) of ADT. Sexual dysfunction (mean = 10.1; SD = 18.0) was highly prevalent. Greater sexual bother (lower scores) was related to younger age (β = 0.25, P = 0.03) and fewer months of ADT (β = 0.22, P = 0.05). Controlling for age, months of ADT, current and precancer sexual function, sexual bother correlated with more depressive symptoms (β = -0.24, P = 0.06) and lower QOL (β = 0.25, P = 0.05). Contrary to hypotheses, greater sexual bother was related to greater dyadic satisfaction (β = -0.35, P = 0.03) and cohesion (β = -0.42, P = 0.01).
Conclusions: The majority of APC patients undergoing ADT will experience sexual dysfunction, but there is variability in their degree of sexual bother. Psychosocial aspects of sexual functioning should be considered when evaluating men's adjustment to ADT effects. Assessment of sexual bother may help identify men at risk for more general distress and lowered QOL. Psychosocial interventions targeting sexual bother may complement medical treatments for sexual dysfunction and be clinically relevant, particularly for younger men and those first starting ADT.
abstract_id: PUBMED:28557112
Effect of androgen deprivation therapy on sexual function and bother in men with prostate cancer: A controlled comparison. Objectives: The adverse sexual effects of androgen deprivation therapy (ADT) on men with prostate cancer have been well described. Less well known is the relative degree of sexual dysfunction and bother associated with ADT compared to other primary treatment modalities such as radical prostatectomy. We sought to describe the trajectory and relative magnitude of changes in sexual function and bother in men on ADT and to examine demographic and clinical predictors of ADT's adverse sexual effects.
Methods: Prostate cancer patients treated with ADT (n = 60) completed assessments of sexual function and sexual bother 3 times during a 1-year period after the initiation of ADT. Prostate cancer patients treated with radical prostatectomy only and not receiving ADT (n = 85) and men with no history of cancer (n = 86) matched on age and education completed assessments at similar intervals.
Results: Androgen deprivation therapy recipients reported worsening sexual function and increasing bother over time compared to controls. Effect sizes for the differences in sexual function were large to very large, and for bother were small to very large. Age younger than 83 years predicted relatively poorer sexual function, and age younger than 78 years predicted greater sexual bother at 12 months in men on ADT compared to men not on ADT.
Conclusions: Most men on ADT for prostate cancer will never return to baseline levels of sexual function. Interventions focused on sexual bother over function and designed to help couples build and maintain satisfying relationship intimacy are likely to more positively affect men's psychological well-being while on ADT than medical or sexual aids targeting sexual dysfunction.
abstract_id: PUBMED:22341412
Bother problems in prostate cancer patients after curative treatment. Background: Most previous studies of prostate cancer (CaP) patients have focused on functional side effects. In the decision about treatment, the patients' subjective experience of function (bother) should also be considered. In this prospective study of CaP patients, we used both categorical and dimensional methods to examine changes of sexual, urinary, and bowel bother after robot-assisted prostatectomy (RALP), after high dose radiotherapy alone (RAD), or with adjuvant androgen deprivation therapy (RAD + ADT). We also studied the associations between psychosocial factors and post-treatment bother and the correlations between bother and function at the follow-up time points.
Methods: A total of 462 patients (n = 150 RALP, n = 104 RAD, and n = 208 RAD + ADT) completed questionnaires at all time points (baseline, 3, 6, 12, and 24 months post-treatment). Our outcome measures were the proportion of patients who regained their baseline bother score (PBS-100) and the mean group scores on sexual, urinary, and bowel bother based on the UCLA-PCI questionnaire. Generalized estimating equation (GEE) analysis identified the time points at which various variables were significantly associated with bother at 2 years. The time points at which the proportions of bothered patients became stable were defined.
Results: The different treatment modalities showed distinctive patterns of urinary, sexual, and bowel bother over time. RALP produced sexual and urinary bother, RAD + ADT patients reported bowel and sexual bother, while RAD patients suffered mainly from bowel bother. According to GEE, the bother scores at 3 or 6 months were significantly associated with the bother scores at 24 months for all groups. PBS-100 and stability of the recovered bother domains were reached at 3 to 6 months. Strong correlations were observed between function and bother for the urinary and bowel domains but not for the sexual domain. The associations between psychosocial factors and bother were weak.
Conclusions: Two years after treatment, RALP patients mainly reported sexual and urinary bother, while irradiated patients were bothered by bowel dysfunction. Sexual, urinary, and bowel bother reached stable proportions at 3 to 6 months post-treatment. Based on GEE, bother at 6 months was in general significantly associated with bother at 24 months.
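The study above relates early post-treatment bother to bother at 24 months using generalized estimating equations (GEE). The sketch below shows how such a repeated-measures model could be fitted with statsmodels on simulated long-format data; the variable names, the fabricated scores, and the exchangeable working correlation are assumptions for illustration, not the authors' actual model specification.

```python
# Hedged sketch of a GEE analysis of repeated bother scores.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_patients, visits = 200, [3, 6, 12, 24]

# Simulated long-format data: one row per patient per follow-up month.
df = pd.DataFrame({
    "patient_id": np.repeat(np.arange(n_patients), len(visits)),
    "months": np.tile(visits, n_patients),
    "treatment": np.repeat(rng.choice(["RALP", "RAD"], n_patients), len(visits)),
})
# Fabricated bother score that improves with time and differs by treatment.
df["bother"] = (
    60 - 0.5 * df["months"]
    + np.where(df["treatment"] == "RAD", 5, 0)
    + rng.normal(0, 10, len(df))
)

# GEE with an exchangeable working correlation to account for the
# repeated measurements within each patient.
model = smf.gee(
    "bother ~ months + C(treatment)",
    groups="patient_id",
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(),
    family=sm.families.Gaussian(),
)
print(model.fit().summary())
```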
abstract_id: PUBMED:15247718
Bowel function and bother after treatment for early stage prostate cancer: a longitudinal quality of life analysis from CaPSURE. Purpose: We measured bowel function and bowel bother longitudinally the first 2 years after treatment for early stage prostate cancer.
Materials And Methods: We studied bowel function and bother in 1,584 men recently diagnosed with early stage prostate cancer and followed for 2 years after radical prostatectomy, external beam radiation or brachytherapy. Principal outcomes were assessed with the UCLA Prostate Cancer Index, a validated instrument that includes these 2 domains. Multivariate analyses were conducted to ascertain significant predictors of bowel function and bother. Subjects were drawn from Cancer of the Prostate Strategic Urologic Research Endeavor (CaPSURE, TAP Pharmaceutical Products, Inc., Lake Forest, Illinois), a national, longitudinal registry of men with prostate cancer.
Results: Men treated with external beam radiation or brachytherapy suffered worse bowel function and were more bothered by it than men treated surgically. After an initial period of posttreatment impairment, all 3 groups demonstrated improvement with time in both domains, although bowel bother persisted longer in men treated with external beam radiation. Surgery patients reached a steady state by 3 months, while those treated with external beam radiation or brachytherapy continued to improve for more than a year after treatment. Older men were more bothered by bowel dysfunction than younger men. Ethnicity, comorbidity and education did not affect either bowel function or bother.
Conclusions: Patients undergoing surgery, external beam radiation or brachytherapy have different longitudinal profiles of bowel function and bother during the first 2 years after treatment. Bowel function and bother are worse after external beam radiation but they are also impaired after brachytherapy. Men choosing surgery experience transient impairment in the bowel domains. This information may be useful to patients making treatment decisions for early stage prostate cancer.
abstract_id: PUBMED:20399462
Health related quality of life for men treated for localized prostate cancer with long-term followup. Purpose: Men who undergo primary treatment for prostate cancer can expect changes in health related quality of life. Long-term changes after treatment are not yet fully understood. We characterized health related quality of life evolution from baseline to 4 years after treatment.
Materials And Methods: We identified 1,269 men in CaPSURE who underwent primary treatment for clinically localized prostate cancer and completed followup health related quality of life questionnaires for at least 4 years. The men underwent radical prostatectomy, external beam radiotherapy, brachytherapy, combined external beam radiotherapy/brachytherapy or androgen deprivation therapy. Health related quality of life was measured using patient reported questionnaires. Effects of select covariates on quality of life were measured with a multivariate mixed model.
Results: Age at diagnosis, time from treatment and primary treatment were significant predictors of health related quality of life in all domains (p <0.05) except primary treatment on sexual bother. Men who underwent radical prostatectomy experienced the most pronounced worsening urinary function but also had the greatest recovery. All treatments worsened urinary bother, and sexual function and bother. All forms of radiotherapy moderately worsened bowel function and bother after treatment but eventual recovery to baseline was noted.
Conclusions: Age at diagnosis, time from treatment and primary treatment type affect health related quality of life. Treatment has a greater impact on disease specific than general health related quality of life. All treatments adversely affect urinary and sexual function. Most adverse changes develop immediately after treatment. Recovery occurs mostly within 2 years after treatment with little change beyond 3 years.
abstract_id: PUBMED:29304808
Prospective longitudinal outcomes of quality of life after laparoscopic radical prostatectomy compared with retropubic radical prostatectomy. Background: There have been few reports on health-related quality of life (HRQOL) after laparoscopic radical prostatectomy (LRP) in Japanese patients. The aim of this study is to assess changes in HRQOL during 36 months after LRP compared with retropubic radical prostatectomy (RRP).
Methods: The subjects were 105 consecutive patients treated with LRP between 2011 and 2012. HRQOL was evaluated using the International Prostate Symptom Score (IPSS), Medical Outcome Study 8-Items Short Form Health Survey (SF-8), and Expanded Prostate Cancer Index Composite (EPIC) at baseline and 1, 3, 6, 12 and 36 months after surgery. These results were compared with data for 107 consecutive patients treated with RRP between 2005 and 2007. The comparison between LRP and RRP was examined at every time point by Mann-Whitney U-test and chi-square test. Multiple linear regression analysis was used to identify independent factors related to the urinary domain in EPIC.
Results: The IPSS change was similar in both groups. The LRP group had a better SF-8 mental component summary score at baseline and a better SF-8 physical component summary score at 1 month after surgery. In EPIC, urinary function and bother were worse after LRP, but improved at 12 months and did not differ significantly from those after RRP; however, these factors then worsened again at 36 months after LRP. Urinary incontinence was also worse at 36 months after LRP, compared to RRP. In patients treated with nerve-sparing surgery, urinary function and urinary incontinence were similar and good at 12 and 36 months in both groups. Bowel function and bother, and sexual function and bother were similar in both groups and showed no changes from 12 to 36 months. Age and salvage radiotherapy were independent predictors of incontinence (daily use of two or more pads) in multivariate analysis. Surgical procedure was not an independent factor for incontinence, but incontinence defined as use of one pad or more was associated with the surgical procedure.
Conclusions: Urinary function and bother at 36 months were worse after LRP than after RRP. Age, salvage radiotherapy and surgical procedure were associated with urinary incontinence after 36 months.
abstract_id: PUBMED:32153216
Exploration of baseline patient-reported side effect bother from cancer therapy. Background: Patient reports of expected treatment side effects are increasingly collected as part of the assessment of patient experience in clinical trials. A global side effect item that is patient-reported has the potential to inform overall tolerability. Therefore, the aim of this study was to examine the completion and distribution of such a global single-item measure of side effect burden in five cancer clinical trials.
Methods: Data from five trials from internal Food and Drug Administration databases that included the Functional Assessment of Cancer Therapy-General single-item measure of overall side effect burden (i.e. impact on degree of bother) were analyzed. Completion rates for the side effect bother item, items adjacent to this item, and two non-adjacent items on the Functional Assessment of Cancer Therapy-General that are related to health-related quality of life were calculated at the baseline assessment and at the 3-month assessment. To evaluate the distribution, the percentage of patients reporting high levels (quite a bit or very much bother) of side effect bother at baseline and 3 months was assessed.
Results: Completion rates for all items were at least 80% regardless of time point or trial population. However, in three of the five trials, completion rates for the side effect bother item were lower at baseline compared to adjacent and non-adjacent items. This difference was not observed at 3 months. Up to 9.4% of patients reported high levels of side effect bother at baseline.
Conclusion: Patients may enter trials already reporting some bother from side effects. This can make interpretation of results with respect to the investigational agent under study challenging. Patients may skip an item evaluating side effect bother at baseline, suggesting some difficulty with interpretation of what is being asked. Further study of the wording and utility of a baseline side effect bother assessment is warranted.
abstract_id: PUBMED:11061894
Urinary function and bother after radical prostatectomy or radiation for prostate cancer: a longitudinal, multivariate quality of life analysis from the Cancer of the Prostate Strategic Urologic Research Endeavor. Purpose: We measure the effect of time on urinary function and bother during the first 2 years following treatment for early stage prostate cancer.
Materials And Methods: We studied urinary function and bother in 564 men recently diagnosed with early stage prostate cancer and treated with radiotherapy or radical prostatectomy with or without nerve sparing. Outcomes were assessed with the UCLA Prostate Cancer Index, which is a validated, health related quality of life instrument that includes these 2 domains. To minimize the influence of other factors we adjusted for age, co-morbidity, general health, pad use, anticholinergics or procedures for urethral stricture. All subjects were drawn from the Cancer of the Prostate Strategic Urologic Research Endeavor (CaPSURE), which is a national longitudinal database.
Results: Urinary function improved with time during the first year after surgery but remained fairly constant during year 2. Urinary function remained stable throughout the 2 years after radiation. Urinary bother was worse after radiation throughout the 2 years, although it improved markedly by the end of year 1. Age, ethnicity and co-morbidity did not impact urinary function or bother but being married did have an advantage.
Conclusions: Patients undergoing surgery or radiation showed different longitudinal profiles of urinary function and bother during the first 2 years after treatment.
Answer: Yes, function and bother should be measured and reported separately for prostate cancer quality-of-life domains. The psychometric properties of the Prostate Cancer Outcomes Study disease-specific scales suggest that bowel dysfunction and urinary incontinence can each be combined with measures of bother to produce overall measures of function. However, there is evidence for the need for separate measures of sexual function, sexual interest, and perceived bother with sexual function. Reporting function and bother separately is important for informing patient-doctor communications, as treatment decisions or symptom management may vary depending on a patient's perceived concern about their health-related quality-of-life (PUBMED:16979720).
Additionally, changes in specific domains of sexual function and sexual bother after radical prostatectomy indicate that compromised erectile function is most commonly reported among these specific domains and seems to play a more dominant role in determining overall sexual function. However, none of the domains of function were closely linked to sexual bother, suggesting that they should be considered separately to provide a more comprehensive understanding of a patient's quality of life after treatment for prostate cancer (PUBMED:20184571).
Moreover, in men with advanced prostate cancer undergoing androgen deprivation therapy, there is variability in their degree of sexual bother, which is related to younger age and fewer months of ADT. Sexual bother also correlates with more depressive symptoms and lower quality of life, indicating that it is an important aspect of a patient's experience that should be assessed independently of sexual function (PUBMED:25059094).
In conclusion, the evidence supports the notion that function and bother are distinct components of quality of life in prostate cancer patients and should be measured and reported separately to better understand and address the individual needs of patients (PUBMED:16979720; PUBMED:20184571; PUBMED:25059094). |
Instruction: Should pre-operative left atrial volume receive more consideration in patients with degenerative mitral valve disease undergoing mitral valve surgery?
Abstracts:
abstract_id: PUBMED:27855288
Should pre-operative left atrial volume receive more consideration in patients with degenerative mitral valve disease undergoing mitral valve surgery? Background: Severe primary mitral regurgitation (MR) carries a significant incidence of mortality and morbidity. Though a number of prognostic factors have been identified, the best timing for mitral valve repair is still debated. We assessed the role of Left Atrial Volume Indexed (LAVI) as predictor of adverse events after mitral valve surgery.
Methods: 134 patients with severe MR were studied with a follow-up of 42±16 months. Endpoints were Post-Operative Atrial Fibrillation (POAF), atrial and ventricular remodeling (LARR/LVRR) and correlation with outcome. POAF was defined as AF occurring within 2 weeks and late AF (LAF) more than 2 weeks after surgery. LARR was defined as LAVI reduction ≥15% and LVRR as any reduction of ventricular mass after surgery.
Results: Forty-one patients experienced POAF, 26 had LAF. Pre-operative LAVI was an independent risk factor for POAF (OR 1.03, CI [1.00-1.06], p=0.01), LAF (OR 1.03, CI [1.00-1.06], p=0.02), LARR and LVRR (OR 1.04, CI [1.01-1.07], p=0.002, respectively). LARR was found in 75 patients, while LVRR in 111. Patients with heart remodeling had less incidence of LAF and cardiac adverse events, better diastolic function and improved their NYHA class after surgery.
Conclusions: LAVI should be given more weight in decision making for patients with MR, as it predicts POAF and LAF as well as reverse atrial and ventricular remodeling, both associated with long-term outcome.
abstract_id: PUBMED:21935581
Atrial function after left atrial epicardial cryoablation for atrial fibrillation in patients undergoing mitral valve surgery. Purpose: To explore the effects on atrial and ventricular function of restoring sinus rhythm (SR) after epicardial cryoablation and closure of the left atrial appendage (LAA) in patients with mitral valve disease and atrial fibrillation (AF) undergoing surgery.
Methods: Sixty-five patients with permanent AF were randomized to mitral valve surgery combined with left atrial epicardial cryoablation and LAA closure (ABL group, n = 30) or to mitral valve surgery alone (control group, n = 35). Two-dimensional and Doppler echocardiography were performed before and 6 months after surgery.
Results: At 6 months, 73% of the patients in the ABL group and 46% of the controls were in SR. Patients in SR at 6 months had a reduction in their left ventricular diastolic diameter while the left ventricular ejection fraction was unchanged. In patients remaining in AF, the left ventricular ejection fraction was lower than at baseline. The left atrial diastolic volume was reduced after surgery, more in patients with SR than AF. In patients in SR, the peak velocity during the atrial contraction and the reservoir function were lower in the ABL group than in the control group.
Conclusions: In patients in SR, signs of atrial dysfunction were observed in the ABL but not the control group. Atrial dysfunction may have existed before surgery, but the difference between the groups implies that the cryoablation procedure and/or closure of the LAA might have contributed.
abstract_id: PUBMED:15711196
Ablation of atrial fibrillation with mitral valve surgery. Purpose Of Review: Recent advances in understanding of the pathogenesis of atrial fibrillation and development of new technology have resulted in a surge of interest in the surgical ablation of atrial fibrillation, particularly in patients with mitral valve disease. For patients with both mitral valve dysfunction and atrial fibrillation, a variety of new approaches are available to enable a complete operation that includes both mitral valve repair and ablation of atrial fibrillation. The purposes of this review are to review the rationale for surgical ablation of atrial fibrillation (AF) in mitral valve patients, describe the classic Maze procedure and its results, detail new approaches to surgical ablation of AF, emphasize the importance of the left atrial appendage, and consider challenges and future directions in the ablation of AF in mitral valve patients.
Recent Findings: Left untreated, atrial fibrillation increases mortality and morbidity in patients undergoing mitral valve surgery. While the Maze procedure effectively eliminates atrial fibrillation in most of these patients, its complexity and increased operative time have precluded widespread application. New operations that use alternative energy sources to create left atrial lesion sets ablate atrial fibrillation in 60 to 80% of patients having mitral valve surgery.
Summary: In mitral valve patients with atrial fibrillation of more than 6 months' duration, the operative strategy should include both mitral valve surgery and ablation of atrial fibrillation. In many cases, these procedures can be performed minimally invasively. Refinements in mapping and ablation technology are on the horizon, and these will facilitate more widespread application of minimally invasive approaches and further improve results.
abstract_id: PUBMED:27986559
Left atrial appendage in rheumatic mitral valve disease: The main source of embolism in atrial fibrillation Objective: To demonstrate that surgical removal of the left atrial appendage in patients with rheumatic mitral valve disease and long standing persistent atrial fibrillation decreases the possibility of stroke. This also removes the need for long-term oral anticoagulation after surgery.
Method: A descriptive, prospective, observational study was conducted on 27 adult patients with rheumatic mitral valve disease and long standing persistent atrial fibrillation, who had undergone mitral valve surgery and surgical removal of the left atrial appendage. Oral anticoagulation was stopped in the third month after surgery. The end-point was the absence of embolic stroke. An assessment was also made of postoperative embolism formation in the left atrium using transthoracic echocardiography.
Results: None of the patients showed embolic stroke after the third post-operative month. Only one patient exhibited transient ischaemic attack on warfarin therapy within the three postoperative months. Left atrial thrombi were also found in 11 (40.7%) cases during surgery. Of these, 6 (54.5%) had had embolic stroke, with no statistical significance (P=.703).
Conclusions: This study suggests there might be signs that the left atrial appendage may be the main source of emboli in rheumatic mitral valve disease, and its resection could eliminate the risk of stroke in patients with rheumatic mitral valve disease and long-standing persistent atrial fibrillation.
abstract_id: PUBMED:33893493
Does gender bias affect outcomes in mitral valve surgery for degenerative mitral regurgitation? Objectives: This study was conducted to determine if gender bias explains the worse outcomes in women than in men who undergo mitral valve surgery for degenerative mitral regurgitation.
Methods: Patients who underwent mitral valve surgery for degenerative mitral regurgitation with or without concomitant ablation surgery for atrial fibrillation were identified from the Cardiovascular Research Database of the Clinical Trial Unit of the Bluhm Cardiovascular Institute at Northwestern Memorial Hospital and were defined according to the Society of Thoracic Surgery National Adult Cardiac Surgery Database. Of the 1004 patients (33% female, mean age 62.1 ± 12.4 years; 67% male, mean age 60.1 ± 12.4 years) who met this criteria, propensity score matching was utilized to compare sex-related differences.
Results: Propensity score matching of 540 patients (270 females, mean age 61.0 ± 12.2; 270 males, mean age 60.9 ± 12.3) demonstrated that 98% of mitral valve surgery performed in both groups was mitral valve repair and 2% was mitral valve replacement. Preoperative CHA2DS2-VASc scores were higher in women and fewer women were discharged directly to their homes. Before surgery, women had smaller left heart chambers, lower cardiac outputs, higher diastolic filling pressures and higher volume responsiveness than men. However, preoperative left ventricular and right ventricular strain values, which are normally higher in women, were similar in the 2 groups, indicating worse global strain in women prior to surgery.
Conclusions: The worse outcomes reported in women compared to men undergoing surgery for degenerative mitral regurgitation are misleading and not based on gender bias except in terms of referral patterns. Men and women who present with the same type and degree of mitral valve disease and similar comorbidities receive the same types of surgical procedures and experience similar postoperative outcomes. Speckle-tracking echocardiography to assess global longitudinal strain of the left and right ventricles should be utilized to monitor for myocardial dysfunction related to chronic mitral regurgitation.
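Propensity score matching of the kind used above to compare men and women can be sketched as follows. The covariate names, the simulated cohort, and the choice of 1:1 nearest-neighbour matching with replacement are assumptions made for illustration only, not the matching specification used in the study.

```python
# Minimal 1:1 nearest-neighbour propensity-score matching sketch on a
# fabricated cohort; real analyses also need calipers and balance checks.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
n = 1000

df = pd.DataFrame({"female": rng.integers(0, 2, n)})
df["age"] = rng.normal(60, 12, n) + 1.0 * df["female"]
df["la_volume"] = rng.normal(45, 10, n) - 3.0 * df["female"]
df["discharged_home"] = rng.integers(0, 2, n)
covariates = ["age", "la_volume"]

# 1) Propensity of being female given baseline covariates.
ps = LogisticRegression(max_iter=1000).fit(df[covariates], df["female"])
df["ps"] = ps.predict_proba(df[covariates])[:, 1]

# 2) Match each woman to the man with the closest propensity score
#    (with replacement, no caliper: a deliberate simplification).
women, men = df[df["female"] == 1], df[df["female"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(men[["ps"]])
_, idx = nn.kneighbors(women[["ps"]])
matched = pd.concat([women, men.iloc[idx.ravel()]])

# 3) Compare an outcome in the matched cohort.
print(matched.groupby("female")["discharged_home"].mean())
```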
abstract_id: PUBMED:23884089
Preoperative left ventricular function in degenerative mitral valve disease. Aim: The aim of the study is to determine the impact of the underlying etiology (Barlow's disease or fibroelastic deficiency) on left ventricular function in patients with degenerative mitral valve disease and severe mitral regurgitation.
Methods: We studied 233 patients (mean age: 53.8 ± 12.9) undergoing surgery for severe mitral regurgitation due to degenerative mitral valve disease at Almazov Federal Heart Centre between 2009 and 2011. Pathologic diagnoses for valvular tissue specimens were provided by an experienced pathologist. Preoperative strain and strain rate were determined using speckle tracking (Vivid 7 Dimension, EchoPAC'08).
Results: Barlow's disease was identified by the pathologist in 60 patients (25.8%), and fibroelastic deficiency in 173 patients (74.2%). There were no significant differences between groups in preoperative mitral regurgitation volume (70.5 ± 9.6 vs. 71.6 ± 8.5 ml, P = 0.40), and in global systolic (ejection fraction: 52.7 ± 6.6 vs. 52.0 ± 7.4%, P = 0.53) and diastolic (E/e': 12.2 ± 3.9 vs. 12.8 ± 4.2, P = 0.35) left ventricular function. Despite the lack of difference in ejection fraction and diastolic tissue Doppler parameters, in patients with Barlow's disease in comparison with fibroelastic deficiency a significant decrease of the left ventricular longitudinal systolic strain (-13.5 ± 2.2 vs. -15.6 ± 2.3%, P = 0.00001) and early diastolic strain rate (1.04 ± 0.20 vs. 1.14 ± 0.18 s, P = 0.0004) were detected.
Conclusion: Patients with severe mitral regurgitation due to Barlow's disease have a lower preoperative left ventricular systolic function than those with fibroelastic deficiency, which may affect their postoperative prognosis.
abstract_id: PUBMED:30524564
Left atrial thrombus in a patient without mitral valve disease or atrial fibrillation. A 54-year-old man presented with back pain. His medical history included hypertension and gout. There was no history of heart disease or arrhythmia. The electrocardiogram showed normal sinus rhythm. Chest computed tomography demonstrated a large calcified tumor (65 mm) in the left atrium (LA). The echocardiogram showed a round hyperechoic mass in the enlarged LA (56 mm) attached to the atrial septum without mitral valve disease. Urgent surgery for excision of the LA mass with the atrial septum and reconstruction by autologous pericardial patch was performed. There was no pathological change in the mitral valve. Due to surgical injury to the conduction system, implantation of a permanent pacemaker was required postoperatively. Histopathological examination revealed calcification, fibrosis, and thrombus formation. LA thrombus without any history of mitral valve disease or atrial fibrillation is rare. Although the mechanism of the present case was unclear, extensive calcified LA myxoma or undiagnosed patent foramen ovale might have been associated with the disease. <Learning objective: A smooth surface, floating left atrial "ball thrombus" occurs rarely in patients with mitral valve disease or atrial fibrillation. We present a rare case of a giant round left atrial thrombus in a patient without any history of mitral valve disease or atrial fibrillation. Transesophageal echocardiogram showed that the thrombus was round, fixed to the septum, and not floating, and that its surface was calcified. This disease in this patient might have been associated with extensive calcified left atrial myxoma, paroxysmal atrial fibrillation, or undiagnosed patent foramen ovale.>.
abstract_id: PUBMED:31416529
Prognostic Implications of Left Atrial Enlargement in Degenerative Mitral Regurgitation. Background: Left atrial enlargement is frequent in degenerative mitral regurgitation (DMR), but its link to outcomes remains unproven in routine clinical practice.
Objectives: The purpose of this study was to assess whether left atrial volume index (LAVI) measured in routine clinical practice of multiple sonographers/cardiologists is associated independently with DMR survival.
Methods: A cohort of 5,769 (63 ± 16 years, 47% women) consecutive patients with degenerative mitral valve disease, in whom LAVI was prospectively measured, was enrolled and the long-term survival was analyzed.
Results: LAVI (43 ± 24 ml/m2) was widely distributed (<40 ml/m2 in 3,154 patients, 40 to 59 ml/m2 in 1,606, and ≥60 ml/m2 in 1,009). Overall survival throughout follow-up (10-year 66 ± 1%) was strongly associated with LAVI (79 ± 1% vs. 65 ± 2% and 54 ± 2% for LAVI <40, 40 to 59, and ≥60 ml/m2, respectively; p < 0.0001) even after comprehensive adjustment, including for DMR severity (adjusted hazard ratio [HR]: 1.05 [95% confidence interval (CI): 1.03 to 1.08] per 10 ml/m2; p < 0.0001). Mortality under medical management was profoundly affected by LAVI (adjusted HR: 1.07 [95% CI: 1.04 to 1.10] per 10 ml/m2 and 1.55 [95% CI: 1.31 to 1.84] for LAVI ≥60 ml/m2 vs. <40 ml/m2; both p < 0.0001) incremental to adjusting variables (p < 0.0001) and in all subgroups, particularly sinus rhythm (adjusted HR: 1.25 [95% CI: 1.21 to 1.28]) or atrial fibrillation (adjusted HR: 1.10 [95% CI: 1.06 to 1.13] per 10 ml/m2; both p < 0.0001). Thresholds of excess mortality in spline curve analysis were approximated at 40 ml/m2 in all subgroups. Survival markedly improved after mitral surgery (time-dependent adjusted HR: 0.43 [95% CI: 0.36 to 0.53]; p < 0.0001) but remained modestly linked to LAVI (10-year survival 85 ± 3% vs. 86 ± 2% and 75 ± 3% for LAVI <40, 40 to 59, and ≥60 ml/m2, respectively; p < 0.0001).
Conclusions: The frequent left atrial enlargement of DMR as measured by LAVI in routine practice displays, overall and in all subsets, a powerful, incremental, and independent link to excess mortality, which is partially alleviated by mitral surgery. Hence, LAVI measurement should be part of routine DMR evaluation and the clinical decision-making process.
abstract_id: PUBMED:35470696
Incremental Prognosis by Left Atrial Functional Assessment: The Left Atrial Coupling Index in Patients With Floppy Mitral Valves. Background Emerging data suggest important prognostic value to left atrial (LA) characteristics, but the independent impact of LA function on outcome remains unsubstantiated. Thus, we aimed to define the incremental prognostic value of LA coupling index (LACI), coupling volumetric and mechanical LA characteristics and calculated as the ratio of left atrial volume index to tissue Doppler imaging a', in a large cohort of patients with isolated floppy mitral valve. Methods and Results All consecutive 4792 patients (61±16 years, 48% women) with isolated floppy mitral valve in sinus rhythm diagnosed at Mayo Clinic from 2003 to 2011, comprehensively characterized and with prospectively measured left atrial volume index and tissue Doppler imaging a' in routine practice, were enrolled, and their long-term survival analyzed. Overall, LACI was 5.8±3.7 and was <5 in 2422 versus ≥5 in 2370 patients. LACI was independently higher with older age, more mitral regurgitation (no 3.8±2.3, mild 5.1±3.0, moderate 6.5±3.8, and severe 7.8±4.3), and with diastolic (higher E/e') and systolic (higher end-systolic dimension) left ventricular dysfunction (all P≤0.0001). At diagnosis, higher LACI was associated with more severe presentation (more dyspnea, more severe functional tricuspid regurgitation, and elevated pulmonary artery pressure, all P≤0.0001) independently of age, sex, comorbidity index, ventricular function, and mitral regurgitation severity. During 7.0±3.0 years follow-up, 1146 patients underwent mitral valve surgery (94% repair, 6% replacement), and 880 died, 780 under medical management. In spline curve analysis, LACI ≥5 was identified as the threshold for excess mortality, with much reduced 10-year survival under medical management (60±2% versus 85±1% for LACI <5, P<0.0001), even after comprehensive adjustment (adjusted hazard ratio, 1.30 [95% CI, 1.10-1.53] for LACI ≥5; P=0.002). Association of LACI ≥5 with higher mortality persisted, stratifying by mitral regurgitation severity of LA enlargement grade (all P<0.001) and after propensity-score matching (P=0.02). Multiple statistical methods confirmed the significant incremental predictive power of LACI over left atrial volume index (all P<0.0001). Conclusions LA functional assessment by LACI in routine practice is achievable in a large number of patients with floppy mitral valve using conventional Doppler echocardiographic measurements. Higher LACI is associated with worse clinical presentation, but irrespective of baseline characteristics, LACI is strongly, independently, and incrementally determinant of outcome, demonstrating the crucial importance of LA functional response to mitral valve disease.
abstract_id: PUBMED:33399531
Left Atrial Mechanical Function Predicts Postoperative AF in Patients with Rheumatic Mitral Valve Disease Who Underwent Mitral Valve Surgery. Objective: Postoperative atrial fibrillation (PoAF) is a common complication after surgical mitral valve replacement. Late PoAF is independently associated with long-term mortality. This study aimed to test the utility of preoperative left atrial mechanical function as a predictor of early and late PoAF in clinical practice.
Methods: Patients (N = 150) with a rheumatic mitral valve who underwent mitral valve replacement with or without tricuspid valvuloplasty and who were in stable sinus rhythm were included. Baseline characteristics and transthoracic echocardiographic assessment information on the day before surgery were collected. Em, Em´, and Ei´ indicate early diastolic peak velocity of the mitral valve, early diastolic velocity at the lateral wall of the mitral annulus, and early diastolic velocity at the interventricular septal annulus, respectively.
Results: Early PoAF was present in 59 of 150 patients (39.3%), and 32 of 150 patients (21.3%) developed late PoAF within 1 year after surgery. Among all of the variables examined, age, diabetes, early mitral filling velocity (Mitral E), left atrial mechanical function (Mitral A), Em/Em´, Em/Ei´, and mitral transvalvular gradient showed a significant correlation with PoAF. Only age, Mitral A, and mitral transvalvular gradient showed strong, significant correlations with the occurrence of late PoAF. In a multivariate analysis, predictors of late PoAF recurrence included early PoAF and Mitral A.
Conclusion: Routine evaluation of Mitral A is feasible and useful to predict early and late PoAF in patients with a rheumatic mitral valve undergoing surgical mitral valve replacement.
Answer: Yes, pre-operative left atrial volume should receive more consideration in patients with degenerative mitral valve disease undergoing mitral valve surgery. Studies have shown that Left Atrial Volume Indexed (LAVI) is a predictor of adverse events after mitral valve surgery. Specifically, pre-operative LAVI was found to be an independent risk factor for Post-Operative Atrial Fibrillation (POAF), late AF (LAF), and reverse atrial and ventricular remodeling (LARR and LVRR), which are associated with long-term outcomes (PUBMED:27855288). Additionally, left atrial enlargement, as measured by LAVI in routine practice, has been shown to have a powerful, incremental, and independent link to excess mortality, which is partially alleviated by mitral surgery (PUBMED:31416529). Furthermore, the Left Atrial Coupling Index (LACI), which couples volumetric and mechanical left atrial characteristics, has been identified as a strong, independent, and incremental determinant of outcome, emphasizing the importance of left atrial functional response to mitral valve disease (PUBMED:35470696). Therefore, incorporating LAVI measurement into routine evaluation of patients with degenerative mitral valve disease is recommended for clinical decision-making processes. |
Instruction: Secular trends in kidney disease: is the decreased incidence of renal replacement therapy due to a decrease in chronic kidney disease incidence?
Abstracts:
abstract_id: PUBMED:18793559
Secular trends in kidney disease: is the decreased incidence of renal replacement therapy due to a decrease in chronic kidney disease incidence? Aims: Little is known about trends in renal replacement therapy among patients with chronic kidney disease (CKD) or about changes in the incidence of CKD. We studied the incidence of renal replacement therapy in a health maintenance organization (HMO), both among the entire HMO population and among those with CKD.
Methods: We calculated yearly incidence rates of renal replacement therapy for each year from 1998 to 2005. We defined CKD using the National Kidney Foundation definition of 2 estimated glomerular filtration rates below 60 ml/min/1.73 m2 90 or more days apart. Poisson regression assessed year-to-year differences.
Results: The number of patients with CKD rose consistently from 3,861 in 1998 to 5,242 in 2005. The proportion of patients who had been diagnosed with hypertension rose from 86.7% (among those starting renal replacement therapy) and 34.5% (among those with CKD) to 99.1% and 46.9%, respectively. The proportion of patients with diabetes changed little throughout the years studied. The mean estimated glomerular filtration rate among CKD patients rose minimally from 38.4 ml/min/1.73 m2 in 1998 to 39.9 ml/min/1.73 m2 in 2005. Age- and sex-adjusted rates of RRT among patients with CKD varied (p=0.0034), but did not follow a consistent pattern over time.
Conclusions: Incidence of renal replacement therapy among patients with CKD changed little between 1998 and 2005, despite an increase in the number of patients diagnosed with CKD. The discrepancy may be due to increased laboratory identification of CKD.
abstract_id: PUBMED:16688116
Stabilized incidence of diabetic patients referred for renal replacement therapy in Denmark. Despite an improvement in diabetes care during the last 20 years, the number of diabetic patients starting renal replacement therapy (RRT) has continued to increase in the Western world. The aim was to study the incidence of patients starting RRT in Denmark from 1990 to 2004. Data were obtained from the Danish National Registry: Report on Dialysis and Transplantation, where all patients actively treated for end-stage renal disease have been registered since 1990. The incidence of end-stage renal disease increased until 2001. Thereafter the incidence stabilized at 130 per million people (pmp). The number of diabetic patients starting RRT increased steadily: 52 patients in 1990, 113 in 1995, 150 in 2000, 168 in 2001, and 183 in 2002. However, during the years 2003 and 2004 this number was significantly reduced by 15% to 156 and 155, respectively. This was mainly due to a 22% reduction in the number of non-insulin-treated (type II) diabetic patients: 98, 82, and 76 in 2002, 2003, and 2004, respectively. The mean age in the background population, the mean age in diabetic patients starting RRT and the incidence of type I and type II diabetes increased during the study period. The encouraging stabilization in the incidence of diabetic patients referred for RRT observed in Denmark could be the result of implementation of a multifactorial and more intensive renoprotective intervention in patients with diabetes and chronic progressive renal disease.
abstract_id: PUBMED:36461735
Trends in the incidence of renal replacement therapy by type of primary kidney disease in Japan, 2006-2020. Aim: Age-standardized incidence of end stage kidney disease requiring renal replacement therapy (RRT) has stabilized in men and declined in women in Japan since 1996. However, recent trends by primary kidney disease are unknown. The present study aimed to examine recent trends in incidence rates of RRT by primary kidney disease in Japan.
Methods: Numbers of incident RRT patients aged ≥20 years by sex and primary kidney disease from 2006 to 2020 were extracted from the Japanese Society of Dialysis Therapy registry. Using the census population as the denominator, annual incidence rates of RRT were calculated and standardized to the WHO World Standard Population (2000-2025). Average annual percentage change (AAPC) and corresponding 95% confidence intervals (CIs) were calculated for trends using Joinpoint regression analysis.
Results: From 2006 to 2020, the crude number of incident RRT patients due to nephrosclerosis increased by 132% for men and 62% for women. Age-standardized incidence rates of RRT due to nephrosclerosis increased significantly, by 3.3% (95% CI: 2.9-3.7) and 1.4% (95% CI: 0.8-1.9) per year for men and women, respectively. Incidence rates of RRT due to chronic glomerulonephritis (AAPC -4.4% [95% CI: -5.3 to -3.8] for men and -5.1% [95% CI: -5.5 to -4.6] for women) and diabetic nephropathy (AAPC -0.6% [95% CI: -0.9 to -0.3] for men and -2.8% [95% CI: -3.1 to -2.6] for women) decreased significantly from 2006 to 2020.
Conclusion: Incident RRT due to chronic glomerulonephritis and diabetic nephropathy decreased, while the number and incident rates of RRT due to nephrosclerosis increased, from 2006 to 2020 in Japan.
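The registry methods above rest on two standard epidemiological calculations: direct age standardization of incidence rates to a reference population, and annual percent change estimated from a log-linear trend (Joinpoint regression generalizes this by letting the slope change at break points). The sketch below illustrates both; the age bands, weights, and rates are invented for demonstration and are not the Japanese registry data or the WHO standard weights.

```python
# Hypothetical illustration of direct age standardization and of a
# log-linear estimate of annual percent change (APC).
import numpy as np

# Direct standardization: age-specific rates weighted by a standard
# population whose weights sum to 1.
age_specific_rates = np.array([5.0, 40.0, 180.0, 600.0])  # per million, invented
standard_weights = np.array([0.35, 0.30, 0.22, 0.13])      # invented weights
standardized_rate = float(np.sum(age_specific_rates * standard_weights))
print(f"age-standardized rate: {standardized_rate:.1f} per million")

# Annual percent change: fit ln(rate) = a + b*year, then APC = 100*(exp(b)-1).
years = np.arange(2006, 2021)
rates = 100 * np.exp(0.03 * (years - 2006))  # fabricated 3%/year upward trend
slope = np.polyfit(years, np.log(rates), 1)[0]
apc = 100 * (np.exp(slope) - 1)
print(f"annual percent change: {apc:.1f}% per year")
```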
abstract_id: PUBMED:24821288
Prevalence and incidence of chronic kidney disease stage G5 in Japan. The prevalence and incidence of end-stage kidney disease (ESKD) have continued to increase worldwide. Japan was known as having the highest prevalence of ESKD in the world; however, Taiwan took this place in 2001, with the USA still in third position. However, the prevalence data from Japan and Taiwan consisted of dialysis patients only. The prevalence and incidence of Kidney Transplantation (KT) in Japan were quite low, and the number of KT patients among those with ESKD was regarded as negligibly small. However, the number of KT recipients has increased recently. Furthermore, there are no reports about nationwide surveys on the prevalence and incidence of predialysis chronic kidney failure patients in Japan. This review describes our recent study on the estimated number of chronic kidney disease (CKD) stage G5 patients and the number of ESKD patients living in Japan, obtained via the cooperation of five related medical societies. From the results, as of Dec 31, 2007, 275,242 patients had received dialysis therapy and 10,013 patients had a functional transplanted kidney, and as of Dec 31, 2008, 286,406 patients had received dialysis therapy and 11,157 patients had a functional transplanted kidney. Consequently, there were 285,255 patients with CKD who reached ESKD and were living in Japan in 2008 and 297,563 in 2009. We also estimated that there were 67,000 predialysis CKD stage G5 patients in 2009, 37,365 patients introduced to dialysis therapy, and 101 patients who received pre-emptive renal transplantation in this year. In total, there were 37,466 patients who newly required renal replacement therapy (RRT) in 2009. Not only the average ages, but also the primary renal diseases of the new ESKD patients in each RRT modality were different.
abstract_id: PUBMED:32434496
Incidence of acute kidney injury and use of renal replacement therapy in intensive care unit patients in Indonesia. Background: Currently, there is limited epidemiology data on acute kidney injury (AKI) in Indonesia. Therefore, we assessed the incidence of AKI and the utilization of renal replacement therapy (RRT) in Indonesia.
Methods: Demographic and clinical data were collected from 952 ICU participants. The participants were categorized into AKI and non-AKI groups. The participants were further classified according to the 3 different stages of AKI as per the Kidney Disease Improving Global Outcome (KDIGO) criteria.
Results: Overall incidence of AKI was 43%. The participants were divided into three groups based on the AKI stages: 18.5% had stage 1, 33% had stage 2, and 48.5% had stage 3. Primary diagnosis of renal disease and high APACHE II score were the risk factors associated with AKI (OR = 4.53, 95% CI: 1.67-12.33, p = 0.003 and OR = 1.14 per 1 unit increase, 95% CI: 1.09-1.20, p < 0.001, respectively). Chronic kidney disease was the risk factor for severe AKI. Sepsis was the leading cause of AKI. Among the AKI participants, 24.6% required RRT. The most common RRT modalities were intermittent hemodialysis (71.7%), followed by slow low-efficiency dialysis (22.8%), continuous renal replacement therapy (4.3%), and peritoneal dialysis (1.1%).
Conclusions: This study showed that AKI was a common problem in the Indonesian ICU. We strongly believe that identification of the risk factors associated with AKI will help us develop a predictive score for AKI so we can prevent and improve AKI outcome in the future.
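The staging referenced above follows the KDIGO criteria. As a rough illustration, the sketch below classifies AKI stage from serum creatinine alone; it encodes only the creatinine thresholds (1.5-1.9 times baseline or a rise of at least 0.3 mg/dL for stage 1, 2.0-2.9 times for stage 2, at least 3.0 times baseline, creatinine of 4.0 mg/dL or more, or initiation of RRT for stage 3) and deliberately omits the urine-output criteria and the 48-hour and 7-day timing windows that a full implementation would require.

```python
# Simplified KDIGO AKI staging from serum creatinine only (mg/dL).
# Omits urine-output criteria and the timing windows of the full definition.
def kdigo_stage(baseline_cr: float, current_cr: float, on_rrt: bool = False) -> int:
    """Return KDIGO AKI stage 0-3 (0 = no AKI by these simplified criteria)."""
    ratio = current_cr / baseline_cr
    if on_rrt or ratio >= 3.0 or current_cr >= 4.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5 or (current_cr - baseline_cr) >= 0.3:
        return 1
    return 0

# Example: creatinine rising from 0.9 to 2.1 mg/dL (2.3x baseline) -> stage 2.
print(kdigo_stage(0.9, 2.1))
```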
abstract_id: PUBMED:30456883
International comparison of trends in patients commencing renal replacement therapy by primary renal disease. Aim: To examine international time trends in the incidence of renal replacement therapy (RRT) for end-stage renal disease (ESRD) by primary renal disease (PRD).
Methods: Renal registries reporting on patients starting RRT per million population for ESRD by PRD from 2005 to 2014, were identified by internet search and literature review. The average annual percentage change (AAPC) with a 95% confidence interval (CI) of the time trends was computed using Joinpoint regression.
Results: There was a significant decrease in the incidence of RRT for ESRD due to diabetes mellitus (DM) in Europe (AAPC = -0.9; 95%CI -1.3; -0.5) and to hypertension/renal vascular disease (HT/RVD) in Australia (AAPC = -1.8; 95%CI -3.3; -0.3), Canada (AAPC = -2.9; 95%CI -4.4; -1.5) and Europe (AAPC = -1.1; 95%CI -2.1; -0.0). A decrease or stabilization was observed for glomerulonephritis in all regions and for autosomal dominant polycystic kidney disease (ADPKD) in all regions except for Malaysia and the Republic of Korea. An increase of 5.2-16.3% was observed for DM, HT/RVD and ADPKD in Malaysia and the Republic of Korea.
Conclusion: Large international differences exist in the trends in incidence of RRT by primary renal disease. Mapping of these international trends is the first step in defining the causes and successful preventative measures of CKD.
abstract_id: PUBMED:22923546
Socio-economic status and incidence of renal replacement therapy: a registry study of Australian patients. Background: Socio-economic disadvantage has been linked to higher incidence of end-stage kidney disease in developed countries. Associations between socio-economic status (SES) and incidence of renal replacement therapy (RRT) have not been explored for different kidney diseases, genders or age groups in a country with universal access to healthcare.
Methods: We investigated the incidence of non-indigenous patients commencing RRT in Australia in 2000-09, using the Australia and New Zealand Dialysis and Transplant (ANZDATA) Registry. Patient postcodes were grouped into deciles using a standard SES index. We analysed incidence by five groups of kidney diseases, age groups, gender and geographic remoteness.
Results: Incidence of RRT decreased with increasing area advantage. Differences were most evident for the most disadvantaged areas [markedly increased burden; incident rate ratio (IRR) 1.27; 95% confidence interval (CI) 1.18-1.38] and most advantaged decile (decreased burden, IRR 0.76; 95% CI 0.72-0.81), compared with decile 5. Patients with diabetic nephropathy showed the greatest disparities: residents of the most disadvantaged decile were 2.38 (95% CI 2.09-2.71) times more at risk than the most advantaged decile. Congenital and genetic kidney diseases showed lesser gradients: the most disadvantaged decile was 1.28 times (95% CI 0.98-1.68) more at risk. SES was not associated with incidence for patients older than 69 years.
Discussion: These SES gradients existed, despite all Australians having access to healthcare. Diseases associated with lifestyle show the greatest gradients with SES.
abstract_id: PUBMED:9496579
Renal replacement therapy for chronic renal failure patients in the Languedoc-Roussillon region in 1994 Background: Unbiased and reliable data are presently required for health planning concerning end stage renal diseases (ESRD) in Languedoc-Roussillon region of France.
Methods: A comprehensive retrospective study has been carried out on patients with ESRD in 1994 in this area. Information was collected from medical and social documents by physicians. The present report describes the management of patients and their demographic and epidemiologic characteristics. Multiple correspondence analysis was carried out to estimate to what extent mode of renal replacement therapy is determined by patient characteristics.
Results: An incidence of 11.4 new cases of renal replacement therapy per 100,000 inhabitants was found, representing an increase of 4.8% in the total number of patients. The patients were found to be elderly (25% being over 72 years) and to present with multiple pathologies (32.5% severe cardiac pathology; 20.7% arteritis of the lower limbs; 15.1% diabetes; 11.2% malignant tumors). Only 57.5% received dialysis within a hospital setting; 30.1% received dialysis at home; 13% performed autodialysis; 1.2% were being trained for home dialysis in December. The renal transplantation rate was 5.5%. No significant relationship was found between choice of therapy and age, renal disease, comorbidities, or place of dwelling.
Conclusions: This study demonstrates the great variety in the modes of treatment used, the facilities provided and the evolutive trend, which together make programming planning difficult.
abstract_id: PUBMED:24332762
Renal replacement therapy due to type 1 diabetes; time trends during 1995-2010--a Swedish population based register study. Background: End stage renal disease (ESRD), is the most severe complication of diabetes mellitus. This population-based study analysed time trends for start of renal replacement therapy (RRT) due to type 1 diabetes compared to type 2 diabetes and other diagnoses.
Material And Methods: We used data on patients who were registered 1995-2010 in the Swedish Renal Registry, a nationwide register covering 95 % of all patients with uraemia. The patients were analysed according to their original kidney disease. The incidence was analysed by calendar year, age at start of RRT and gender.
Results: Of 17,389 patients who were registered, 1,833 had type 1 diabetes; 65% were men. The mean age at onset of RRT for patients with type 1 diabetes was 52.8 years, which increased by more than 3 years over the study period. The number of patients in need of RRT due to type 1 diabetes decreased, while RRT due to type 2 diabetes increased during the period studied.
Conclusions: The overall incidence of RRT in Sweden is rather constant over the years but the need for RRT in type 1 diabetes patients decreased and patients with type 1 diabetes tend to become older at onset of RRT.
abstract_id: PUBMED:26361801
The changing trends and outcomes in renal replacement therapy: data from the ERA-EDTA Registry. Background: This study examines the time trends in incidence, prevalence, patient and kidney allograft survival and causes of death (COD) in patients receiving renal replacement therapy (RRT) in Europe.
Methods: Eighteen national or regional renal registries providing data to the European Renal Association-European Dialysis and Transplant Association Registry between 1998 and 2011 were included. Incidence and prevalence time trends between 2001 and 2011 were studied with Joinpoint and Poisson regression. Patient and kidney allograft survival and COD between 1998 and 2011 were analysed using Kaplan-Meier and competing risk methods and Cox regression.
Results: From 2001 to 2008, the adjusted incidence of RRT rose by 1.1% (95% CI: 0.6, 1.7) annually to 131 per million population (pmp). During 2008-2011, the adjusted incidence fell by 2.2% (95% CI: -4.2, -0.2) annually to 125 pmp. This decline occurred predominantly in patients aged 45-64 years, 65-74 years and in the primary renal diseases diabetes mellitus type 1 and 2, renovascular disease and glomerulonephritis. Between 2001 and 2011, the overall adjusted prevalence increased from 724 to 1032 pmp (+3.3% annually, 95% CI: 2.8, 3.8). The adjusted 5-year patient survival on RRT improved between 1998-2002 and 2003-2007 [adjusted hazard ratio (HRa) 0.85, 95% CI: 0.84, 0.86]. Comparing these time periods, the risk of cardiovascular deaths fell by 25% (HRa 0.75, 95% CI: 0.74, 0.77). However the risk of malignant death rose by 9% (HRa 1.09, 95% CI: 1.03, 1.16) in patients ≥65 years.
Conclusion: This European study shows a declining RRT incidence, particularly in patients aged 45-64 years, 65-74 years and secondary to diabetic nephropathy. Encouragingly, the adjusted RRT patient survival continues to improve. The risk of cardiovascular death has decreased, though the risk of death from malignancy has increased in the older population.
Answer: The abstracts provided offer insights into the trends in kidney disease and the incidence of renal replacement therapy (RRT). The incidence of RRT among patients with chronic kidney disease (CKD) changed little between 1998 and 2005, despite an increase in the number of patients diagnosed with CKD, suggesting that the decrease in RRT incidence may not be due to a decrease in CKD incidence but rather to increased laboratory identification of CKD (PUBMED:18793559). In Denmark, the incidence of diabetic patients starting RRT stabilized from 2001 onwards, which could be attributed to more intensive renoprotective interventions in patients with diabetes and chronic progressive renal disease (PUBMED:16688116). In Japan, the incidence rates of RRT due to nephrosclerosis increased, while incident RRT due to chronic glomerulonephritis and diabetic nephropathy decreased from 2006 to 2020 (PUBMED:36461735).
The prevalence and incidence of CKD stage G5 in Japan have been increasing, with a notable rise in the number of kidney transplant recipients (PUBMED:24821288). In Indonesia, the incidence of acute kidney injury (AKI) was found to be high in intensive care unit (ICU) patients, with a significant proportion requiring RRT (PUBMED:32434496). An international comparison showed a decrease in the incidence of RRT for ESRD due to diabetes mellitus in Europe and to hypertension/renal vascular disease in several countries, with large international differences in trends by primary renal disease (PUBMED:30456883).
Socio-economic status was linked to the incidence of RRT in Australia, with higher incidence in more disadvantaged areas, particularly for diabetic nephropathy (PUBMED:22923546). In the Languedoc-Roussillon region of France, an increase in the total number of patients requiring RRT was observed in 1994, with no significant relationship between choice of therapy and patient characteristics (PUBMED:9496579). In Sweden, the need for RRT due to type 1 diabetes decreased, and patients with type 1 diabetes became older at the onset of RRT (PUBMED:24332762). |
Instruction: The western corn rootworm, a new threat to European agriculture: opportunities for biotechnology?
Abstracts:
abstract_id: PUBMED:27218412
RNAi as a management tool for the western corn rootworm, Diabrotica virgifera virgifera. The western corn rootworm (WCR), Diabrotica virgifera virgifera, is the most important pest of corn in the US Corn Belt. Economic estimates indicate that costs of control and yield loss associated with WCR damage exceed $US 1 billion annually. Historically, corn rootworm management has been extremely difficult because of its ability to evolve resistance to both chemical insecticides and cultural control practices. Since 2003, the only novel commercialized developments in rootworm management have been transgenic plants expressing Bt insecticidal proteins. Four transgenic insecticidal proteins are currently registered for rootworm management, and field resistance to proteins from the Cry3 family highlights the importance of developing traits with new modes of action. One of the newest approaches for controlling rootworm pests involves RNA interference (RNAi). This review describes the current understanding of the RNAi mechanisms in WCR and the use of this technology for WCR management. Further, the review addresses ecological risk assessment of RNAi and insect resistance management of RNAi for corn rootworm. © 2016 Society of Chemical Industry.
abstract_id: PUBMED:20730987
The western corn rootworm, a new threat to European agriculture: opportunities for biotechnology? Background: During the early 1990s, the western corn rootworm, Diabrotica virgifera virgifera Le Conte (WCR), a maize pest, invaded the European continent. The continuous spread of the pest has introduced a new constraint into European maize production. As the damage caused by the invasive species is highly variable and different crop protection (CP) strategies are available, farmers' optimal strategies are not obvious. This study uses a simulation model to assess the competitiveness of different CP strategies in seven Central European countries.
Results: Results indicate a high degree of heterogeneity in the profitability of different CP strategies, depending on the production parameters in each country. In general, crop rotation and Bt maize offer the best solutions to farmers, but, in continuous (non-rotated) maize cultivation, chemical CP options may capture part of the market. For Austrian continuous maize production, it was found that not deregulating Bt maize implies that farmers forego revenues of up to €59 per hectare.
Conclusions: In the presence of WCR, producing maize by an economically sound method requires incorporating country- and farm-specific characteristics into the decision framework. Also, not deregulating Bt maize has direct monetary consequences for many farmers that could influence total maize output and resistance management.
abstract_id: PUBMED:30649522
Effects of Cold Storage on Nondiapausing Eggs of the Western Corn Rootworm (Coleoptera: Chrysomelidae). Western corn rootworm, Diabrotica virgifera virgifera LeConte, became much easier to research with the development of a nondiapausing rootworm strain. In the event that the eggs cannot be used immediately, researchers have been known to delay egg hatch by storing the eggs at low temperatures. It is not well known how this technique could affect egg hatch or larval development, which could alter the results of an experiment. To address this, nondiapausing eggs of the western corn rootworm were stored at low temperatures and examined for potential negative effects on hatch and larval development. Eggs were stored in either soil or agar and placed in refrigerators set to 4 or 8.5°C. Nondiapausing eggs were exposed to the cold for 1, 2, or 4 wk and then placed in a chamber set to 25°C. Eggs were then tested for average hatch percentage in Petri dishes and average larval recovery from containers with seedling corn. Results showed a significant reduction in percent hatch for eggs stored at 4°C for 4 wk. Larval recovery was significantly reduced in eggs stored for 4 wk at both 4 and 8.5°C. Within the treatments tested, egg storage for less than 4 wk in soil at 8.5°C provided the best hatch and larval recovery. Researchers wishing to store eggs may use these results to improve their rearing or testing of western corn rootworm.
abstract_id: PUBMED:34323975
Characterizing the Relationship Between Western Corn Rootworm (Coleoptera: Chrysomelidae) Larval Survival on Cry3Bb1-Expressing Corn and Larval Development Metrics. The western corn rootworm, Diabrotica virgifera virgifera LeConte (Coleoptera: Chrysomelidae), is a significant pest of field corn, Zea mays L. (Poales: Poaceae), across the United States Corn Belt. Widespread adoption and continuous use of corn hybrids expressing the Cry3Bb1 protein to manage the western corn rootworm has resulted in greater than expected injury to Cry3Bb1-expressing hybrids in multiple areas of Nebraska. Single-plant bioassays were conducted on larval western corn rootworm populations to determine the level of resistance present in various Nebraska counties. The results confirmed a mosaic of susceptibility to Cry3Bb1 across Nebraska. Larval development metrics, including head capsule width and fresh weight, were measured to quantify the relationship between the level of resistance to Cry3Bb1 and larval developmental rate. Regression and correlation analyses indicate a significant positive relationship between Cry3Bb1 corrected survival and both larval development metrics. Results indicate that as the level of resistance to Cry3Bb1 within field populations increases, mean head capsule width and larval fresh weight also increase. This increases our understanding of western corn rootworm population dynamics and age structure variability present in the transgenic landscape that is part of the complex interaction of factors that drives resistance evolution. This collective variability and complexity within the landscape reinforces the importance of making corn rootworm management decisions based on information collected at the local level.
abstract_id: PUBMED:29846650
A Simple and Sensitive Plant-Based Western Corn Rootworm Bioassay Method for Resistance Determination and Event Selection. We report here a simple and sensitive plant-based western corn rootworm, Diabrotica virgifera virgifera LeConte (Coleoptera: Chrysomelidae), bioassay method that allows for examination of multiple parameters for both plants and insects in a single experimental setup within a short duration. For plants, injury to roots can be visually examined, fresh root weight can be measured, and expression of trait protein in plant roots can be analyzed. For insects, in addition to survival, larval growth and development can be evaluated in several aspects including body weight gain, body length, and head capsule width. We demonstrated using the method that eCry3.1Ab-expressing 5307 corn was very effective against western corn rootworm by eliciting high mortality and significantly inhibiting larval growth and development. We also validated that the method allowed determination of resistance in an eCry3.1Ab-resistant western corn rootworm strain. While data presented in this paper demonstrate the usefulness of the method for selection of events of protein traits and for determination of resistance in laboratory populations, we envision that the method can be applied in much broader applications.
abstract_id: PUBMED:19277256
Parasitism of Western Corn Rootworm Larvae and Pupae by Steinernema carpocapsae. Virulence and development of the insect-parasitic nematode, Steinernema carpocapsae (Weiser) (Mexican strain), were evaluated for the immature stages of the western corn rootworm, Diabrotica virgifera virgifera LeConte. Third instar rootworm larvae were five times more susceptible to nematode infection than second instar larvae and 75 times more susceptible than first instar larvae and pupae, based on laboratory bioassays. Rootworm eggs were not susceptible. Nematode development was observed in all susceptible rootworm stages, but a complete life cycle was observed only in second and third instar larvae and pupae. Nematode size was affected by rootworm stage; the smallest infective-stage nematodes were recovered from second instar rootworm larvae. Results of this study suggest that S. carpocapsae should be applied when second and third instar rootworm larvae are predominant in the field.
abstract_id: PUBMED:33671118
Western Corn Rootworm, Plant and Microbe Interactions: A Review and Prospects for New Management Tools. The western corn rootworm, Diabrotica virgifera virgifera LeConte, is resistant to four separate classes of traditional insecticides, all Bacillus thuringiensis (Bt) toxins currently registered for commercial use, crop rotation, innate plant resistance factors, and even double-stranded RNA (dsRNA) targeting essential genes via environmental RNA interference (RNAi), which has not been sold commercially to date. Clearly, additional tools are needed as management options. In this review, we discuss the state-of-the-art knowledge about biotic factors influencing herbivore success, including host location and recognition, plant defensive traits, plant-microbe interactions, and herbivore-pathogen/predator interactions. We then translate this knowledge into potential new management tools and improved biological control.
abstract_id: PUBMED:33607298
Elucidation of the MicroRNA Transcriptome in Western Corn Rootworm Reveals Its Dynamic and Evolutionary Complexity. Diabrotica virgifera virgifera (western corn rootworm, WCR) is one of the most destructive agricultural insect pests in North America. It is highly adaptive to environmental stimuli and crop protection technologies. However, little is known about the underlying genetic basis of WCR behavior and adaptation. More specifically, the involvement of small RNAs (sRNAs), especially microRNAs (miRNAs), a class of endogenous small non-coding RNAs that regulate various biological processes, has not been examined, and the datasets of putative sRNA sequences have not previously been generated for WCR. To achieve a comprehensive collection of sRNA transcriptomes in WCR, we constructed, sequenced, and analyzed sRNA libraries from different life stages of WCR and northern corn rootworm (NCR), and identified 101 conserved precursor miRNAs (pre-miRNAs) in WCR and other Arthropoda. We also identified 277 corn rootworm specific pre-miRNAs. Systematic analyses of sRNA populations in WCR revealed that its sRNA transcriptome, which includes PIWI-interacting RNAs (piRNAs) and miRNAs, undergoes a dynamic change throughout insect development. Phylogenetic analysis of miRNA datasets from model species reveals that a large pool of species-specific miRNAs exists in corn rootworm; these are potentially evolutionarily transient. Comparisons of WCR miRNA clusters to other insect species highlight conserved miRNA-regulated processes that are common to insects. Parallel Analysis of RNA Ends (PARE) also uncovered potential miRNA-guided cleavage sites in WCR. Overall, this study provides a new resource for studying the sRNA transcriptome and miRNA-mediated gene regulation in WCR and other Coleopteran insects.
abstract_id: PUBMED:27018423
IPM Use With the Deployment of a Non-High Dose Bt Pyramid and Mitigation of Resistance for Western Corn Rootworm (Diabrotica virgifera virgifera). Recent detection of western corn rootworm resistance to Bt (Bacillus thuringiensis) corn prompted recommendations for the use of integrated pest management (IPM) with planting refuges to prolong the durability of Bt technologies. We conducted a simulation experiment exploring the effectiveness of various IPM tools at extending durability of pyramided Bt traits. Results indicate that some IPM practices have greater merits than others. Crop rotation was the most effective strategy, followed by increasing the non-Bt refuge size from 5 to 20%. Soil-applied insecticide use for Bt corn did not increase the durability compared with planting Bt with refuges alone, and both projected lower durabilities. When IPM participation with randomly selected management tools was increased at the time of Bt commercialization, durability of pyramided traits increased as well. When non-corn rootworm expressing corn was incorporated as an IPM option, the durability further increased. For corn rootworm, a local resistance phenomenon appeared immediately surrounding the resistant field (hotspot) and spread throughout the local neighborhood in six generations in absence of mitigation. Hotspot mitigation with random selection of strategies was ineffective at slowing resistance, unless crop rotation occurred immediately; regional mitigation was superior to random mitigation in the hotspot and reduced observed resistance allele frequencies in the neighborhood. As resistance alleles of mobile pests can escape hotspots, the scope of mitigation should extend beyond resistant sites. In the case of widespread resistance, regional mitigation was less effective at prolonging the life of the pyramid than IPM with Bt deployment at the time of commercialization.
abstract_id: PUBMED:32501631
Evaluation of pyrethroids and organophosphates in insecticide mixtures for management of western corn rootworm larvae. Background: The western corn rootworm is an economically important pest of corn. Management tactics include pyrethroid and organophosphate insecticides, which may be applied as a mixture to protect corn roots. The goal of our study was to characterize the effects of pyrethroids and organophosphates alone and in combination on larval corn rootworm mortality and injury to corn roots. We evaluated two insecticide combinations: tebupirimphos with β-cyfluthrin and chlorethoxyfos with bifenthrin. Using a soil-based, laboratory bioassay, we exposed larvae to five concentrations of the pyrethroid alone, the organophosphate alone, the combined formulation, and a water control. We calculated LC50 values and co-toxicity factors to determine synergism or antagonism between organophosphates and pyrethroids. We also measured adult emergence and root injury in a field experiment that tested tebupirimphos alone, β-cyfluthrin alone, the combined formulation, and an untreated control.
Results: Bioassay results indicated antagonism between the pyrethroid and organophosphate at most concentrations for both insecticide combinations. In the field experiment, tebupirimphos alone or in combination with β-cyfluthrin significantly reduced adult emergence and root injury compared to the untreated controls, but β-cyfluthrin alone did not differ from the untreated control for either metric.
Conclusions: These results suggest that, at the concentrations tested, the pyrethroid component of pyrethroid-organophosphate mixtures may not contribute to a reduction of rootworm emergence or root injury. While these pyrethroids may confer a management benefit for other pests, such as seedcorn maggot, the concentrations of pyrethroids present in current formulations of these mixtures are likely too low for effective rootworm management. © 2020 Society of Chemical Industry.
Answer: The western corn rootworm (WCR), Diabrotica virgifera virgifera, has emerged as a significant threat to European agriculture since its invasion in the early 1990s, particularly affecting maize production. The pest's continuous spread has introduced new challenges for crop protection strategies, and the damage caused by WCR is highly variable, making it difficult for farmers to choose the optimal management approach (PUBMED:20730987).
Biotechnology offers several opportunities for managing WCR. Transgenic plants expressing Bt (Bacillus thuringiensis) insecticidal proteins have been the only novel commercialized development in rootworm management since 2003. However, field resistance to proteins from the Cry3 family has been observed, underscoring the need for traits with new modes of action (PUBMED:27218412). RNA interference (RNAi), which involves targeting essential genes in the pest, for example through environmentally delivered double-stranded RNA, is one of the newest approaches being explored for controlling WCR. Although RNAi-based products have not been sold commercially to date, the technology is being assessed for its ecological risk and for insect resistance management (PUBMED:27218412; PUBMED:33671118).
The use of integrated pest management (IPM) strategies in conjunction with Bt crops is recommended to prolong the durability of Bt technologies. Crop rotation has been identified as the most effective IPM strategy, followed by increasing the non-Bt refuge size. However, the use of soil-applied insecticides for Bt corn did not show an increase in durability compared to planting Bt with refuges alone (PUBMED:27018423).
In summary, biotechnology provides several tools for managing WCR, including transgenic Bt crops and RNAi technology. However, resistance management and the use of IPM strategies are crucial to ensure the sustainability and effectiveness of these biotechnological solutions in European agriculture. |
Instruction: Post-stroke fatigue: a problem of altered corticomotor control?
Abstracts:
abstract_id: PUBMED:25367024
Post-stroke fatigue: a deficit in corticomotor excitability? The pathophysiology of post-stroke fatigue is poorly understood although it is thought to be a consequence of central nervous system pathophysiology. In this study, we investigate the relationship between corticomotor excitability and self-reported non-exercise-related fatigue in a chronic stroke population. Seventy first-time non-depressed stroke survivors (60.36 ± 12.4 years, 20 females, 56.81 ± 63 months post-stroke) with minimal motor and cognitive impairment were included in the cross-sectional observational study. Fatigue was measured using two validated questionnaires: Fatigue Severity Scale 7 and Neurological Fatigue Index - Stroke. Perception of effort was measured using a 0-10 numerical rating scale in an isometric biceps hold-task and was used as a secondary measure of fatigue. Neurophysiological measures of corticomotor excitability were performed using transcranial magnetic stimulation. Corticospinal excitability was quantified using resting and active motor thresholds and stimulus-response curves of the first dorsal interosseous muscle. Intracortical M1 excitability was measured using paired pulse paradigms: short and long interval intracortical inhibition in the same hand muscle as above. Excitability of cortical and subcortical inputs that drive M1 output was measured in the biceps muscle using a modified twitch interpolation technique to provide an index of central activation failure. Stepwise regression was performed to determine the explanatory variables that significantly accounted for variance in the fatigue and perception scores. Resting motor threshold (R = 0.384; 95% confidence interval = 0.071; P = 0.036) accounted for 14.7% (R²) of the variation in Fatigue Severity Scale 7. Central activation failure (R = 0.416; 95% confidence interval = -1.618; P = 0.003) accounted for 17.3% (R²) of the variation in perceived effort score. Thus, chronic stroke survivors with high fatigue exhibit high motor thresholds and those who perceive high effort have low excitability of inputs that drive motor cortex output. We suggest that low excitability of both corticospinal output and its facilitatory synaptic inputs from cortical and sub-cortical sites contributes to high levels of fatigue after stroke.
abstract_id: PUBMED:25886778
Post-stroke fatigue: a problem of altered corticomotor control? Objectives: We recently showed that diminished motor cortical excitability is associated with high levels of post-stroke fatigue. Motor cortex excitability impacts movement parameters such as reaction and movement times. We predicted that one or both would be influenced by the presence of post-stroke fatigue.
Methods: 41 first-time stroke survivors (high fatigue n=21, Fatigue Severity Scale 7 (FSS-7) score >5; low fatigue n=20, FSS-7 score <3) participated in the study. Movement times, choice and simple reaction times were measured in all participants.
Results: A three-way ANOVA with fatigue (high and low), task (movement time, simple reaction time and choice reaction time) and hand (affected and unaffected) as the three factors revealed a significant difference in affected (but not unaffected) hand movement times between the high- and low-fatigue groups. Reaction times, however, were not different between the high-fatigue and low-fatigue groups in either the affected or unaffected hand.
Conclusions: Previously, we showed that motor cortex excitability is lower in patients with high post-stroke fatigue. Our current findings suggest that post-stroke fatigue (1) is a problem of movement speed (possibly a consequence of diminished motor cortex excitability) and not movement preparation, and (2) may have a focal origin confined to the lesioned hemisphere. We suggest that low motor cortex excitability in the lesioned hemisphere is a viable therapeutic target in post-stroke fatigue.
abstract_id: PUBMED:28994667
Relative contribution of different altered motor unit control to muscle weakness in stroke: a simulation study. Objective: Chronic muscle weakness impacts the majority of individuals after a stroke. The origins of this hemiparesis are multifaceted, and altered spinal control of the motor unit (MU) pool can lead to muscle weakness. However, the relative contribution of different MU recruitment and discharge organization is not well understood. In this study, we sought to examine these different effects by utilizing an MU simulation with variations set to mimic the changes of MU control in stroke.
Approach: Using a well-established model of the MU pool, this study quantified the changes in force output caused by changes in MU recruitment range and recruitment order, as well as MU firing rate organization at the population level. We additionally expanded the original model to include a fatigue component, which variably decreased the output force with increasing length of contraction. Differences in the force output at both the peak and fatigued time points across different excitation levels were quantified and compared across different sets of MU parameters.
Main Results: Across the different simulation parameters, we found that the main driving factor of the reduced force output was due to the compressed range of MU recruitment. Recruitment compression caused a decrease in total force across all excitation levels. Additionally, a compression of the range of MU firing rates also demonstrated a decrease in the force output mainly at the higher excitation levels. Lastly, changes to the recruitment order of MUs appeared to minimally impact the force output.
Significance: We found that altered control of MUs alone, as simulated in this study, can lead to a substantial reduction in muscle force generation in stroke survivors. These findings may provide valuable insight for both clinicians and researchers in prescribing and developing different types of therapies for the rehabilitation and restoration of lost strength after stroke.
abstract_id: PUBMED:36561032
Comparison of Neurological Manifestations in the Two Waves of COVID-19 Infection: A Cross-Sectional Study. Introduction: Coronavirus Disease-19 (COVID-19) is an ongoing pandemic caused by the highly contagious virus severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2), which has infected millions of people across the world. Most countries have seen a two-wave pattern of the pandemic. The second wave is potentially more challenging due to the high influx of cases, differing properties of the emerging mutants, and other dynamics of the evolving pandemic. Neurological manifestations are common among COVID-19-positive patients. In this context, the present study attempts to compare the neurological manifestations in the first and second waves of COVID-19.
Methodology: A single-center retrospective observational study was undertaken to compare neurological manifestations in the first and second waves of COVID-19. A sample of 1500 patients admitted with COVID-19 in the second wave was included in this study, and the findings were compared with those of 1700 patients in the first wave (data derived from a former study in the same center). A detailed questionnaire addressing co-morbidities, admission details, and clinical features was employed to collect data from the hospital records.
Results: Of 1500 COVID-19 patients in the second wave, 355 (23.7%) had one or more neurological manifestations during their in-patient stay. The most common neurological symptom in the second wave was headache, reported in 216 (14.4%) patients, followed by fatigue in 130 (8.7%), myalgia in 120 (8.0%), smell and taste disorders (STD) in 90 (6.0%), altered sensorium in 40 (2.7%), dizziness in 24 (1.6%), seizures in 34 (2.3%), encephalopathy in 26 (1.7%), and strokes in 13 (0.9%), among others. Compared to the first wave of COVID-19, dizziness (P < 0.001), myalgia (P = 0.001), headache (P < 0.001) and meningoencephalitis (P = 0.01) were more common, while cerebrovascular syndromes (P = 0.001) were less common in the second wave. Mortality in the second-wave neurological subgroup was higher [66 (18.6%)] than in the first-wave neurological subgroup [23 (10%)].
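As an illustration of how such between-wave symptom frequencies can be compared (this is not taken from the study's own analysis), a chi-square test on the corresponding counts might look as follows. The second-wave headache count is from the abstract, while the first-wave count is a hypothetical placeholder because it is reported only in the earlier study referenced by the authors.

from scipy.stats import chi2_contingency

# [with headache, without headache] per wave
second_wave = [216, 1500 - 216]            # counts reported in this abstract
first_wave_hypothetical = [150, 1700 - 150]  # HYPOTHETICAL placeholder counts

chi2, p, dof, expected = chi2_contingency([second_wave, first_wave_hypothetical])
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")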
Conclusion: Meningoencephalitis, headache, and seizures were found to be more common in the second wave than in the first wave. The severity and mortality rate were higher in the second wave.
abstract_id: PUBMED:29284747
Motor dexterity and strength depend upon integrity of the attention-control system. Attention control (or executive control) is a higher cognitive function involved in response selection and inhibition, through close interactions with the motor system. Here, we tested whether influences of attention control are also seen on the lower-level motor functions of dexterity and strength, by examining relationships between attention control and motor performance in healthy-aged and hemiparetic-stroke subjects (n = 93 and 167, respectively). Subjects undertook simple-tracking, precision-hold, and maximum force-generation tasks, with each hand. Performance across all tasks correlated strongly with attention control (measured as distractor resistance), independently of factors such as baseline performance, hand use, lesion size, mood, fatigue, or whether distraction was tested during motor or nonmotor cognitive tasks. Critically, asymmetric dissociations occurred in all tasks, in that severe motor impairment coexisted with normal (or impaired) attention control whereas normal motor performance was never associated with impaired attention control (below a task-dependent threshold). This implies that dexterity and force generation require intact attention control. Subsequently, we examined how motor and attention-control performance mapped to lesion location and cerebral functional connectivity. One component of motor performance (common to both arms), as well as attention control, correlated with the anatomical and functional integrity of a cingulo-opercular "salience" network. Independently of this, motor performance difference between arms correlated negatively with the integrity of the primary sensorimotor network and corticospinal tract. These results suggest that the salience network, and its attention-control function, are necessary for virtually all volitional motor acts while its damage contributes significantly to the cardinal motor deficits of stroke.
abstract_id: PUBMED:27268565
Falls and Fear of Falling After Stroke: A Case-Control Study. Background: Falls are common after stroke, with potentially serious consequences. Few investigations have included age-matched control participants to directly compare fall characteristics between older adults with and without stroke. Further, fear of falling, a significant psychological consequence of falls, has only been examined to a limited degree as a risk factor for future falls in a stroke population.
Objective: To compare the fall history between older adults with and without a previous stroke and to identify the determinants of falls and fear of falling in older stroke survivors.
Design: Case-control observational study.
Setting: Primary teaching hospital.
Participants: Seventy-five patients with stroke (mean age ± standard deviation, 66 ± 7 years) and 50 age-matched control participants with no previous stroke were tested.
Methods: Fall history, fear of falling, and physical, cognitive, and psychological function were assessed. A χ2 test was performed to compare characteristics between groups, and logistic regression was performed to determine the risk factors for falls and fear of falling.
Main Outcome Measures: Fall events in the past 12 months, Fall Efficacy Scale-International, Berg Balance Scale, Functional Ambulation Category, Fatigue Severity Scale, Montreal Cognitive Assessment, and Patient Healthy Questionnaire-9 were measured for all participants. Fugl-Meyer Motor Assessment was used to quantify severity of stroke motor impairments.
Results: Twenty-three patients and 13 control participants reported at least one fall in the past 12 months (P = .58). Nine participants with stroke had recurrent falls (≥2 falls) compared with none of the control participants (P < .01). Participants with stroke reported greater concern for falling than did nonstroke control participants (P < .01). Female gender was associated with falls in the nonstroke group, whereas falls in the stroke group were not significantly associated with any measured outcomes. Fear of falling in the stroke group was associated with functional ambulation level and balance. Functional ambulation level alone explained 22% of variance in fear of falling in the stroke group.
Conclusions: Compared with persons without a stroke, patients with stroke were significantly more likely to experience recurrent falls and fear of falling. Falls in patients with stroke were not explained by any of the outcome measures used, whereas fear of falling was predicted by functional ambulation level. This study has identified potentially modifiable risk factors with which to devise future prevention strategies for falls in patients with stroke.
Level Of Evidence: III.
abstract_id: PUBMED:38188863
Monitoring walking asymmetries and endpoint control in persons living with chronic stroke: Implications for remote diagnosis and telerehabilitation. Objective: The objective of this study was to assess the feasibility of monitoring and diagnosing compromised walking motion in the frontal plane, particularly in persons living with the chronic effects of stroke (PwCS). The study aimed to determine whether active control of walking in the frontal plane could be monitored and provide diagnostic insights into compensations made by PwCS during community living.
Methods: The study recruited PwCS with noticeable walking asymmetries and employed a monitoring method to assess frontal plane motion. Monitoring was conducted both within a single assessment and between assessments. The study aimed to uncover baseline data and diagnostic information about active control in chronic stroke survivors. Data were collected using sensors during 6 minutes of walking and compared between the paretic and non-paretic legs.
Results: The study demonstrated the feasibility of monitoring frontal plane motion and diagnosing disturbed endpoint control (p < 0.0125) in chronic stroke survivors when comparing the paretic leg to the non-paretic leg. A greater variability was observed in the paretic leg (p < 0.0125), and sensors were able to diagnose a stronger coupling of the body with its endpoint on the paretic side (p < 0.0125). Similar results were obtained when monitoring was conducted over a six-minute walking period, and no significant diagnostic differences were found between the two monitoring assessments. Monitoring did not reveal performance fatigue or debilitation over time.
Conclusions: This study's findings indicate that monitoring frontal plane motion is a feasible approach for diagnosing compromised walking motion. The results suggest that individuals with walking asymmetries exhibit differences in endpoint control and variability between their paretic and non-paretic legs. These insights could contribute to more effective rehabilitation strategies and highlight the potential for monitoring compensations during various activities of daily living.
abstract_id: PUBMED:26397231
A model of poststroke fatigue based on sensorimotor deficits. Purpose Of Review: This review examines recent studies in poststroke fatigue that might contribute to understanding the pathophysiology of poststroke fatigue. Poststroke fatigue is a common problem in stroke survivors. Little is known about the pathophysiology of chronic fatigue in stroke survivors. It has long been thought of as a neuropsychiatric problem. However, there is gathering evidence to the contrary. In this study, we propose a new model of poststroke fatigue based on recent findings of corticomotor neurophysiological, behavioural, and perceptual deficits in the poststroke fatigue population.
Summary: The current evidence suggests that poststroke fatigue may not be a neuropsychiatric problem but a problem of the sensorimotor system. Future studies need to address the causal link between sensorimotor deficits and poststroke fatigue.
abstract_id: PUBMED:31273853
Visuomotor control of ankle joint using position vs. force. The ankle joint plays a critical role in daily activities involving interactions with the environment using force and position control. Neuromechanical dysfunctions (e.g., due to stroke or brain injury), therefore, have a major impact on individuals' quality of life. The effective design of neuro-rehabilitation protocols for robotic rehabilitation platforms relies on understanding the control characteristics of the ankle joint in interaction with the external environment using force and position, as findings in the upper limb may not be generalizable to the lower limb. This study aimed to characterize the skilled performance of the ankle joint in visuomotor position and force control. A two-degree-of-freedom (DOF) robotic footplate was used to measure individuals' force and position. Healthy individuals (n = 27) used ankle force or position for point-to-point and tracking control tasks in 1-DOF and 2-DOF virtual game environments. Subjects' performance was quantified as a function of accuracy and completion time. In contrast to comparable performance in 1-DOF control tasks, performance in 2-DOF tasks differed and showed characteristic patterns in the position and force conditions, with significantly better performance for position. Subjective questionnaires on the perceived difficulty matched the objective experimental results, suggesting that the poor performance in force control was not due to the experimental set-up or fatigue but can be attributed to the different levels of challenge needed in neural control. It is inferred that, in visuomotor coordination, the neuromuscular specialization of the ankle provides better control over position than force. These findings can inform the design of neuro-rehabilitation platforms and the selection of effective tasks and therapeutic protocols.
abstract_id: PUBMED:28616033
Exercise-induced muscle fatigue in the unaffected knee joint and its influence on postural control and lower limb kinematics in stroke patients. This study aimed to investigate the effects of exercise-induced muscle fatigue in the unaffected knee joint on postural control and kinematic changes in stroke patients. Forty participants (20 stroke patients, 20 age-matched healthy participants) were recruited. To induce fatigue, maximum voluntary isometric contractions were performed in the unaffected knee joint in a Leg Extension Rehab exercise machine using pneumatic resistance. We measured static and dynamic balance and lower-limb kinematics during gait. Changes in the postural control parameters (anteroposterior sway speed and total center of pressure distance) differed significantly between the stroke and control groups. In addition, changes in the gait kinematic parameters (knee and ankle angles at initial contact) differed significantly between the stroke (paretic and non-paretic) and control groups. Muscle fatigue in the unaffected knee and ankle impaired postural control and debilitated kinematic movement of the ipsilateral and contralateral lower limbs, and may place fatigued stroke patients at greater risk of falls.
Answer: Yes, post-stroke fatigue appears to be a problem of altered corticomotor control. Studies have shown that diminished motor cortical excitability is associated with high levels of post-stroke fatigue (PUBMED:25367024; PUBMED:25886778). Specifically, chronic stroke survivors with high fatigue exhibit high motor thresholds, and those who perceive high effort have low excitability of inputs that drive motor cortex output (PUBMED:25367024). Furthermore, it has been suggested that low motor cortex excitability in the lesioned hemisphere is a viable therapeutic target in post-stroke fatigue, as it impacts movement parameters such as movement times but not reaction times, indicating a problem with movement speed rather than movement preparation (PUBMED:25886778).
Additionally, a model of post-stroke fatigue based on sensorimotor deficits has been proposed, suggesting that post-stroke fatigue may not be a neuropsychiatric problem but rather a problem of the sensorimotor system (PUBMED:26397231). This is supported by findings that altered control of motor units alone can lead to a substantial reduction in muscle force generation in stroke survivors (PUBMED:28994667), and that attention control, which is closely linked to the motor system, is necessary for virtually all volitional motor acts (PUBMED:29284747).
Moreover, exercise-induced muscle fatigue in the unaffected knee joint can impair postural control and debilitate kinematic movement of both the ipsilateral and contralateral lower limbs in stroke patients, potentially increasing the risk of falls (PUBMED:28616033). This further underscores the relationship between motor control and post-stroke fatigue.
In summary, the evidence suggests that post-stroke fatigue is indeed a problem of altered corticomotor control, with implications for both the understanding of its pathophysiology and the development of targeted therapeutic interventions. |
Instruction: Multiphase multi-detector row computed tomography in the setting of chronic liver disease and orthotopic liver transplantation: can a series be eliminated in order to reduce radiation dose?
Abstracts:
abstract_id: PUBMED:23674013
Multiphase multi-detector row computed tomography in the setting of chronic liver disease and orthotopic liver transplantation: can a series be eliminated in order to reduce radiation dose? Objective: The objectives of this study were to explore utilization of multi-detector row computed tomography (MDCT) in screening for hepatocellular carcinoma (HCC) and to modify a liver CT protocol with a goal of dose reduction.
Methods: An electronic mail survey querying HCC surveillance practices was sent. One hundred forty consecutive patients referred for HCC indications underwent 4-phase MDCT of the liver. The unenhanced and delayed phases were evaluated by 3 readers for identification of HCC and reader confidence. The estimated effective dose (ED) was calculated.
Results: Computed tomography is primarily used to screen for HCC. The average estimated ED was 35.5 mSv. The unenhanced phase did not add to reader confidence; the delayed phase increased confidence in 47% of cases. Thirty-two percent of the screening population had a cumulative ED greater than 200 mSv.
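To make the dose implication concrete, a rough back-of-the-envelope sketch is given below; it assumes, purely for illustration, that the reported 35.5 mSv average for the 4-phase examination is split roughly evenly across phases, which the abstract does not state.

# Approximate per-phase dose if the reported 35.5 mSv 4-phase exam
# were split evenly across phases (an assumption for illustration only).
four_phase_ed = 35.5                  # mSv, average estimated ED from the study
per_phase_ed = four_phase_ed / 4
three_phase_ed = 3 * per_phase_ed     # protocol without the unenhanced phase

# Cumulative dose over repeated surveillance examinations
n_exams = 8
print(f"4-phase protocol: {n_exams * four_phase_ed:.1f} mSv over {n_exams} exams")
print(f"3-phase protocol: {n_exams * three_phase_ed:.1f} mSv over {n_exams} exams")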
Conclusions: Multi-detector row CT of the liver is used frequently in screening for HCC. Unenhanced phase imaging does not add to HCC detection and may be eliminated to reduce radiation dose.
abstract_id: PUBMED:19222090
Imaging in liver transplantation. The aim of this study was to illustrate the role of non-invasive imaging tools such as ultrasonography, multi-detector row computed tomography, and magnetic resonance imaging in the evaluation of pediatric and adult liver recipients and potential liver donors, and in the detection of potential complications arising from liver transplantation.
abstract_id: PUBMED:31847826
Diagnostic accuracy of multi-slice computed tomography in children with Abernethy malformation. Background: Abernethy malformation is a rare congenital abnormality. Imaging examination is an important method for its diagnosis. The purpose of this study was to describe the multi-slice computed tomography (MSCT) manifestations of Abernethy malformation and to assess the diagnostic accuracy of MSCT.
Methods: Fourteen children with Abernethy malformation were admitted to our center in China between July 2011 and January 2018. All 14 patients (eight males and six females) received MSCT and digital subtraction angiography (DSA), while eight patients also received ultrasound. The patients' ages ranged from 1 to 14 years (median age, 8 years). The clinical records of the patients were retrospectively reviewed. MSCT raw data were transferred to an Advantage Windows 4.2 or 4.6 workstation (General Electric Medical Systems, Waukesha, WI). We compared the findings of MSCT with DSA and surgical results in order to ascertain diagnostic accuracy.
Results: Three cases had type Ib Abernethy malformation and eleven cases had type II. Two cases of type II Abernethy malformation were misdiagnosed as type Ib using MSCT. Comparing the findings of MSCT with DSA and surgical results, the overall accuracy of MSCT was 85.7% (12/14): 100.0% (3/3) for type Ib and 81.8% (9/11) for type II. Clinical findings included congenital heart disease, pulmonary hypertension, diffuse pulmonary arteriovenous fistula, abnormal liver function, hepatic nodules, elevated blood ammonia, and hepatic encephalopathy. Eleven cases were treated after diagnosis. One patient with Abernethy malformation type Ib (1/3) underwent liver transplantation. Seven patients with Abernethy malformation type II (7/11) were treated by shunt occlusion, received laparoscopy, or were treated with open surgical ligation. Another three patients (3/11) with Abernethy malformation type II were treated by interventional portocaval shunt occlusion under DSA.
Conclusion: MSCT shows excellent diagnostic capability for type II Abernethy malformation and additionally demonstrates the location of the portocaval shunt. DSA can help when MSCT cannot reliably distinguish between Abernethy malformation types Ib and II.
abstract_id: PUBMED:15380846
Multidetector computed tomography angiography of the abdomen. Multidetector computed tomography (MDCT) angiography has provided excellent opportunities for the advancement of computed tomography (CT) technology and its clinical applications. It has a wide range of applications in the abdomen, including occlusive and aneurysmal vascular pathologies, and enables the radiologist to produce vascular maps that clearly show tumor invasion of the vasculature and the relationship of vessels to mass lesions. MDCT angiography (MDCTA) can be used in preoperative planning for hepatic resection and in preoperative evaluation and planning for liver transplantation. MDCTA can also provide extremely valuable information in the evaluation of ischemic bowel disease, active Crohn disease, and the extent and location of collateral vessels in cirrhosis.
abstract_id: PUBMED:28260429
Low Utility of Fluorodeoxyglucose-Positron Emission Tomography/Computed Tomography for Detecting Hepatocellular Carcinoma in Patients Before Liver Transplantation. Objectives: Our program routinely used fluorodeoxyglucose-positron emission tomography/computed tomography as part of the liver transplant evaluation of patients with hepatocellular carcinoma. The aim of this study was to evaluate the role of this imaging modality in the pretransplant work-up.
Materials And Methods: This was a retrospective chart review of our liver transplant database from January 2011 to December 2014 for all patients with hepatocellular carcinoma who underwent a liver transplant. Collected data included age, sex, cause of liver disease, imaging modality, fluorodeoxyglucose-positron emission tomography/computed tomography results, explant tissue analysis, type of transplant, and transplant outcome.
Results: During the study period, 275 liver transplants were performed. Fifty-three patients had hepatocellular carcinoma; 41 underwent fluorodeoxyglucose-positron emission tomography/computed tomography. Twenty-nine patients underwent living-donor liver transplant, and 12 patients underwent deceased-donor liver transplant. One of the 41 patients with negative FDG-imaging results had no evidence of hepatocellular carcinoma in the explant and was excluded from the study. The patients' average age was 58 years (range, 22-72 y), and 28 patients were men. The cause of liver disease was hepatitis C virus in 24 patients, cryptogenic cirrhosis in 12 patients, and hepatitis B virus in 5 patients. One patient had no hepatocellular carcinoma on explants and was excluded from the study. Twenty-five patients had hepatocellular carcinoma that met the Milan criteria, 7 were within the UCSF (University of California, San Francisco) criteria, and 8 exceeded the UCSF criteria. Of the 40 patients, 11 had positive fluorodeoxyglucose-positron emission tomography/computed tomography results (27.5%) with evidence of hepatocellular carcinoma in the explant; the remaining 29 patients (72.5%) had negative results. The fluorodeoxyglucose-positron emission tomography/computed tomography results were positive in 16% (4 of 21) of patients who met the Milan criteria, 28% (2 of 7) of patients who met the UCSF criteria and 62% (5 of 8) of patients who exceeded the UCSF criteria.
Conclusions: Fluorodeoxyglucose-positron emission tomography/computed tomography has a low degree of use in patients with hepatocellular carcinoma that falls within the Milan criteria and should not be routinely used as part of the liver transplant work-up.
abstract_id: PUBMED:3888769
In vivo hepatic volume determination using sonography and computed tomography. Validation and a comparison of the two techniques. Ultrasonography and computed tomography were used to determine hepatic volume in vivo. The data obtained were compared with the weight and volume of the same livers after surgical removal at the time of orthotopic hepatic transplantation. The relationship between hepatic weight and volume was found to be linear over a 16-fold range of weights and a 19-fold range of volumes. The sonographic results more closely paralleled the results obtained directly. These data demonstrate that both methods can determine, within acceptable limits, hepatic volume or weight. The sonographic technique, however, is more accurate than the computed tomography scan method as it allows the use of sagittal scanning of the liver, which is superior to the transverse scanning technique required by the computed tomography scanner. In addition, these results demonstrate that these methods might be applied in the future for better matching of donors and recipients of orthotopic liver transplants.
abstract_id: PUBMED:9126583
Liver volume in children measured by computed tomography. Liver volume was measured by computed tomography in 54 children and young adults with no history of liver disease. Their ages ranged from 10 days to 22 years. The volume was calculated as follows: (1) the edges of the liver were traced on each scan image and the area was calculated by computer; (2) the areas were summed and multiplied by the scan interval in centimeters. The mean liver volume (+/-SD) was 178.2 +/- 81.9 cm3 in infants (less than 12 months old) and 1114.3 +/- 192.9 cm3 in adolescents (more than 16 years old). The mean liver volume in relation to body weight (+/-SD) was 34.1 +/- 5.5 cm3/kg in infants and 20.2 +/- 3.1 cm3/kg in adolescents. In general, liver volume increases rapidly in infants, gradually in schoolchildren, and not at all in adolescents. Volumetry might be clinically useful for evaluating the liver function in children and determining the graft size in liver transplantation.
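The volumetric method described above amounts to summing the traced slice areas and multiplying by the scan interval; a minimal sketch with made-up slice areas and body weight (not values from the study) is shown below.

# Traced liver areas (cm^2) on consecutive CT slices -- illustrative values only
slice_areas_cm2 = [42.0, 55.3, 61.8, 64.2, 58.7, 47.1, 30.4]
scan_interval_cm = 1.0

# Liver volume = (sum of slice areas) x scan interval
liver_volume_cm3 = sum(slice_areas_cm2) * scan_interval_cm

# Volume relative to body weight, as the study reports it in cm^3/kg
body_weight_kg = 9.5  # hypothetical infant weight for illustration
print(f"Liver volume: {liver_volume_cm3:.1f} cm^3 "
      f"({liver_volume_cm3 / body_weight_kg:.1f} cm^3/kg)")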
abstract_id: PUBMED:31490421
Patterns of splenic arterial enhancement on computed tomography scan are related to portal venous hypertension. Objectives: We have previously shown that patterns of splenic arterial enhancement on computed tomography scan change following liver transplantation. We suggested that this is related to changes in portal venous pressure. The aim of this study was to see if similar patterns occur in patients with and without portal hypertension and in patients before and after placement of transjugular portosystemic shunts.
Methods: We evaluated contrast-enhanced computed tomography scans in patients being evaluated for liver disease and compared those from patients with and without portal hypertension. In addition, we evaluated patients who had computed tomography scans before and after transjugular portosystemic shunt placement. Splenic arterial enhancement was evaluated using Hounsfield units (pixel counts).
Results: Twenty-four patients with clinically significant portal hypertension were compared to 91 without. Mean splenic pixel count was significantly lower in patients with clinically significant portal hypertension (88.2 ± 17.7 vs. 115.2 ± 21.0; mean ± SD, P < 0.01). Computed tomography scans were available in 18 patients before and after transjugular portosystemic shunt placement. Pixel counts were significantly higher in the post-shunt scans (99.7 ± 20.9 vs. 88.9 ± 26.3; P < 0.05).
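As an illustration only (synthetic values generated from the reported means and standard deviations, not the study's data), group comparisons of splenic pixel counts of this kind could be run as an unpaired t-test for the two patient groups and a paired t-test for the pre- and post-shunt scans, assuming NumPy and SciPy are available.

import numpy as np
from scipy.stats import ttest_ind, ttest_rel

rng = np.random.default_rng(0)

# Synthetic mean splenic Hounsfield-unit (pixel) values per patient -- illustrative only
portal_htn = rng.normal(88.2, 17.7, 24)      # clinically significant portal hypertension
no_portal_htn = rng.normal(115.2, 21.0, 91)  # without portal hypertension

t, p = ttest_ind(portal_htn, no_portal_htn)
print(f"Unpaired t-test (with vs. without portal hypertension): p = {p:.4f}")

# Paired comparison of pre- vs. post-shunt scans in the same 18 patients (synthetic)
pre = rng.normal(88.9, 26.3, 18)
post = pre + rng.normal(10.8, 8.0, 18)  # assumed mean rise after shunt placement
t, p = ttest_rel(pre, post)
print(f"Paired t-test (pre vs. post shunt): p = {p:.4f}")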
Conclusion: This study supports the hypothesis that changes in portal venous pressure are related to changes in splenic arterial enhancement. We suggest that this reflects changes in the splenic micro-circulation. This mechanism may be part of the innate immune response and may also be important in the pathogenesis of hypersplenism.
abstract_id: PUBMED:2400875
Hepatic imaging with computed tomography of chronic tyrosinaemia type 1. Tyrosinaemia type 1 (fumaryl acetoacetase deficiency, hepato-renal tyrosinaemia) is a rare inborn error of metabolism which, in its chronic form, leads to cirrhosis in early childhood and subsequent development of hepatocellular carcinoma in a high proportion of cases. Imaging with computed tomography has an important role in assessing the progress of the liver disease and may be helpful in timing liver transplantation. The radiological features of seven cases are described and the implications discussed.
abstract_id: PUBMED:11981340
Detection of nodules in liver cirrhosis: spiral computed tomography or magnetic resonance imaging? A prospective study of 88 nodules in 34 patients. Detection and characterization of all focal lesions in the liver are critical for screening patients with chronic liver disease. The aim of this prospective study was to investigate the accuracy of magnetic resonance imaging (MRI) and spiral computed tomography for the diagnosis of hepatic nodules in cirrhotic patients when compared with pathological findings of the explanted liver. From February 1997 to July 1999, 34 cirrhotic patients waiting for orthotopic liver transplantation (OLT) (mean age, 53.5 +/- 9.3 years; 24 males) were included. All patients had MRI and spiral computed tomography examinations, and findings were matched with the histological findings. Data analyses were made using the McNemar chi-square test. Mean time between radiological examination (MRI or spiral computed tomography) and OLT was 43.8 +/- 39 days. A total of 88 nodules were found in the 34 patients: 54 hepatocellular carcinoma (HCC) (mean size, 18 +/- 10 mm) in 21 patients, 22 dysplastic nodules (mean size, 10.7 +/- 4.3 mm) in 11 patients, and 12 macroregenerative nodules in 13 patients. Lesion-by-lesion analyses showed that sensitivity of MRI and spiral computed tomography for nodule, HCC or dysplastic nodule diagnosis was 44.3 and 31.8% (P = 0.02), 61.1 and 51.9% (P = 0.2), and 27.3 and 0% (P = 0.04), respectively. Patient-by-patient analyses showed no statistical difference between spiral computed tomography and MRI for nodule diagnosis. In conclusion, in patients with liver cirrhosis, MRI is more accurate than spiral computed tomography for the detection of liver nodules and dysplastic nodules. However, tumour size is always a restricting factor for these two techniques, which are unable to detect small HCC in more than 60% of cases.
Answer: The study identified in abstract PUBMED:23674013 explored the utilization of multi-detector row computed tomography (MDCT) in screening for hepatocellular carcinoma (HCC) and aimed to modify a liver CT protocol to reduce radiation dose. The study found that the unenhanced phase of the four-phase MDCT did not add to reader confidence in identifying HCC, while the delayed phase increased confidence in 47% of cases. Given that the unenhanced phase did not contribute to the detection of HCC, the study concluded that it may be eliminated to reduce radiation dose without compromising the effectiveness of the screening. This suggests that in the setting of chronic liver disease and orthotopic liver transplantation, it is possible to eliminate a series (specifically, the unenhanced phase) from the multiphase MDCT protocol to reduce the radiation dose patients are exposed to during the screening for HCC. |
Instruction: Supraclavicular lymph node metastases (SLM) from breast cancer as only site of distant disease: has radiotherapy any role?
Abstracts:
abstract_id: PUBMED:9216706
Supraclavicular lymph node metastases (SLM) from breast cancer as only site of distant disease: has radiotherapy any role? Background: Supraclavicular lymph node metastases (SLM) as the only site of metastatic disease from breast cancer is a rare event with a poor prognosis. In order to evaluate the role of radiotherapy (RT) with a "radical dose" to the supraclavicular fossa, we carried out a non-randomized clinical trial comparing systemic therapy alone to integrated and aggressive treatment (systemic therapy plus radiotherapy). The primary end-point was time to progression (TTP). The secondary end-point was overall survival (OS).
Methods: From 1/1/1989 to 31/12/1994, 37 patients (with or without locoregional disease) were enrolled into the two arms of the study, but were allowed, with their consent, to switch from the arm to which they had originally been allotted. Arm A, 18 patients, 15 evaluable: chemotherapy +/- hormonotherapy for 6 courses; after the second course, if local disease progression was present, the patients were submitted to RT and removed from the study (3 patients). Arm B, 19 patients, all evaluable: chemotherapy +/- hormonotherapy for 3 courses followed by RT at a "radical" dose. Results were analyzed on 30/11/1995 and no interim analysis was performed. The potential median follow-up for all patients was 56.5 months (range 11-83 months): 61 months for Arm A (range: 12-82) and 53 months for Arm B (range: 11-83). The two groups were homogeneous and balanced, without statistical differences.
Results: Median TTP was 12.5 months in Arm A and 19.5 months in Arm B (p = 0.064). Median overall survival (OS) was 27.5 months in Arm A and 48 months in Arm B. T-status at the time of diagnosis was found to be an independent prognostic factor for TTP (p = 0.0029). The disease-free interval from diagnosis to recurrence was found to be a significant prognostic factor for OS (p = 0.009).
Conclusion: The results in Arm B demonstrated the possibility of long-term control in this subset of patients. We therefore suggest starting a wider multicenter study in order to define the biological significance of SLM, its importance in staging breast cancer, and the optimum treatment.
abstract_id: PUBMED:28535656
The Role of Supraclavicular Lymph Node Dissection in Breast Cancer Patients with Synchronous Ipsilateral Supraclavicular Lymph Node Metastasis Objective: In this study, we evaluated the effect of supraclavicular lymph node dissection in breast cancer patients who presented with ipsilateral supraclavicular lymph node metastasis (ISLM) without distant metastasis. Methods: A total of 90 patients with synchronous ISLM without distant metastasis between 2000 and 2009 were retrospectively analyzed. Patients were divided into two groups, a supraclavicular lymph node dissection group (34 patients) and a non-dissection group (56 patients), according to whether they underwent supraclavicular lymph node dissection. The Kaplan-Meier method was applied to analyze locoregional relapse-free survival (LRFS) and overall survival (OS). Results: Median follow-up was 85 months (range, 6 to 11 months). Locoregional relapse occurred in 32 cases and distant metastasis in 47 cases; 25 patients had both locoregional relapse and distant metastasis. Of the 32 patients with locoregional relapse, 11 were in the lymph node dissection group and 21 in the control group. Of the 47 patients with distant metastases, 17 had been treated with lymph node dissection and 30 were in the control group. Thirty-two patients in the whole cohort died, 16 of whom had undergone lymph node dissection and 16 of whom had not. There was no significant difference in the 5-year LRFS and 5-year OS rates (P=0.359, P=0.246). For ER-negative patients, the 5-year locoregional relapse-free survival rates were 63.7% and 43.3% in the supraclavicular lymph node dissection group and the control group, respectively, and the 5-year overall survival rates were 52.1% and 52.3%, respectively, with no statistically significant differences (P=0.118, P=0.951). For PR-negative patients, the 5-year locoregional relapse-free rates were 59.8% and 46.2%, respectively, and the 5-year overall survival rates were 50.6% and 43.2%, respectively, with no significant difference between the two groups (P=0.317, P=0.973). The 5-year recurrence-free survival rates of human epidermal growth factor receptor 2 (HER2)-positive patients were 61.2% and 48.0% (P=0.634), respectively, and the 5-year overall survival rates were 37.2% and 65.4% (P=0.032). Forty-seven patients developed distant metastases, and the 5-year metastasis-free survival rates were 37.3% and 38.5% in the supraclavicular lymph node dissection group and the control group, respectively. Conclusion: Supraclavicular lymph node dissection may be an effective approach to improve locoregional control for patients with ISLM, especially for ER-negative and PR-negative subtypes, but it might have adverse effects for patients with negative HER2 status.
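The dissection and non-dissection groups in the abstract above were compared with the Kaplan-Meier method. As an illustration of how such survival estimates are obtained from follow-up data, the sketch below implements the product-limit (Kaplan-Meier) estimator in plain NumPy; the follow-up times, event flags, and function name are illustrative assumptions and are not taken from the study.

```python
# Minimal sketch (not from the cited studies): a product-limit (Kaplan-Meier)
# estimator of the kind used in these abstracts to estimate LRFS and OS.
# The follow-up times and event flags below are illustrative placeholders.
import numpy as np

def kaplan_meier(time, event):
    """Return the distinct event times and the estimated survival after each."""
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=bool)
    order = np.argsort(time)
    time, event = time[order], event[order]

    distinct_event_times = np.unique(time[event])
    survival = []
    s = 1.0
    for t in distinct_event_times:
        at_risk = np.sum(time >= t)           # patients still under observation at t
        deaths = np.sum((time == t) & event)  # events occurring exactly at t
        s *= 1.0 - deaths / at_risk           # product-limit update
        survival.append(s)
    return distinct_event_times, np.array(survival)

# Illustrative data: months of follow-up and whether the event (e.g. relapse) occurred.
months = [6, 12, 20, 24, 36, 48, 60, 60, 72, 85]
relapse = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]
times, surv = kaplan_meier(months, relapse)
for t, s in zip(times, surv):
    print(f"S({t:.0f} months) = {s:.2f}")
```

The same estimator, applied separately to each treatment group, yields the group-wise survival curves from which the reported 5-year rates are read.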
abstract_id: PUBMED:35645120
The role of surgery on locoregional treatment of patients with breast cancer newly diagnosed with ipsilateral supraclavicular lymph node metastasis. Background: Radiotherapy is a practical locoregional treatment approach for women with breast cancer who show ipsilateral supraclavicular lymph node metastasis (ISLNM) on diagnosis. However, there is controversy around the role of supraclavicular lymph node dissection. Therefore, we aimed to study the significance of supraclavicular surgery based on radiotherapy.
Patients And Methods: We retrospectively reviewed the data of 142 patients with breast cancer who presented with isolated ISLNM and received radiotherapy between the years 2000 and 2016. We also defined the effect of surgery on locoregional treatment of these patients by analyzing the prognostic factors for recurrence-free survival (RFS), distant metastasis-free survival (DMFS), and overall survival (OS).
Results: Of the 142 patients who received radiotherapy, 104 also underwent supraclavicular lymph node dissection. In the study group, progesterone receptor (PR) status (P = 0.044) and the number of axillary lymph nodes (ALNs) involved (P = 0.002) were significant independent predictors of RFS. Tumor size (P = 0.007), PR (P < 0.001), and the number of ALNs (P < 0.001) were significant independent predictors of DMFS. PR was also an independent prognostic factor for OS (P = 0.033), whereas supraclavicular surgery was not an independent prognostic factor for RFS, DMFS, or OS. In the subgroup of 92 patients with negative estrogen receptors (ERs), supraclavicular surgery was significantly associated with RFS (P = 0.023); no significant differences in DMFS and OS were found between patients who received supraclavicular surgery and those who did not.
Conclusion: Radiotherapy may be the primary locoregional treatment approach for patients with breast cancer who present with newly diagnosed ISLNM. Additionally, supraclavicular surgery may be more appropriate for patients with negative ER who received radiotherapy.
abstract_id: PUBMED:32420257
Supraclavicular lymph node dissection with radiotherapy versus radiotherapy alone for operable breast cancer with synchronous ipsilateral supraclavicular lymph node metastases: a real-world cohort study. Background: The role of supraclavicular lymph node dissection (SCLD) in the treatment of breast cancer with ipsilateral supraclavicular lymph node metastasis (ISLM) remains controversial. We evaluated the role of SCLD in the treatment of breast cancer with ISLM and identified patients who may benefit from SCLD.
Methods: Data on patients presenting with breast cancer to the Breast Disease Center, Southwest Hospital, The Army Medical University between January 2004 and December 2017 were retrospectively screened. The median duration of follow-up was 36 months (range, 2-175 months). A total of 305 patients newly diagnosed with ISLM were eligible for the analysis.
Results: Overall, 9,236 women presented with breast cancer during the study period. Among the patients included, 146 and 159 received SCLD with radiotherapy (RT) and RT alone, respectively. Synchronous ISLM without distant metastases were present in 3.6% cases. The 3- and 5-year overall survival (OS) and disease-free survival (DFS) rates were 79.5% and 73.9%, respectively, and 67.5% and 54.8%, respectively. However, SCLD with RT was not associated with superior survival on both univariate and multivariate analyses. On stratified analyses, patients with non-luminal A tumors with 4-9 positive axillary lymph nodes who underwent SCLD with RT had both superior OS (HR =5.296; 95% CI: 1.857-15.107; P=0.001) and DFS (HR =5.331; 95% CI: 2.348-12.108; P<0.001) compared with those who received RT alone.
Conclusions: SCLD may not be beneficial in improving survival for unselected breast cancer patients with ISLNM. There is less of a tendency to perform SCLD in the luminal A group.
abstract_id: PUBMED:22591766
Supraclavicular metastases from distant primaries: what is the role of the head and neck surgeon? Suspicious malignant supraclavicular lymphadenopathy provides a challenge for diagnosis and treatment. The wide variety of primary tumours that metastasise to this region should alert the clinician to look beyond the head and neck, particularly if it is the only site in the neck with suspected disease. As metastatic spread to these nodes from primaries not in the head and neck often indicates widespread disease, neck dissection is controversial. In this article we review the lymphatic anatomy and discuss the investigation of supraclavicular lymphadenopathy. We discuss the evidence for the management of the neck in patients with subclavicular primary cancers (excluding lymphoma and melanoma) and the role of neck dissection.
abstract_id: PUBMED:24627640
The value of radiotherapy in breast cancer patients with isolated ipsilateral supraclavicular lymph node metastasis without distant metastases at diagnosis: a retrospective analysis of Chinese patients. Background: The purpose of this study was to investigate the prognosis of ipsilateral supraclavicular lymph node metastasis (ISLM) without evidence of distant metastases at diagnosis in Chinese women with breast cancer and to elucidate the clinical value of adjuvant radiotherapy.
Methods: We performed a retrospective analysis of clinical data for 39 patients with ISLM from breast cancer without distant metastasis at diagnosis. Combined modality therapy, consisting of neoadjuvant chemotherapy, surgery, and adjuvant chemotherapy with or without adjuvant radiotherapy, was offered to the patients.
Results: The patients in this study accounted for 1% of all breast cancer patients treated during the same time period. The median follow-up was 35 months. The 5-year locoregional recurrence-free survival, distant metastasis-free survival, disease-free survival (DFS), and overall survival (OS) were 57.3%, 42.3%, 34.4%, and 46.2%, respectively. Twenty-three patients received postoperative adjuvant radiotherapy. However, there was no significant difference in the 3- and 5-year locoregional recurrence-free survival (P=0.693), ISLM-free recurrence (P=0.964), distant metastasis-free survival (P=0.964), DFS (P=0.234), and OS (P=0.329) rates between the groups of patients who received or did not receive adjuvant radiotherapy (P=0.840). No significant difference in the 3-year locoregional control rate (P=0.900) was found between patients who were treated with adjuvant radiotherapy at ≤50 Gy and >50 Gy. Univariate analysis showed that clinical tumor size stage and age were prognostic factors that impacted DFS and OS.
Conclusion: Combined modality treatment may achieve satisfactory efficacy in Chinese women with ISLM from breast cancer without distant metastasis at the time of diagnosis, suggesting that ISLM might be considered a curable locoregional disease. Adjuvant radiotherapy did not, however, improve the results of these patients.
abstract_id: PUBMED:36990395
Comparison of supraclavicular surgery plus radiotherapy versus radiotherapy alone in breast cancer patients with synchronous ipsilateral supraclavicular lymph node metastasis: A multicenter retrospective study. Purpose: To evaluate and compare the outcomes of supraclavicular lymph node dissection plus radiotherapy (RT) and RT alone for patients with synchronous ipsilateral supraclavicular lymph node metastasis.
Methods: In all, 293 patients with synchronous ipsilateral supraclavicular lymph node metastasis across three centers were included. Of these, 85 (29.0%) received supraclavicular lymph node dissection plus RT (Surgery + RT) and 208 (71.0%) received RT alone. All patients received preoperative systemic therapy followed by mastectomy or lumpectomy and axillary dissection. Supraclavicular recurrence-free survival (SCRFS), locoregional recurrence-free survival (LRRFS), distant metastasis-free survival (DMFS), disease-free survival (DFS), and overall survival (OS) were evaluated by using the Kaplan-Meier method and multivariate Cox models. Multiple imputation was used for missing data.
Results: The median follow-up durations of the RT and Surgery + RT groups were 53.7 and 63.5 months, respectively. For the RT and Surgery + RT groups, the 5-year SCRFS rates were 91.7% vs. 85.5% (P = 0.522), LRRFS rates were 79.1% vs. 73.1% (P = 0.412), DMFS rates were 60.4% vs. 58.8% (P = 0.708), DFS rates were 57.6% vs. 49.7% (P = 0.291), and OS rates were 71.9% vs. 62.2% (P = 0.272), respectively. There was no significant effect on any outcome when comparing Surgery + RT versus RT alone in the multivariate analysis. Based on four risk factors for DFS, patients were classified into three risk groups: the intermediate- and high-risk groups had significantly lower survival outcomes than the low-risk group. Surgery + RT did not improve outcomes of any risk group compared with RT alone.
Conclusions: Patients with synchronous ipsilateral supraclavicular lymph node metastasis may not benefit from supraclavicular lymph node dissection. Distant metastasis remained the major failure pattern, especially for intermediate- and high-risk groups.
abstract_id: PUBMED:37916188
Role of aggressive locoregional surgery in treatment strategies for ipsilateral supraclavicular lymph node metastasis of breast cancer: a real-world cohort study. Background: Breast cancer patients with synchronous ipsilateral supraclavicular lymph node metastases (ISLNM) have unfavorable prognoses. The role of supraclavicular lymph node dissection (SLND) as a surgical intervention in the treatment of this condition remains controversial. In this study, we aimed to evaluate the prognostic factors associated with breast cancer with ISLNM and to assess the potential impact of aggressive locoregional surgical management on patient outcomes. Methods: We conducted a retrospective analysis of 250 breast cancer patients with ISLNM who were treated with curative intent at our institution between 2000 and 2020. The cohort was stratified into groups based on the extent of axillary surgery. The first group, comprising 185 patients, underwent level I/II axillary dissection. The second group, consisting of 65 patients, underwent aggressive locoregional surgery, including levels I/II/III (infraclavicular) dissection in 37 patients and levels I/II/III + SLND in 28 patients. Our study evaluated overall survival (OS) and disease-free survival (DFS) as primary endpoints, and locoregional recurrence-free survival (LRRFS) and distant metastasis-free survival (DMFS) as secondary endpoints. Results: The median follow-up time among all patients was 5.92 years (1.05-15.36 years). The 5-year OS rate was 71.89%, while the DFS rate, LRRFS rate, and DMFS rates were 59.25%, 66.38%, and 64.98%, respectively. A significant difference in OS, DFS, LRRFS, and DMFS was observed between the second group and the first group (p < 0.01). No beneficial impact on recurrence, metastasis, or survival outcomes was observed in the levels I/II/III + SLND group compared to the levels I/II/III dissection group. Multivariate logistic regression analysis revealed that levels I/II/III ± SLND surgery and T stage were associated with OS (p = 0.006 and p = 0.026), while levels I/II/III ± SLND surgery, ER+/HER2-, and histologic grade were associated with DFS (p = 0.032, p = 0.001, p = 0.032). Conclusion: Breast cancer with ISLNM may be considered a locoregional disease, requiring a combination of systemic and local therapies. Aggressive locoregional surgery has been shown to positively impact recurrence, metastasis, and survival outcomes. This approach may provide improved management of the ISLNM for breast cancer patients.
abstract_id: PUBMED:30275235
Profile and Outcome of Supraclavicular Metastases in Patients with Metastatic Breast Cancer: Discordance of Receptor Status Between Primary and Metastatic Site. Background: Breast cancer is a heterogenous and complex disease. A rare site of metastatic breast cancer disease is the neck. Data about supraclavicular metastases in patients with metastatic breast cancer are still lacking. Hence, our study aimed to analyze histological subtypes of supraclavicular metastases compared to the primary site.
Materials And Methods: This was a retrospective hospital-based cohort study of patients with breast cancer who developed supraclavicular metastases. Diagnosis of supraclavicular metastases was confirmed by biopsy or diagnostic lymph node extirpation. Histological subtypes were analyzed and Kaplan-Meier estimates were calculated for overall survival.
Results: A total of 20 patients were included in the analysis. The majority of the patients (12/20) had hormone receptor (HR)-positive/human epidermal growth factor receptor 2 (HER2)-negative supraclavicular metastases, disease in 3/20 patients was HR-positive/HER2-positive, HR-negative/HER2-positive in 1/20 patients and basal-like in 4/20 patients. Total discordance rates for estrogen receptor, progesterone receptor and HER2 between primary and metastatic tumors were 20.0%, 36.8% and 29.4%, respectively. The 5-year overall survival was 80%, whereas the 5-year survival after the onset of neck metastasis was 45%.
Conclusion: As a rare site of metastatic breast cancer, supraclavicular metastases are associated with a worse median overall survival from their onset. The high rate of discordance of histological subtype stresses the necessity for biopsies in patients with supraclavicular metastasis.
abstract_id: PUBMED:33224826
Comparison between surgery plus radiotherapy and radiotherapy alone in treating breast cancer patients with ipsilateral supraclavicular lymph node metastasis. Background: Ipsilateral supraclavicular lymph node metastasis (ISLM) in breast cancer patients has long been a difficult problem in breast surgery. It is generally believed that radiotherapy can benefit patient survival, but whether local surgical resection is also needed is controversial. This study aimed to compare the efficacy of supraclavicular lymph node (SLN) dissection combined with radiotherapy with that of radiotherapy alone in the treatment of breast cancer with ISLM.
Methods: A retrospective analysis was performed using 122 cases of breast cancer with ISLM but without distant metastasis. Among them, 14 cases were eliminated due to insufficient data. The 108 remaining cases were divided into 2 groups based on different treatment proposals for metastatic SLNs. The groups were dissection plus radiotherapy (surgery group), and simple radiotherapy (radiotherapy group).
Results: For the 108 patients, the overall 5-year disease-free survival (DFS) and overall survival (OS) rates were 30.6% and 67.8%, respectively. In the surgery group, distant metastases occurred in 41 patients, and the 5-year DFS was 34.3%; in the radiotherapy group, 18 patients had distant metastases, and the 5-year DFS was 26.1%; the difference was not statistically significant (P>0.05). In the surgery group, 11 patients died, and the 5-year OS rate was 67.9%; in the radiotherapy group, 6 patients died, and the 5-year OS rate was 67.5%; the difference was not statistically significant (P>0.05).
Conclusions: The dissection of SLN combined with radiotherapy and radiotherapy alone had similar effects on the survival rates in breast cancer patients with ISLM. The local control in the surgery group was better than that in the radiotherapy group. The status of estrogen receptors (ER) and the number of axillary lymph node metastases were independent influencing factors of DFS. The ER status is an independent factor affecting the OS rate of patients.
Answer: Supraclavicular lymph node metastases (SLM) from breast cancer as the only site of distant disease is a clinical scenario that has been explored in various studies to determine the role of radiotherapy. The evidence suggests that radiotherapy does play a role in the management of SLM from breast cancer.
A non-randomized clinical trial compared systemic therapy alone to integrated treatment with systemic therapy plus radiotherapy, with the primary endpoint being time to progression (TTP) and the secondary endpoint being overall survival (OS). The results indicated that the median TTP was longer in the group that received radiotherapy (19.5 months) than in the group that received systemic therapy alone (12.5 months), although the difference did not reach statistical significance (p = 0.064). The median OS was also considerably longer in the radiotherapy group (48 months) than in the systemic-therapy-only group (27.5 months), suggesting a potential benefit of radiotherapy in this setting (PUBMED:9216706).
Another study evaluated the effect of supraclavicular lymph node dissection in breast cancer patients with ipsilateral supraclavicular lymph node metastasis (ISLM) without distant metastasis. The study found no significant difference in 5-year locoregional relapse-free survival (LRFS) and overall survival (OS) between patients who underwent dissection and those who did not. However, the study suggested that supraclavicular lymph node dissection might be an effective approach to improve locoregional control for patients with ER-negative and PR-negative subtypes, but it might have adverse effects for patients with negative HER2 status (PUBMED:28535656).
A retrospective study of 142 patients with breast cancer presenting with isolated ISLNM who received radiotherapy found that supraclavicular surgery was not an independent prognostic factor for RFS, DMFS, and OS. However, the study suggested that supraclavicular surgery might be more appropriate for patients with negative ER who received radiotherapy (PUBMED:35645120).
In summary, radiotherapy appears to have a role in the management of SLM from breast cancer, potentially contributing to longer overall survival. The integration of radiotherapy with systemic therapy may offer a benefit in terms of disease control, particularly in certain subgroups of patients. |
Instruction: The mean lung dose (MLD) : predictive criterion for lung damage?
Abstracts:
abstract_id: PUBMED:25865281
The mean lung dose (MLD) : predictive criterion for lung damage? Aim: The purpose of this work was to prove the validity of the mean lung dose (MLD), widely used in clinical practice to estimate the lung toxicity of a treatment plan, by reevaluating experimental data from mini pigs.
Materials And Methods: A total of 43 mini pigs were irradiated in one of four dose groups (25, 29, 33, and 37 Gy). Two regimens were applied: homogeneous irradiation of the right lung or partial irradiation of both lungs, including parts with lower dose, but with similar mean lung doses. The animals were treated with five fractions with a linear accelerator applying a CT-based treatment plan. The clinical lung reaction (breathing frequency) and morphological changes in CT scans were examined frequently during the 48 weeks after irradiation.
Results: A clear dose-effect relationship was found for both regimens of the trial. However, a straightforward relationship between the MLD and the relative number of responders with respect to different grades of increased breathing frequency for both regimens was not found. A morphologically based parameter NTCPlung was found to be more suitable for this purpose. The dependence of this parameter on the MLD is markedly different for the two regimens.
Conclusion: In clinical practice, the MLD can be used to predict lung toxicity of a treatment plan, except for dose values that could lead to severe side effects. In the latter mentioned case, limitations to the predictive value of the MLD are possible. Such severe developments of a radiation-induced pneumopathy are better predicted by the NTCPlung formalism. The predictive advantage of this parameter compared to the MLD seems to remain in the evaluation and comparison of widely differing dose distributions, like in the investigated trial.
abstract_id: PUBMED:29113398
Combing NLR, V20 and mean lung dose to predict radiation induced lung injury in patients with lung cancer treated with intensity modulated radiation therapy and chemotherapy. The purpose was to evaluate the predictive value of the baseline neutrophil-to-lymphocyte ratio (NLR) for the incidence of grade 3 or higher radiation-induced lung injury (RILI) in lung cancer patients. A retrospective analysis of 166 lung cancer patients was performed. All of the enrolled patients received chemoradiotherapy at our hospital between April 2014 and May 2016. The Cox proportional hazards model was used to identify potential risk factors for RILI. In this cohort, the incidence of grade 3 or higher RILI was 23.8%. Univariate analysis showed that radiation dose, the lung volume receiving at least 20 Gy (V20), mean lung dose and NLR were significantly associated with the incidence of grade 3 or higher RILI (P = 0.012, 0.008, 0.012, and 0.039, respectively). Multivariate analysis revealed that total dose ≥ 60 Gy, V20 ≥ 20%, mean lung dose ≥ 12 Gy, and NLR ≥ 2.2 remained independent predictive factors for RILI (P = 0.010, 0.043, 0.028, and 0.015, respectively). A predictive model of RILI based on the identified risk factors was established using receiver operating characteristic curves. The results demonstrated that the combined analysis of V20, mean lung dose and NLR was superior to any of the variables alone. Additionally, we found that constraining V20 and mean lung dose was meaningful for patients with a higher baseline NLR: if V20 and mean lung dose were below the threshold values, the incidence of grade 3 or higher RILI in high-NLR patients decreased from 63.3% to 8.7%. Our study showed that radiation dose, V20, mean lung dose and NLR were independent predictors of RILI. Combined analysis of V20, mean lung dose and NLR may provide a more accurate model for RILI prediction.
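The abstract above combines V20, mean lung dose and NLR into a single ROC-based predictor of RILI. The sketch below shows, under stated assumptions, how such a combined model can be built and compared with each variable alone; the synthetic data, coefficients and variable names are invented for illustration and are not the authors' data or code.

```python
# Minimal sketch (illustrative only): combining V20, MLD and NLR into one
# predictor of grade >=3 RILI and comparing it with each variable alone via
# ROC AUC. All values below are synthetic assumptions, not study data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 166                            # cohort size quoted in the abstract
v20 = rng.normal(22, 6, n)         # % of lung volume receiving >= 20 Gy (assumed)
mld = rng.normal(13, 3, n)         # mean lung dose in Gy (assumed)
nlr = rng.normal(2.4, 0.8, n)      # baseline neutrophil-to-lymphocyte ratio (assumed)

# Synthetic outcome: risk increases with all three factors (purely illustrative).
logit = -9 + 0.12 * v20 + 0.25 * mld + 0.6 * nlr
rili = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([v20, mld, nlr])
combined = LogisticRegression(max_iter=1000).fit(X, rili).predict_proba(X)[:, 1]

print("AUC, V20 alone:     ", round(roc_auc_score(rili, v20), 3))
print("AUC, MLD alone:     ", round(roc_auc_score(rili, mld), 3))
print("AUC, NLR alone:     ", round(roc_auc_score(rili, nlr), 3))
print("AUC, combined model:", round(roc_auc_score(rili, combined), 3))
```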
abstract_id: PUBMED:18439765
Irradiation of varying volumes of rat lung to same mean lung dose: a little to a lot or a lot to a little? Purpose: To investigate whether irradiating small lung volumes with a large dose or irradiating large lung volumes with a small dose, given the same mean lung dose (MLD), has a different effect on pulmonary function in laboratory animals.
Methods And Materials: WAG/Rij/MCW male rats were exposed to single fractions of 300 kVp X-rays. Four treatments, in decreasing order of irradiated lung volume, were administered: (1) whole lung irradiation, (2) right lung irradiation, (3) left lung irradiation, and (4) irradiation of a small lung volume with four narrow beams. The irradiation times were chosen to accumulate the same MLD of 10, 12.5, or 15 Gy with each irradiated lung volume. The development of radiation-induced lung injury for ≤20 weeks was evaluated as increased breathing frequency, mortality, and histopathologic changes in the irradiated and control rats.
Results: A significant elevation of respiratory rate, which correlated with the lung volume exposed to single small doses (≥5 Gy), but not with the MLD, was observed. The survival of the rats in the whole-lung-irradiated group was MLD dependent, with all events occurring between 4.5 and 9 weeks after irradiation. No mortality was observed in the partial-volume irradiated rats.
Conclusions: The lung volume irradiated to small doses might be the dominant factor influencing the loss of pulmonary function in the rat model of radiation-induced lung injury. Caution should be used when new radiotherapy techniques that result in irradiation of large volumes of normal tissue are used for the treatment of lung cancer and other tumors in the thorax.
abstract_id: PUBMED:14597351
Incorporating an improved dose-calculation algorithm in conformal radiotherapy of lung cancer: re-evaluation of dose in normal lung tissue. Background And Purpose: The low density of lung tissue causes a reduced attenuation of photons and an increased range of secondary electrons, which is inaccurately predicted by the algorithms incorporated in some commonly available treatment planning systems (TPSs). This study evaluates the differences in dose in normal lung tissue computed using a simple and a more correct algorithm. We also studied the consequences of these differences on the dose-effect relations for radiation-induced lung injury.
Materials And Methods: The treatment plans of 68 lung cancer patients initially produced in a TPS using a calculation model that incorporates the equivalent-path length (EPL) inhomogeneity-correction algorithm, were recalculated in a TPS with the convolution-superposition (CS) algorithm. The higher accuracy of the CS algorithm is well-established. Dose distributions in lung were compared using isodoses, dose-volume histograms (DVHs), the mean lung dose (MLD) and the percentage of lung receiving >20 Gy (V20). Published dose-effect relations for local perfusion changes and radiation pneumonitis were re-evaluated.
Results: Evaluation of isodoses showed a consistent overestimation of the dose at the lung/tumor boundary by the EPL algorithm of about 10%. This overprediction of dose was also reflected in a consistent shift of the EPL DVHs for the lungs towards higher doses. The MLD, as determined by the EPL and CS algorithm, differed on average by 17 ± 4.5% (±1 SD). For V20, the average difference was 12 ± 5.7% (±1 SD). For both parameters, a strong correlation was found between the EPL and CS algorithms, yielding a straightforward conversion procedure. Re-evaluation of the dose-effect relations showed that lung complications occur at a 12-14% lower dose. The values of the TD50 parameter for local perfusion reduction and radiation pneumonitis changed from 60.5 and 34.1 Gy to 51.1 and 29.2 Gy, respectively.
Conclusions: A simple tissue inhomogeneity-correction algorithm like the EPL overestimates the dose to normal lung tissue. Dosimetric parameters for lung injury (e.g. MLD, V20) computed using both algorithms are strongly correlated making an easy conversion feasible. Dose-effect relations should be refitted when more accurate dose data is available.
abstract_id: PUBMED:3842277
Dose rate dependence of response of mouse lung to irradiation. The dose-rate dependence of lung damage in mice has been studied using LD50/50-180 as an index of the incidence of radiation pneumonitis. Mean lethal doses for 60Co gamma radiation to the thorax delivered at 100, 25 and 6 cGy/min were 1403, 1923 and 2488 cGy respectively. There were statistically significant differences between values obtained at 6 and 25 cGy/min and between those obtained at 25 and 100 cGy/min. An isoeffect plot of this data on a log-log graph shows the sparing effect of dose rate reduction to be greater for the lung than for more rapidly responding systems (colony forming units of small intestine and Chinese hamster cells in culture).
abstract_id: PUBMED:38189733
Application of dose-gradient function in reducing radiation induced lung injury in breast cancer radiotherapy. Objective: Try to create a dose gradient function (DGF) and test its effectiveness in reducing radiation induced lung injury in breast cancer radiotherapy.
Materials And Methods: Radiotherapy plans of 30 patients after breast-conserving surgery were included in the study. The dose gradient function was defined as DGF = VDVp3; the area under the DGF curve of each plan was then calculated in a rectangular coordinate system, the minimum area was used as the trigger factor, and the other plans were triggered to optimize for area reduction. The dosimetric parameters of the target area and organs at risk in the 30 cases before and after re-optimization were compared.
Results: While ensuring that the target dose met the clinical requirements, the trigger factor obtained based on the DGF further reduced the V5, V10, V20, V30 and mean lung dose (MLD) of the ipsilateral lung in breast cancer radiotherapy (P < 0.01). The cardiac D2cc and mean heart dose (MHD) were also reduced (P < 0.01), as were the NTCPs of the ipsilateral lung and the heart (P < 0.01).
Conclusion: The trigger factor obtained based on DGF is efficient in reducing radiation induced lung injury in breast cancer radiotherapy.
abstract_id: PUBMED:6535853
Dose-independence of early, cyclophosphamide-induced lung damage in mice. Administration of cyclophosphamide to mice resulted in lung damage which was assessed by measuring breathing rate and lung compliance. Above a threshold dose (approximately 50-100 mg/kg b.w.) the time of onset and severity of early, acute pulmonary dysfunction was dose-independent in the range 100-400 mg/kg. At doses up to 200 mg/kg lung damage was reversible but above this second threshold level, pulmonary fibrosis invariably developed. The severity of late damage (approximately day 40) was dose-related. These findings may help in the interpretation of clinical lung damage associated with cyclophosphamide therapy.
abstract_id: PUBMED:26454068
Association between absolute volumes of lung spared from low-dose irradiation and radiation-induced lung injury after intensity-modulated radiotherapy in lung cancer: a retrospective analysis. The aim of this study was to investigate the association between absolute volumes of lung spared from low-dose irradiation and radiation-induced lung injury (RILI) after intensity-modulated radiotherapy (IMRT) for lung cancer. The normal lung relative volumes receiving greater than 5, 10, 20 and 30 Gy (V5-30), mean lung dose (MLD), and absolute volumes spared from greater than 5, 10, 20 and 30 Gy (AVS5-30) for the bilateral and ipsilateral lungs of 83 patients were recorded. Any association of clinical factors and dose-volume parameters with Grade ≥2 RILI was analyzed. The median follow-up was 12.3 months; 18 (21.7%) cases of Grade 2 RILI, seven (8.4%) of Grade 3 and two (2.4%) of Grade 4 were observed. Univariate analysis revealed that the lobe in which the primary tumor was located; V5, V10, V20 and MLD of the ipsilateral lung; V5, V10, V20, V30 and MLD of the bilateral lung; and AVS5 and AVS10 of the ipsilateral lung were associated with Grade ≥2 RILI (P < 0.05). Multivariate analysis indicated AVS5 of the ipsilateral lung was prognostic for Grade ≥2 RILI (P = 0.010, OR = 0.272, 95% CI: 0.102-0.729). Receiver operating characteristic curves indicated Grade ≥2 RILI could be predicted using AVS5 of the ipsilateral lung (area under curve, 0.668; cutoff value, 564.9 cm³; sensitivity, 60.7%; specificity, 70.4%). The incidence of Grade ≥2 RILI was significantly lower with AVS5 of the ipsilateral lung ≥564.9 cm³ than with AVS5 < 564.9 cm³ (P = 0.008). Low-dose irradiation relative volumes and MLD of the bilateral or ipsilateral lung were associated with Grade ≥2 RILI, and AVS5 of the ipsilateral lung was prognostic for Grade ≥2 RILI for lung cancer after IMRT.
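Several of these abstracts rely on dosimetric summaries of the lung dose distribution: relative volumes such as V5-V30, the mean lung dose (MLD), and absolute volumes spared such as AVS5. A minimal sketch of how these quantities can be derived from per-voxel lung doses is given below; the voxel size and the synthetic dose values are assumptions made purely for illustration.

```python
# Minimal sketch (illustrative assumptions, not code from the study): deriving the
# dosimetric predictors quoted in these abstracts (relative volumes V5-V30, mean
# lung dose MLD, and the absolute volume spared from >5 Gy, AVS5) from per-voxel
# lung doses. The voxel size and the synthetic dose values are made up.
import numpy as np

voxel_volume_cc = 0.12   # cc per voxel (assumed)
lung_dose = np.random.default_rng(1).gamma(shape=2.0, scale=6.0, size=50_000)  # Gy, synthetic

total_volume_cc = lung_dose.size * voxel_volume_cc
mld = lung_dose.mean()   # mean lung dose in Gy

def v_rel(threshold_gy):
    """Relative lung volume receiving more than `threshold_gy` (percent)."""
    return 100.0 * np.mean(lung_dose > threshold_gy)

def avs(threshold_gy):
    """Absolute lung volume spared from doses above `threshold_gy` (cc)."""
    return voxel_volume_cc * np.sum(lung_dose <= threshold_gy)

print(f"MLD  = {mld:.1f} Gy")
for d in (5, 10, 20, 30):
    print(f"V{d}  = {v_rel(d):.1f} %")
print(f"AVS5 = {avs(5):.0f} cc of {total_volume_cc:.0f} cc total")
```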
abstract_id: PUBMED:26295744
Extracting the normal lung dose-response curve from clinical DVH data: a possible role for low dose hyper-radiosensitivity, increased radioresistance. In conventionally fractionated radiation therapy for lung cancer, the dependence of radiation pneumonitis (RP) on the normal lung dose-volume histogram (DVH) is not well understood. Complication models instead make RP a function of a summary statistic, such as mean lung dose (MLD). This work searches over damage profiles, which quantify sub-volume damage as a function of dose. Profiles that achieve the best RP predictive accuracy on a clinical dataset are hypothesized to approximate the DVH dependence. Step function damage rate profiles R(D) are generated, having discrete steps at several dose points. A range of profiles is sampled by varying the step heights and dose point locations. Normal lung damage is the integral of R(D) with the cumulative DVH. Each profile is used in conjunction with a damage cutoff to predict grade 2 plus (G2+) RP for DVHs from a University of Michigan clinical trial dataset consisting of 89 CFRT patients, of which 17 were diagnosed with G2+ RP. Optimal profiles achieve a modest increase in predictive accuracy: erroneous RP predictions are reduced from 11 (using MLD) to 8. A novel result is that optimal profiles have a similar distinctive shape: enhanced damage contribution from low doses (<20 Gy), a flat contribution from doses in the range ~20-40 Gy, then a further enhanced contribution from doses above 40 Gy. These features resemble the hyper-radiosensitivity / increased radioresistance (HRS/IRR) observed in some cell survival curves, which can be modeled using Joiner's induced repair model. A novel search strategy is employed, which has the potential to estimate the dependence of RP on the normal lung DVH. When applied to a clinical dataset, the identified profiles share a characteristic shape, which resembles HRS/IRR. This suggests that normal lung may have enhanced sensitivity to low doses, and that this sensitivity can affect RP risk.
abstract_id: PUBMED:31727550
Improving TBI lung dose calculations: Can the treatment planning system help? Lung toxicity is a serious concern during total body irradiation (TBI). Therefore, evaluation of accurate dose calculation when using lung blocks is of utmost importance. Existing clinical treatment planning systems can perform the calculation, but there are large inaccuracies when calculating volumetric dose at extended distances in the presence of high atomic number materials. Percent depth dose and absolute dose measurements acquired at 400 cm SSD with a cerrobend block were compared with calculated values from the Eclipse treatment planning system using AAA and Acuros. The block was simulated in 2 ways: (1) manually drawing a contour to mimic the block and (2) creating a virtual block in the accessory tray. Although the relative dose distribution was accurately calculated, larger deviations of around 50% and 40% were observed between measured depth dose and absolute dose with AAA and Acuros, respectively. Deviations were reduced by optimizing the relative electron density in the contoured block or the transmission factor in the virtual block.
Answer: The mean lung dose (MLD) is widely used in clinical practice to estimate lung toxicity of a treatment plan. However, its validity as a predictive criterion for lung damage is nuanced and depends on various factors.
A study involving mini pigs irradiated with different regimens found that while there was a clear dose-effect relationship, a straightforward relationship between MLD and the relative number of responders with different grades of increased breathing frequency was not found. Instead, a morphologically based parameter, NTCPlung, was more suitable for predicting severe side effects of radiation-induced pneumopathy. The study concluded that MLD can predict lung toxicity except for dose values that could lead to severe side effects, where its predictive value is limited (PUBMED:25865281).
Another study in lung cancer patients treated with intensity-modulated radiation therapy and chemotherapy found that MLD, along with the baseline neutrophil to lymphocyte ratio (NLR) and volume at least received 20Gy (V20), were independent predictors for radiation-induced lung injury (RILI). The combination of these factors provided a more accurate model for RILI prediction than any single variable alone (PUBMED:29113398).
Research on rats showed that the volume of lung irradiated might be the dominant factor influencing the loss of pulmonary function, rather than MLD, suggesting that caution should be used with radiotherapy techniques that result in irradiation of large volumes of normal tissue (PUBMED:18439765).
A study that incorporated an improved dose-calculation algorithm in conformal radiotherapy of lung cancer found that a simple tissue inhomogeneity-correction algorithm overestimates the dose to normal lung tissue, and that dosimetric parameters for lung injury computed using both simple and more accurate algorithms are strongly correlated, allowing for easy conversion (PUBMED:14597351).
In summary, while MLD is a commonly used parameter to predict lung toxicity, its predictive accuracy can be improved when combined with other factors such as NLR and V20. Additionally, the volume of lung irradiated and the accuracy of dose calculation algorithms can significantly influence the prediction of lung damage, suggesting that MLD should not be the sole criterion for predicting lung toxicity (PUBMED:25865281, PUBMED:29113398, PUBMED:18439765, PUBMED:14597351). |
Instruction: Are pediatric quality care measures too stringent?
Abstracts:
abstract_id: PUBMED:25169454
Developing measures for pediatric quality: methods and experiences of the CHIPRA pediatric quality measures program grantees. Background: Monitoring quality is an important way of understanding how the health care system is serving children and families. The Children's Health Insurance Program Reauthorization Act of 2009 (CHIPRA) Pediatric Quality Measures Program (PQMP) funded efforts to develop and enhance measures to assess care for children and adolescents. We describe the processes used by the PQMP grantees to develop measures to assess the health care of children and adolescents in Medicaid and the Children's Health Insurance Program.
Methods: Key steps in the measures development process include identifying concepts, reviewing and synthesizing evidence, prioritizing concepts, defining how measures should be calculated, and measure testing. Stakeholder engagement throughout the process is critical. Case studies illustrate how PQMP grantees adapted the process to respond to the nature of measures they were charged to develop and overcome challenges encountered.
Results: PQMP grantees used varied approaches to measures development but faced common challenges, some specific to the field of pediatrics and some general to all quality measures. Major challenges included the limited evidence base, data systems difficult or unsuited for measures reporting, and conflicting stakeholder priorities.
Conclusions: As part of the PQMP, grantees were able to explore innovative methods to overcome measurement challenges, including new approaches to building the evidence base and stakeholder consensus, integration of alternative data sources, and implementation of new testing methods. As a result, the PQMP has developed new quality measures for pediatric care while also building an infrastructure, expertise, and enhanced methods for measures development that promise to provide more relevant and meaningful tools for improving the quality of children's health care.
abstract_id: PUBMED:27017034
Quality Care and Patient Safety in the Pediatric Emergency Department. Over the past 15 years, with alarming and illustrative reports released from the Institute of Medicine, quality improvement and patient safety have come to the forefront of medical care. This article reviews quality improvement frameworks and methodology and the use of evidence-based guidelines for pediatric emergency medicine. Top performance measures in pediatric emergency care are described, with examples of ongoing process and quality improvement work in our pediatric emergency department.
abstract_id: PUBMED:31047093
Pediatric Quality Metrics Related to Quality and Cost. Attention to pediatric quality in health care has grown in the past decade but continues to evolve. Children's health care emphasizes the maintenance of health and the prevention of illness, which can be measured through immunization rates, routine or scheduled well care, and early intervention. Pediatric quality measures and indicators have become the basis for payment of services and for tying payment to value. Designing processes such as pay-for-performance models, volume-based care, and coordination of care helps ensure that children receive high-quality health care.
abstract_id: PUBMED:22956704
Are pediatric quality care measures too stringent? Introduction: We aimed to demonstrate the application of national pediatric quality measures, derived from claims-based data, for use with electronic medical record data, and determine the extent to which rates differ if specifications were modified to allow for flexibility in measuring receipt of care.
Methods: We reviewed electronic medical record data for all patients up to 15 years of age with ≥1 office visit to a safety net family medicine clinic in 2010 (n = 1544). We assessed rates of appropriate well-child visits, immunizations, and body mass index (BMI) documentation, defined strictly by national guidelines versus by guidelines with clinically relevant modifications.
Results: Among children aged <3 years, 52.4% attended ≥6 well-child visits by the age of 15 months; 60.8% had ≥6 visits by age 2 years. Less than 10% completed 10 vaccination series before their second birthday; with modifications, 36% were up to date. Among children aged 3 to 15 years, 63% had a BMI percentile recorded; 91% had BMI recorded within 36 months of the measurement year.
Conclusions: Applying relevant modifications to national quality measure definitions captured a substantial number of additional services. Strict adherence to measure definitions might miss the true quality of care provided, especially among populations that may have sporadic patterns of care utilization.
abstract_id: PUBMED:25684132
Developing and testing pediatric oral healthcare quality measures. Objective: This study describes processes used to develop and test pediatric oral healthcare quality measures and provides recommendations for implementation.
Methods: At the request of the Centers for Medicare and Medicaid Services, the Dental Quality Alliance (DQA) was formed in 2008 as a multi-stakeholder group to develop oral healthcare quality measures. For its initial focus on pediatric care, measure development processes included a literature review and environmental scan to identify relevant measure concepts, which were rated on importance, feasibility, and validity using the RAND/UCLA modified Delphi approach. These measure concepts and a gap assessment led to the development of a proposed set of measures that were tested for feasibility, reliability, and validity.
Results: Of 112 measure concepts identified, 59 met inclusion criteria to undergo formal rating. Twenty-one of 59 measure concepts were rated as "high scoring." Subsequently, 11 quality and related care delivery measures comprising a proposed pediatric starter set were developed and tested; 10 measures met feasibility, reliability, and validity criteria and were approved by the DQA stakeholder membership. These measures are currently being incorporated into Medicaid, Children's Health Insurance Program, and commercial quality improvement programs.
Conclusions: Broad stakeholder engagement, rigorous measure development and testing processes, and regular opportunities for public input contributed to the development and validation of the first set of fully specified and tested pediatric oral healthcare quality measures, which have high feasibility for implementation in both public and private sectors. This achievement marks an important essential step toward improving oral healthcare and oral health outcomes for children.
abstract_id: PUBMED:37772707
Quality measures for the care of pediatric patients with obstructive sleep apnea: 2023 update after measure maintenance. Obstructive sleep apnea (OSA) is the most common respiratory sleep disorder in the United States in preschool and school-aged children. In an effort to continue addressing gaps and variations in care in this patient population, the American Academy of Sleep Medicine (AASM) Quality Measures Task Force performed quality measure maintenance on the Quality Measures for the Care of Pediatric Patients with Obstructive Sleep Apnea (originally developed in 2015). The Quality Measures Task Force reviewed the current medical literature, including updated clinical practice guidelines and systematic literature reviews, existing pediatric OSA quality measures, and performance data highlighting remaining gaps or variations in care since implementation of the original quality measure set to inform any potential revisions to the quality measures. These revised quality measures have been implemented in the AASM Sleep Clinical Data Registry (Sleep CDR) to capture performance data and encourage continuous quality improvement, specifically in outcomes associated with diagnosing and managing OSA in the pediatric population.
Citation: Lloyd RM, Crawford T, Donald R, et al. Quality measures for the care of pediatric patients with obstructive sleep apnea: 2023 update after measure maintenance. J Clin Sleep Med. 2024;20(1):127-134.
abstract_id: PUBMED:30460185
Pediatric palliative care in the intensive care unit and questions of quality: a review of the determinants and mechanisms of high-quality palliative care in the pediatric intensive care unit (PICU). This article reviews the state and practice of pediatric palliative care (PC) within the pediatric intensive care unit (PICU) with specific consideration of quality issues. This includes defining PC and end of life (EOL) care. We will also describe PC as it pertains to alleviating children's suffering through the provision of "concurrent care" in the ICU environment. Modes of care, and attendant strengths, of both the consultant and integrated models will be presented. We will review salient issues related to the provision of PC in the PICU, barriers to optimal practice, parental, and staff perceptions. Opportunity areas for quality improvement and the role of initiatives and measures such as education, family-based initiatives, staff needs, symptom recognition, grief, and communication follow. To conclude, we will look to the literature for PC resources for pediatric intensivists and future directions of study.
abstract_id: PUBMED:17496830
Determining pediatric intensive care unit quality indicators for measuring pediatric intensive care unit safety. Introduction: The measurement of quality and patient safety continues to gain increasing importance, as these measures are used for both healthcare improvement and accountability. Pediatric care, particularly that provided in pediatric intensive care units, is sufficiently different from adult care that specific metrics are required. BODY: Pediatric critical care requires specific measures for both quality and safety. Factors that may affect measures are identified, including data sources, risk adjustment, intended use, reliability, validity, and the usability of measures. The 18-month process to develop seven pediatric critical care measures proposed for national use is described. Specific patient safety metrics that can be applied to pediatric intensive care units include error-, injury-, and risk-based approaches.
Conclusion: Measurement of pediatric critical care quality and safety will likely continue to evolve. Opportunities exist for intensivists to contribute and lead in the development and refinement of measures.
abstract_id: PUBMED:28536768
Quality measures and pediatric radiology: suggestions for the transition to value-based payment. Recent political and economic factors have contributed to a meaningful change in the way that quality in health care, and by extension value, are viewed. While quality is often evaluated on the basis of subjective criteria, pay-for-performance programs that link reimbursement to various measures of quality require use of objective and quantifiable measures. This evolution to value-based payment was accelerated by the 2015 passage of the Medicare Access and CHIP (Children's Health Insurance Program) Reauthorization Act (MACRA). While many of the drivers of these changes are rooted in federal policy and programs such as Medicare and aimed at adult patients, the practice of pediatrics and pediatric radiology will be increasingly impacted. This article addresses issues related to the use of quantitative measures to evaluate the quality of services provided by the pediatric radiology department or sub-specialty section, particularly as seen from the viewpoint of a payer that may be considering ways to link payment to performance. The paper concludes by suggesting a metric categorization strategy to frame future work on the subject.
abstract_id: PUBMED:7984393
Outcome evaluations as measures of quality in pediatric intensive care. In recent years, interest in assessing quality of care has blossomed. Quality care may be defined as providing the most appropriate treatment and providing it with great technical and managerial skill and proficiency in a manner that gains patient acceptance. For assessment purposes, variation in risk-adjusted outcomes between providers should be attributable to quality of care differences. Some methods for measuring outcomes and risk-adjustment for pediatric intensive care populations have been developed, but additional tools are needed for applications in outcomes management, continuous quality improvement, and outcomes research.
Answer: The question of whether pediatric quality care measures are too stringent is addressed in the literature with varying perspectives. One study aimed to demonstrate the application of national pediatric quality measures, derived from claims-based data, for use with electronic medical record data. It found that when applying relevant modifications to national quality measure definitions, a substantial number of additional services were captured. Strict adherence to measure definitions might miss the true quality of care provided, especially among populations that may have sporadic patterns of care utilization (PUBMED:22956704). This suggests that some flexibility in the application of these measures may be necessary to accurately reflect the quality of care.
On the other hand, the literature also highlights the importance of rigorous measure development and testing processes, as well as broad stakeholder engagement, which contribute to the development and validation of pediatric quality measures. For instance, the Dental Quality Alliance developed and tested pediatric oral healthcare quality measures, which were found to be highly feasible for implementation in both public and private sectors (PUBMED:25684132). This indicates that while some measures may be stringent, they are also carefully constructed and validated to ensure relevance and effectiveness.
Moreover, the Pediatric Quality Measures Program (PQMP) grantees faced challenges such as limited evidence bases and data systems unsuited for measures reporting, but they explored innovative methods to overcome these challenges, including new approaches to building evidence and stakeholder consensus (PUBMED:25169454). This suggests that while measures may be stringent, there is also a concerted effort to adapt and refine them to better serve pediatric healthcare needs.
In conclusion, while there may be concerns about the stringency of pediatric quality care measures, the literature indicates that there is ongoing work to ensure these measures are both rigorous and adaptable to the realities of pediatric healthcare delivery. |
Instruction: Are pediatric Open Access journals promoting good publication practice?
Abstracts:
abstract_id: PUBMED:21477335
Are pediatric Open Access journals promoting good publication practice? An analysis of author instructions. Background: Several studies analyzed whether conventional journals in general medicine or specialties such as pediatrics endorse recommendations aiming to improve publication practice. Despite evidence showing benefits of these recommendations, the proportion of endorsing journals has been moderate to low and varied considerably for different recommendations. About half of pediatric journals indexed in the Journal Citation Report referred to the Uniform Requirements for Manuscripts of the International Committee of Medical Journal Editors (ICMJE) but only about a quarter recommended registration of trials. We aimed to investigate to what extent pediatric open-access (OA) journals endorse these recommendations. We hypothesized that a high proportion of these journals have adopted recommendations on good publication practice since OA electronic publishing has been associated with a number of editorial innovations aiming at improved access and transparency.
Methods: We identified 41 journals publishing original research in the subject category "Health Sciences, Medicine (General), Pediatrics" of the Directory of Open Access Journals http://www.doaj.org. From the journals' online author instructions we extracted information regarding endorsement of four domains of editorial policy: the Uniform Requirements for Manuscripts, trial registration, disclosure of conflicts of interest and five major reporting guidelines such as the CONSORT (Consolidated Standards of Reporting Trials) statement. Two investigators collected data independently.
Results: The Uniform Requirements were mentioned by 27 (66%) pediatric OA journals. Thirteen (32%) required or recommended trial registration prior to publication of a trial report. Conflict of interest policies were stated by 25 journals (61%). Advice about reporting guidelines was less frequent: CONSORT was referred to by 12 journals (29%) followed by other reporting guidelines (MOOSE, PRISMA or STARD) (8 journals, 20%) and STROBE (3 journals, 7%). The EQUATOR network, a platform of several guideline initiatives, was acknowledged by 4 journals (10%). Journals published by OA publishing houses gave more guidance than journals published by professional societies or other publishers.
Conclusions: Pediatric OA journals mentioned certain recommendations such as the Uniform Requirements or trial registration more frequently than conventional journals; however, endorsement is still only moderate. Further research should confirm these exploratory findings in other medical fields and should clarify what the motivations and barriers are in implementing such policies.
abstract_id: PUBMED:29516389
Retracted Publications in the Biomedical Literature from Open Access Journals. The number of articles published in open access journals (OAJs) has increased dramatically in recent years. Simultaneously, the quality of publications in these journals has been called into question. Few studies have explored the retraction rate from OAJs. The purpose of the current study was to determine the reasons for retractions of articles from OAJs in biomedical research. The Medline database was searched through PubMed to identify retracted publications in OAJs. The journals were identified by the Directory of Open Access Journals. Data were extracted from each retracted article, including the time from publication to retraction, causes, journal impact factor, and country of origin. Trends in the characteristics related to retraction were determined. Data from 621 retracted studies were included in the analysis. The number and rate of retractions have increased since 2010. The most common reasons for retraction are errors (148), plagiarism (142), duplicate publication (101), fraud/suspected fraud (98) and invalid peer review (93). The number of retracted articles from OAJs has been steadily increasing. Misconduct was the primary reason for retraction. The majority of retracted articles were from journals with low impact factors and authored by researchers from China, India, Iran, and the USA.
abstract_id: PUBMED:29040536
Due diligence in the open-access explosion era: choosing a reputable journal for publication. Faculty are required to publish. Naïve and "in-a-hurry-to-publish" authors seek to publish in journals where manuscripts are rapidly accepted. Others may innocently submit to one of an increasing number of questionable/predatory journals, where predatory is defined as practices of publishing journals for exploitation of author-pays, open-access publication model by charging authors publication fees for publisher profit without provision of expected services (expert peer review, editing, archiving, and indexing published manuscripts) and promising almost instant publication. Authors may intentionally submit manuscripts to predatory journals for rapid publication without concern for journal quality. A brief summary of the open access "movement," suggestions for selecting reputable open access journals, and suggestion for avoiding predatory publishers/journals are described. The purpose is to alert junior and seasoned faculty about predatory publishers included among available open access journal listings. Brief review of open access publication, predatory/questionable journal characteristics, suggestions for selecting reputable open access journals and avoiding predatory publishers/journals are described. Time is required for intentionally performing due diligence in open access journal selection, based on publisher/journal quality, prior to manuscript submission or authors must be able to successfully withdraw manuscripts when submission to a questionable or predatory journal is discovered.
abstract_id: PUBMED:35425878
The Upsurge of Impact Factors in Pediatric Journals Post COVID-19 Outbreak: A Cross-Sectional Study. Background: Impact factor (IF) is a quantitative tool designed to evaluate scientific journals' excellence. There was an unprecedented upsurge in biomedical journals' IF in 2020, perhaps contributed by the increased number of publications since the COVID-19 outbreak. We conducted a cross-sectional study (2018-2020) to analyze recent trends in standard bibliometrics (IF, Eigenfactor, SNIP) of pediatric journals. We also estimated reference and publication counts of biomedical journals since publication volume determines the number of citations offered and IF.
Methods: Various bibliometrics of pediatric journals and reference/publication volumes of biomedical journals were compared between 2020 vs. 2019 and 2019 vs. 2018. We also compared open access (OA) and subscription journals' trends. Finally, we estimated IF changes in the journals of a different specialty, pulmonology.
Results: The study included 164 pediatric and 4,918 biomedical journals (OA = 1,473, subscription = 3,445). Pediatric journals' IFs had increased significantly in 2020 [median (IQR) = 2.35 (1.34)] vs. 2019 [1.82 (1.22)] (Wilcoxon: p-value < 0.001). IFs were unchanged between 2018 and 2019. Eigenfactor remained stable between 2018 and 2020, while SNIP increased progressively. Reference/publication volumes of biomedical journals escalated between 2018 and 2020, and OA journals experienced faster growth than subscription journals. IFs of pulmonary journals also increased considerably in 2020 vs. 2019.
Conclusions: We report an upsurge in pediatric journals' IF, perhaps contributed by a sudden increase in publication numbers in 2020. Therefore, considering this limitation, IF should be cautiously used as the benchmark of excellence. Unlike IF, Eigenfactor remained stable between 2018 and 2020. Similar changes in IF were also observed among the journals of another specialty, pulmonology.
abstract_id: PUBMED:31293108
Open Access Publishing in India: Coverage, Relevance, and Future Perspectives. Open access (OA) publishing is a recent phenomenon in scientific publishing, enabling free access to knowledge worldwide. In the Indian context, OA to science has been facilitated by government-funded repositories of student and doctoral theses, and many Indian society journals are published with platinum OA. The proportion of OA publications from India is significant in a global context, and Indian journals are increasingly available on OA repositories such as Pubmed Central, and Directory of Open Access Journals. However, OA in India faces numerous challenges, including low-quality or predatory OA journals, and the paucity of funds to afford gold OA publication charges. There is a need to increase awareness amongst Indian academics regarding publication practices, including OA, and its potential benefits, and utilize this modality of publication whenever feasible, as in publicly-funded research, or when platinum OA is available, while avoiding falling prey to poor quality OA journals.
abstract_id: PUBMED:31163966
Compliance with ethical rules for scientific publishing in biomedical Open Access journals indexed in Journal Citation Reports. This study examined compliance with the criteria of transparency and best practice in scholarly publishing defined by COPE, DOAJ, OASPA and WAME in Biomedical Open Access journals indexed in Journal Citation Reports (JCR). 259 Open Access journals were drawn from the JCR database and on the basis of their websites their compliance with 14 criteria for transparency and best practice in scholarly publishing was verified. Journals received penalty points for each unfulfilled criterion when they failed to comply with the criteria defined by COPE, DOAJ, OASPA and WAME. The average number of obtained penalty points was 6, where 149 (57.5%) journals received 6 points and 110 (42.5%) journals 7 points. Only 4 journals met all criteria and did not receive any penalty points. Most of the journals did not comply with the criteria declaration of Creative Commons license (164 journals), affiliation of editorial board members (116), unambiguity of article processing charges (115), anti-plagiarism policy (113) and the number of editorial board members from developing countries (99). The research shows that JCR cannot be used as a whitelist of journals that comply with the criteria of transparency and best practice in scholarly publishing.
abstract_id: PUBMED:38434260
Do orthopaedics surgeons have any idea what predatory journals are? (cross-sectional study). Objective: The legitimacy of published research confronts a real challenge posed by predatory journals. These journals not only distribute inadequately written articles but also undermine the prospects of acknowledgment and citation for high-quality content. It is essential, nevertheless, to differentiate between predatory journals and reputable open-access ones. A worldwide anti-predatory movement seeks to enhance awareness about such journals. Hence, our objective was to assess the awareness, attitudes, and practices of Sudanese orthopedic surgeons concerning both predatory and open-access publishing.
Methods: Conducted between January and April 2023, this cross-sectional electronic survey involved Sudanese orthopedic surgeons. The survey, comprising five domains to gauge knowledge, attitudes, and practices related to predatory and open-access publishing, was shared via the Sudanese Orthopedic Surgeons Association email distribution list among the 561 registered surgeons. The targeted sample size was 286. Categorical variables were reported using frequencies, while continuous variables were presented as medians and interquartile ranges. Nonparametric tests and ordinal regression were employed for inferential statistics.
Results: Of the 561 surgeons, 104 participants completed the questionnaire, resulting in a response rate of 18.5 %. Approximately 49% exhibited poor knowledge, with 56% unfamiliar with the term "predatory journals," and 74% unaware of Beall's list. Overall attitudes toward publication in open-access and predatory journals were neutral for 60% of participants, and only 26% demonstrated good overall publication practices. Higher knowledge scores positively correlated with attitude and practice scores. Ordinal regression analysis identified variables such as employment in university hospitals, higher academic rank, publication experience, and working in well-resourced countries as factors increasing the likelihood of higher knowledge, attitude, and practice scores.
Conclusion: The majority of the study participants reported very low knowledge of predatory journals and their possible detrimental consequences on the integrity and quality of scientific publications. Therefore, educational efforts on the negative impact of predatory publication practices in orthopedics are needed.
abstract_id: PUBMED:28433651
The surge of predatory open-access in neurosciences and neurology. Predatory open access is a controversial publishing business model that exploits the open-access system by charging publication fees in the absence of transparent editorial services. The credibility of academic publishing is now seriously threatened by predatory journals, whose articles are accorded real citations and thus contaminate the genuine scientific records of legitimate journals. This is of particular concern for public health since clinical practice relies on the findings generated by scholarly articles. Aim of this study was to compile a list of predatory journals targeting the neurosciences and neurology disciplines and to analyze the magnitude and geographical distribution of the phenomenon in these fields. Eighty-seven predatory journals operate in neurosciences and 101 in neurology, for a total of 2404 and 3134 articles issued, respectively. Publication fees range 521-637 USD, much less than those charged by genuine open-access journals. The country of origin of 26.0-37.0% of the publishers was impossible to determine due to poor websites or provision of vague or non-credible locations. Of the rest 35.3-42.0% reported their headquarters in the USA, 19.0-39.2% in India, 3.0-9.8% in other countries. Although calling themselves "open-access", none of the journals retrieved was listed in the Directory of Open Access Journals. However, 14.9-24.7% of them were found to be indexed in PubMed and PubMed Central, which raises concerns on the criteria for inclusion of journals and publishers imposed by these popular databases. Scholars in the neurosciences are advised to use all the available tools to recognize predatory practices and avoid the downsides of predatory journals.
abstract_id: PUBMED:29047152
False gold: Safely navigating open access publishing to avoid predatory publishers and journals. Aim: The aim of this study was to review and discuss predatory open access publishing in the context of nursing and midwifery and develop a set of guidelines that serve as a framework to help clinicians, educators and researchers avoid predatory publishers.
Background: Open access publishing is increasingly common across all academic disciplines. However, this publishing model is vulnerable to exploitation by predatory publishers, posing a threat to nursing and midwifery scholarship and practice. Guidelines are needed to help researchers recognize predatory journals and publishers and understand the negative consequences of publishing in them.
Design: Discussion paper.
Data Sources: A literature search of BioMed Central, CINAHL, MEDLINE with Full Text and PubMed for terms related to predatory publishing, published in the period 2007-2017.
Implications For Nursing: Lack of awareness of the risks and pressure to publish in international journals, may result in nursing and midwifery researchers publishing their work in dubious open access journals. Caution should be taken prior to writing and submitting a paper, to avoid predatory publishers.
Findings: The advantage of open access publishing is that it provides readers with access to peer-reviewed research as soon as it is published online. However, predatory publishers use deceptive methods to exploit open access publishing for their own profit. Clear guidelines are needed to help researchers navigate safely open access publishing.
Conclusion: A deeper understanding of the risks of predatory publishing is needed. Clear guidelines should be followed by nursing and midwifery researchers seeking to publish their work in open access journals.
abstract_id: PUBMED:34048953
Evaluation of Open-Access Journals in Obstetrics and Gynaecology. A retrospective observational study was conducted to evaluate open-access journals in obstetrics and gynaecology, published between 2011 and 2019. Journals were classified based on their registration in open-access journal directories. Of 176 journals, 47 were not registered. Journals registered in the Directory of Open Access Journals (DOAJ) demonstrated good overall quality, and their journal metrics were significantly higher than those of non-registered journals or journals registered in other directories. The lack of editor names and indexing information on a journal's website are the most distinctive features of non-registered journals. Non-registration in an open-access journal directory indicates a lack of transparency and may ultimately indicate that a journal is predatory.
Answer: Pediatric Open Access (OA) journals promote good publication practice only to a moderate extent, and there is still room for improvement. An analysis of author instructions from pediatric OA journals found that 66% mentioned the Uniform Requirements for Manuscripts, a positive indicator of endorsement of good publication practices. However, only 32% required or recommended trial registration prior to publication of a trial report, and less than a third referred to major reporting guidelines such as CONSORT (29%). Journals published by OA publishing houses provided more guidance than those published by professional societies or other publishers (PUBMED:21477335).
Despite these efforts, the quality of publications in OA journals has been questioned, with a study showing an increase in the number of retracted articles from OA journals in biomedical research. The most common reasons for retraction were errors, plagiarism, duplicate publication, fraud/suspected fraud, and invalid peer review, indicating issues with the publication practices of some OA journals (PUBMED:29516389).
Moreover, there is a need for due diligence in the era of the open-access explosion to choose reputable journals for publication. The existence of predatory journals, which exploit the author-pays model without providing proper editorial services, poses a challenge to the legitimacy of published research. Authors are advised to be cautious and perform due diligence in selecting reputable OA journals to avoid predatory publishers (PUBMED:29040536).
In the context of pediatric journals, impact factors rose significantly after the COVID-19 outbreak, a change that may have been driven by the surge in publication numbers; OA journals in particular showed faster growth in publication volume than subscription journals. This suggests that the impact factor should be used cautiously as a benchmark of excellence (PUBMED:35425878).
Overall, while pediatric OA journals are making efforts to promote good publication practice, the presence of predatory journals and the need for greater adherence to reporting guidelines and ethical standards indicate that there is still significant progress to be made in ensuring good publication practices across all OA journals in pediatrics (PUBMED:31293108, PUBMED:31163966).
Instruction: Is the serum concentration of pentosidine a predictor of cardiovascular events in patients with type 2 diabetes and kidney disease?
Abstracts:
abstract_id: PUBMED:17726651
Is the serum concentration of pentosidine a predictor of cardiovascular events in patients with type 2 diabetes and kidney disease? Background And Objective: Advanced glycation end-products (AGEs) are implicated in the pathogenesis of vascular damage of atherosclerosis, especially in diabetes and renal failure. Pentosidine, an AGE, is generated by glycation and oxidation reactions (glycoxidation product).
Methods: 218 patients at high risk of cardiovascular events (from the "Irbesartan in Diabetic Nephropathy Trial" [IDNT] cohort) with type 2 diabetes and nephropathy (mean age 61 +/- 6.4 years, 68 female, 150 male) were followed for a mean of 2.2 years. The mean GFR at baseline was 47.9 +/- 16.0 ml/min (MDRD formula). Serum levels of pentosidine were measured by high-performance liquid chromatography. The relationship between pentosidine, traditional risk factors and cardiovascular events (CVE) was tested in Cox proportional hazards models.
Results: The mean serum level of pentosidine at baseline was 148 +/- 113 pmol/ml, that of hemoglobin A1c (HbA1c) 8.6 +/- 1.7%. During follow-up, 93 CVE occurred; a total of 50 patients died, 39 of cardiovascular causes. Final multivariate analysis showed low density lipoprotein (LDL) and duration of diabetes to be independent risk factors for a first cardiovascular event (including death from cardiovascular causes) (relative risk [RR] for the highest quartile compared with the lowest: LDL 3.041, confidence interval 1.616 - 5.724, P = 0.001; duration of diabetes: RR 2.629, CI 1.279 - 6.151, P = 0.011).
Conclusion: The serum level of pentosidine was not an independent risk factor for cardiovascular outcomes in the selected cohort. This suggests that traditional risk factors may play a more important role in causing cardiovascular events and that serum levels of AGEs are of low predictive value. Further investigations are necessary to assess whether tissue levels of AGEs are of greater relevance.
abstract_id: PUBMED:25987260
Serum Bicarbonate and Kidney Disease Progression and Cardiovascular Outcome in Patients With Diabetic Nephropathy: A Post Hoc Analysis of the RENAAL (Reduction of End Points in Non-Insulin-Dependent Diabetes With the Angiotensin II Antagonist Losartan) Study and IDNT (Irbesartan Diabetic Nephropathy Trial). Background: Low serum bicarbonate level has been reported to be an independent predictor of kidney function decline and mortality in patients with chronic kidney disease. Mechanisms underlying low serum bicarbonate levels may differ in patients with and without diabetes. We aimed to specifically investigate the association of serum bicarbonate level with kidney disease progression and cardiovascular outcome in a cohort of patients with type 2 diabetes and nephropathy.
Study Design: Post hoc analysis of 2 multicenter randomized controlled trials.
Setting & Participants: 2,628 adults with type 2 diabetes and nephropathy.
Factor: Serum bicarbonate level.
Outcomes: Incidence of: (1) end-stage renal disease (ESRD), (2) ESRD or doubling of serum creatinine level, (3) all-cause mortality, (4) cardiovascular events (fatal/nonfatal stroke/myocardial infarction), and (5) heart failure.
Measurements: Serum bicarbonate was measured at baseline as total carbon dioxide. Associations of baseline serum bicarbonate level with end points were investigated using Cox regression models. Serum bicarbonate levels were studied as a continuous variable and stratified in quartiles. Follow-up was 2.8±1.0 (SD) years.
Results: Cox regression analyses showed that serum bicarbonate level had inverse associations with incident ESRD (HR, 0.91; 95% CI, 0.89-0.93; P<0.001) and incidence of the combined end point of ESRD or serum creatinine doubling (HR, 0.94; 95% CI, 0.92-0.96; P<0.001). These associations were independent of age, sex, and cardiovascular risk factors, but disappeared after adjustment for baseline estimated glomerular filtration rate (all P>0.05). Analysis of bicarbonate quartiles showed similar results for the quartile with the lowest bicarbonate (≤21 mEq/L) versus the quartile with normal bicarbonate levels (24-26 mEq/L). There was no association of bicarbonate level with cardiovascular events and heart failure.
Limitations: Post hoc analysis and single measurement of serum bicarbonate.
Conclusions: In this cohort of patients with type 2 diabetes with nephropathy, serum bicarbonate level associations with kidney disease end points were not retained after adjustment for estimated glomerular filtration rate, which is in contrast to results of earlier studies in nondiabetic populations.
abstract_id: PUBMED:31484628
Resistin as a predictor of cardiovascular hospital admissions and renal deterioration in diabetic patients with chronic kidney disease. Background: High resistin levels have been associated with cardiovascular disease (CVD). Cardiovascular hospitalizations are common, especially in diabetic and renal impaired patients. The purpose of this study is to determine the role of serum resistin as a predictor of cardiovascular hospitalizations in type 2 diabetic patients with mild to moderate chronic kidney disease (CKD).
Methods: We conducted a prospective, observational study. 78 diabetic patients with mild to moderate CKD and no previous CVD were included. The population was divided in two groups: G-1 with cardiovascular related admission (n = 13) and G-2 without cardiovascular related admission (n = 65). A Student's t-test was conducted to determine correlations between laboratory findings and hospitalization. We used logistic regression to assess predictors of cardiovascular events requiring hospitalization and Cox regression to identify predictors of end-stage renal disease (ESRD).
Results: eGFR, albumin, HbA1c, phosphorus, PTH, IR, CRP, resistin and active vitamin D were related to cardiovascular admissions. In a multivariate regression model, resistin (OR = 2.074, p = 0.047) was an independent predictor of cardiovascular hospitalization. Cox regression showed that resistin (HR = 1.931, p = 0.031) and UACr (HR = 1.151, p = 0.048) were also independent predictors of renal disease progression.
Conclusion: Resistin demonstrated to be valuable in predicting hospital admissions and progression to ESRD.
abstract_id: PUBMED:26124640
The impact of lipocalin-type-prostaglandin-D-synthase as a predictor of kidney disease in patients with type 2 diabetes. Hypertension and diabetes are clinical conditions that contribute to the development of chronic kidney disease and are also risk factors for cardiovascular events. In recent years, lipocalin-type-prostaglandin-D-synthase (beta trace protein; BTP) has increasingly been studied as an alternative to creatinine for the evaluation of renal function as well as a possible biomarker for cardiovascular disease. It is expected that the levels of BTP in patients with cardiovascular disease are elevated, as is the case with patients with renal dysfunction. The objective of this study is to conduct a systematic review of the pertinent literature with respect to BTP as a biomarker of renal dysfunction in diabetic patients. Using the database MEDLINE, a search up to the year 2014 was conducted using the following descriptors: "lipocalin type prostaglandin d synthase" AND "diabetes"; "lipocalin type prostaglandin d synthase" and "diabetic nephropathy"; "beta trace protein" AND "diabetes"; "beta trace protein" AND "diabetic nephropathy". The inclusion criteria were the presence of these terms in the title or abstract and studies conducted in humans. Seventeen articles were selected, of which six were duplicates and six did not investigate any possible relationship between the protein (BTP) and either diabetes or nephropathy. The final result yielded five articles to be analyzed. This review found that BTP is not influenced by race, body mass index, or patient sex. BTP can be considered as a reliable early biomarker of renal dysfunction in diabetics. BTP is associated with metabolic syndrome and is also associated with greater cardiovascular risk. Prospective data establishing a correlation between BTP and mortality would have been of great interest, but such articles were not found in this review.
abstract_id: PUBMED:27108247
Coronary Artery Disease Is a Predictor of Progression to Dialysis in Patients With Chronic Kidney Disease, Type 2 Diabetes Mellitus, and Anemia: An Analysis of the Trial to Reduce Cardiovascular Events With Aranesp Therapy (TREAT). Background: Although clear evidence shows that chronic kidney disease is a predictor of cardiovascular events, death, and accelerated coronary artery disease (CAD) progression, it remains unknown whether CAD is a predictor of progression of chronic kidney disease to end-stage renal disease. We sought to assess whether CAD adds prognostic information to established predictors of progression to dialysis in patients with chronic kidney disease, diabetes, and anemia.
Methods And Results: Using the previously described Trial to Reduce Cardiovascular Events With Aranesp Therapy (TREAT) population, we compared baseline characteristics of patients with and without CAD. Cox proportional hazards models were used to assess the association between CAD and the outcomes of end-stage renal disease and the composite of death or end-stage renal disease. Of the 4038 patients, 1791 had a history of known CAD. These patients were older (mean age 70 versus 65 years, P<0.001) and more likely to have other cardiovascular disease. CAD patients were less likely to have marked proteinuria (29% versus 39%, P<0.001), but there was no significant difference in estimated glomerular filtration rate between the 2 groups. After adjusting for age, sex, race, estimated glomerular filtration rate, proteinuria, treatment group, and 14 other renal risk factors, patients with CAD were significantly more likely to progress to end-stage renal disease (adjusted hazard ratio 1.20 [95% CI 1.01-1.42], P=0.04) and to have the composite of death or end-stage renal disease (adjusted hazard ratio 1.15 [95% CI 1.01-1.30], P=0.03).
Conclusions: In patients with chronic kidney disease, diabetes, and anemia, a history of CAD is an independent predictor of progression to dialysis. In patients with diabetic nephropathy, a history of CAD contributes important prognostic information to traditional risk factors for worsening renal disease.
abstract_id: PUBMED:29426298
Albuminuria, serum creatinine, and estimated glomerular filtration rate as predictors of cardio-renal outcomes in patients with type 2 diabetes mellitus and kidney disease: a systematic literature review. Background: Albuminuria, elevated serum creatinine and low estimated glomerular filtration rate (eGFR) are pivotal indicators of kidney decline. Yet, it is uncertain if these and emerging biomarkers such as uric acid represent independent predictors of kidney disease progression or subsequent outcomes among individuals with type 2 diabetes mellitus (T2DM). This study systematically examined the available literature documenting the role of albuminuria, serum creatinine, eGFR, and uric acid in predicting kidney disease progression and cardio-renal outcomes in persons with T2DM.
Methods: Embase, MEDLINE, and Cochrane Central Trials Register and Database of Systematic Reviews were searched for relevant studies from January 2000 through May 2016. PubMed was searched from 2013 until May 2016 to retrieve studies not yet indexed in the other databases. Observational cohort or non-randomized longitudinal studies relevant to albuminuria, serum creatinine, eGFR, uric acid and their association with kidney disease progression, non-fatal cardiovascular events, and all-cause mortality as outcomes in persons with T2DM, were eligible for inclusion. Two reviewers screened citations to ensure studies met inclusion criteria.
Results: From 2249 citations screened, 81 studies were retained, of which 39 were omitted during the extraction phase (cross-sectional [n = 16]; no outcome/measure of interest [n = 13]; not T2DM specific [n = 7]; review article [n = 1]; editorial [n = 1]; not in English language [n = 1]). Of the remaining 42 longitudinal study publications, biomarker measurements were diverse, with seven different measures for eGFR and five different measures for albuminuria documented. Kidney disease progression differed substantially across 31 publications, with GFR loss (n = 9 [29.0%]) and doubling of serum creatinine (n = 5 [16.1%]) the most frequently reported outcome measures. Numerous publications presented risk estimates for albuminuria (n = 18), serum creatinine/eGFR (n = 13), or both combined (n = 6), with only one study reporting for uric acid. Most often, these biomarkers were associated with a greater risk of experiencing clinical outcomes.
Conclusions: Despite the utility of albuminuria, serum creatinine, and eGFR as predictors of kidney disease progression, further efforts to harmonize biomarker measurements are needed given the disparate methodologies observed in this review. Such efforts would help better establish the clinical significance of these and other biomarkers of renal function and cardio-renal outcomes in persons with T2DM.
abstract_id: PUBMED:20644476
Renin inhibition and microalbuminuria development: meaningful predictor of kidney disease progression. Purpose Of Review: Microalbuminuria is an indicator of increased cardiovascular disease risk. Herein, we review microalbuminuria as a predictor of the onset and progression of renal disease in people with and without diabetes. We evaluate the data on the use of direct renin inhibitors (DRIs) for treatment of hypertension with microalbuminuria.
Recent Findings: It is known that DRIs have an antiproteinuric effect, whether used alone or with an angiotensin receptor blocker (ARB), independent of its hypotensive effects in patients with type 2 diabetes. A current study will determine if adding the DRI aliskiren to an angiotensin-converting enzyme inhibitor (ACEi) or an ARB will reduce cardiovascular and renal risk in patients with type 2 diabetes.
Summary: DRIs are the latest addition to the class of renin-angiotensin-aldosterone system (RAAS) inhibitors available for patients with hypertension and kidney disease. Whether these drugs can improve upon the reduction of cardiovascular and renal risk with an ACEi or an ARB is unknown. Microalbuminuria is a surrogate marker for both cardiovascular and possibly renal endpoints. However, an ongoing issue is that the majority of patients with microalbuminuria will die of cardiovascular events before the onset of end-stage renal disease, limiting the value of using longitudinal measures of microalbuminuria progression as a measure of therapeutic benefit with newer RAAS-blocking drugs such as DRIs.
abstract_id: PUBMED:12401753
Proteinuria as a predictor of total plasma homocysteine levels in type 2 diabetic nephropathy. Objective: Patients with diabetes who manifest proteinuria are at increased risk for cardiovascular events. Some studies suggest that proteinuria exerts its cardiovascular effects at least partly through a positive association with total plasma homocysteine (tHcy). Modestly sized but better designed contrary studies find no such link through a limited range of serum creatinine and proteinuria. We tested the hypothesis that proteinuria independently predicts tHcy levels in a larger cohort of type 2 diabetic patients with nephropathy throughout a much broader range of kidney disease and proteinuria.
Research Design And Methods: Baseline data for the cross-sectional study were obtained from 717 patients enrolled in the multicenter Irbesartan Diabetic Nephropathy Trial. All subjects had type 2 diabetes, hypertension, and proteinuria and were between 29 and 78 years of age. Data included age, sex, BMI, serum creatinine and albumin, LDL and HDL cholesterol, triglyceride, proteinuria and albuminuria, plasma folate, B12, and pyridoxal 5'-phosphate (PLP) (the active form of B6), HbA(1c), and tHcy levels. Unadjusted and multivariable models were used in the analysis.
Results: Crude analyses revealed significant associations between tHcy and age (r = 0.074; P = 0.008), creatinine (r = 0.414; P < 0.001), PLP (r = -0.105; P = 0.021), B12 (r = -0.216; P < 0.001), folate (r = -0.241; P < 0.001), and HbA(1c) (r = -0.119; P = 0.003), with serum albumin approaching significance (r = 0.055; P = 0.072). Only serum creatinine, plasma folate, B12, serum albumin, sex, HbA(1c), and age were independent predictors of tHcy after controlling for all other variables.
Conclusions: By finding no independent correlation between proteinuria (or albuminuria) and tHcy levels, this study improves the external validity of previous negative findings. Therefore, it is unlikely that the observed positive association between proteinuria and cardiovascular disease is directly related to hyperhomocysteinemia.
abstract_id: PUBMED:16997053
The advanced glycation end product N(epsilon)-carboxymethyllysine is not a predictor of cardiovascular events and renal outcomes in patients with type 2 diabetic kidney disease and hypertension. Background: Advanced glycation end products (AGEs) are implicated in the pathogenesis of vascular damage, especially in patients with diabetes and renal insufficiency. The oxidatively formed AGE N(epsilon)-carboxymethyllysine (CML) is thought to be a marker of oxidative stress.
Methods: Four hundred fifty patients with type 2 diabetes and nephropathy from the Irbesartan in Diabetic Nephropathy Trial cohort (mean age, 58 +/- 8.2 years; 137 women, 313 men) with a mean glomerular filtration rate of 48.2 mL/min (0.80 mL/s; Modification of Diet in Renal Disease formula) were followed up for 2.6 years. Serum CML was measured by using an enzyme-linked immunosorbent assay. Relationships between CML levels, traditional risk factors, and cardiovascular and renal events were tested in Cox proportional hazards models.
Results: Mean serum CML level was 599.9 +/- 276.0 ng/mL, and mean hemoglobin A1c level was 7.5% +/- 1.6%. One hundred forty-three first cardiovascular events occurred during follow-up; 74 patients died, 44 of cardiovascular causes. Final multivariate analysis showed age (relative risk [RR], 1.87; confidence interval [CI], 1.13 to 3.11; P = 0.016 for the highest compared with lowest quartile), history of prior cardiovascular events (RR, 1.96; CI, 1.35 to 2.85; P < 0.0005), and 24-hour urinary albumin-creatinine ratio (RR, 1.29; CI, 1.11 to 1.50 per doubling; P < 0.0005) to be independent risk factors for a first cardiovascular event, but not CML level. CML level also did not correlate significantly with renal outcome.
Conclusion: Serum CML level could not be identified as an independent risk factor for cardiovascular or renal outcomes in the examined population. This suggests that traditional risk factors might have a more important role for these end points or that other AGE compounds, as well as tissue AGE levels, might be of greater relevance compared with serum levels, which remains open to further study.
abstract_id: PUBMED:16509553
Improving antihypertensive therapy in patients with diabetic nephropathy. Nearly all patients with diabetic nephropathy have comorbid hypertension, which greatly elevates the risk for cardiovascular events. As patients are surviving longer, their risk of progressing to end-stage renal disease is increasing, particularly in patients with type 2 diabetes. Prevention of cardiovascular and renal events in this population requires diligent efforts to control blood pressure, serum glucose, and serum lipids. Improving antihypertensive therapy in patients with diabetic nephropathy relies on the following unified strategies: reducing blood pressure to <130/80 mm Hg, prescribing an agent that blocks the renin-angiotensin system, and designing an antihypertensive regimen that both reduces albuminuria and provides cardiovascular protection. A majority of patients will require three or more antihypertensive agents to achieve these objectives. Appropriate antihypertensive therapy in patients with diabetic nephropathy delays progression of renal disease and leads to substantial cost savings.
Answer: The serum concentration of pentosidine was not found to be an independent risk factor for cardiovascular outcomes in a selected cohort of patients with type 2 diabetes and nephropathy. This conclusion was drawn from a study that followed 218 patients at high risk of cardiovascular events for a mean of 2.2 years. Although serum pentosidine was measured and tested alongside traditional risk factors, the final multivariate analysis identified only low-density lipoprotein (LDL) and duration of diabetes as independent risk factors for a first cardiovascular event, including death from cardiovascular causes; the serum level of pentosidine was not an independent predictor. The study therefore suggests that traditional risk factors play a more significant role in causing cardiovascular events and that serum levels of AGEs such as pentosidine have low predictive value. Further investigations were deemed necessary to assess whether tissue levels of AGEs might be of greater relevance (PUBMED:17726651).
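The relative risks per quartile quoted above were obtained from Cox proportional hazards models, as stated in the study's methods. The following is a minimal sketch of such an analysis using the lifelines library on a purely hypothetical, simulated data set; the column names, simulated data, and choice of library are illustrative assumptions and do not reproduce the IDNT analysis.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Simulate a hypothetical cohort; none of these numbers come from the study.
rng = np.random.default_rng(0)
n = 200
ldl_q4 = rng.integers(0, 2, n)            # 1 = highest LDL quartile (illustrative)
pentosidine_q4 = rng.integers(0, 2, n)    # 1 = highest pentosidine quartile
diabetes_years = rng.uniform(5, 25, n)

# Event times with a higher hazard for high LDL and longer diabetes duration.
hazard = 0.15 * np.exp(0.8 * ldl_q4 + 0.03 * diabetes_years)
event_time = rng.exponential(1.0 / hazard)
censor_time = np.full(n, 2.2)             # roughly the study's mean follow-up
observed = (event_time <= censor_time).astype(int)
time = np.minimum(event_time, censor_time)

df = pd.DataFrame({"time": time, "cv_event": observed,
                   "ldl_q4": ldl_q4, "pentosidine_q4": pentosidine_q4,
                   "diabetes_years": diabetes_years})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="cv_event")
print(cph.summary[["coef", "exp(coef)", "p"]])  # exp(coef) = hazard ratio

The exp(coef) column corresponds to the hazard ratios that studies of this kind report as relative risks for a covariate or for the highest versus the lowest quartile.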
Instruction: Laryngeal complications after thyroidectomy: is it always the surgeon?
Abstracts:
abstract_id: PUBMED:19221326
Laryngeal complications after thyroidectomy: is it always the surgeon? Hypothesis: Laryngeal dysfunction after thyroidectomy is a common complication. However, few data are available to differentiate whether these complications result from injury to the recurrent nerve or to the vocal folds from intubation.
Setting: University medical center.
Patients: Seven hundred sixty-one patients who underwent surgery to the thyroid gland from 1990 to 2002. Of these patients, 8.4% underwent a revision thyroidectomy.
Intervention: Preoperative and postoperative laryngostroboscopic examination.
Main Outcome Measure: Laryngostroboscopic evaluation of laryngeal complications.
Results: The overall rate of laryngeal complications was 42.0% (320 patients). Complications from an injury to the vocal folds occurred in 31.3% of patients. Weakness or paresis of the recurrent nerve was initially present in 6.6% and was related to the nerves at risk. This rate was lower in primary surgical interventions than in revision thyroidectomies (6.2% vs 11.6%; P = .04). The rate of laryngeal injuries was higher in patients older than 65 years (39.8% vs 30.8%; P = .03).
Conclusions: These data suggest that laryngeal complications after thyroidectomies are primarily caused by injury to the vocal folds from intubation and to a lesser extent by injury to the laryngeal nerve. We recommend documentation of informed consent, especially for patients who use their voice professionally, such as singers, actors, or teachers.
abstract_id: PUBMED:35634691
Surgeon Volume and Laryngectomy Outcomes. Objective: To examine the relationship between surgeon volume and operative morbidity and mortality for laryngectomy.
Data Sources: The Nationwide Inpatient Sample was used to identify 45,156 patients who underwent laryngectomy procedures for laryngeal or hypopharyngeal cancer between 2001 and 2011. Hospital and surgeon laryngectomy volume were modeled as categorical variables.
Methods: Relationships between hospital and surgeon volume and mortality, surgical complications, and acute medical complications were examined using multivariable regression.
Results: Higher-volume surgeons were more likely to operate at large, teaching, nonprofit hospitals and were more likely to treat patients who were white, had private insurance, hypopharyngeal cancer, low comorbidity, admitted electively, and to perform partial laryngectomy, concurrent neck dissection, and flap reconstruction. Surgeons treating more than 5 cases per year were associated with lower odds of medical and surgical complications, with a greater reduction in the odds of complications with increasing surgical volume. Surgeons in the top volume quintile (>9 cases/year) were associated with a decreased odds of in-hospital mortality (OR = 0.09 [0.01-0.74]), postoperative surgical complications (OR = 0.58 [0.45-0.74]), and acute medical complications (OR = 0.49 [0.37-0.64]). Surgeon volume accounted for 95% of the effect of hospital volume on mortality and 16%-47% of the effect of hospital volume on postoperative morbidity.
Conclusion: There is a strong volume-outcome relationship for laryngectomy, with reduced mortality and morbidity associated with higher surgeon and higher hospital volumes. Observed associations between hospital volume and operative morbidity and mortality are mediated by surgeon volume, suggesting that surgeon volume is an important component of the favorable outcomes of high-volume hospital care. Laryngoscope, 133:834-840, 2023.
abstract_id: PUBMED:4010419
Complications of laser surgery for laryngeal papillomatosis. Carbon dioxide laser surgery has become the treatment of choice for laryngeal papillomatosis. The purpose of this study was to determine the type, incidence, and severity of complications that occur with laser microlaryngoscopy for a disease that often requires multiple operations. Forty patients with laryngeal papillomatosis underwent a total of 222 carbon dioxide laser laryngoscopies over the 6 1/2-year period from June 1977 through December 1983. The results showed that 13 patients sustained a total of 23 separate complications. Intraoperative complications consisted of one episode of bilateral pneumothorax and one episode of cervical subcutaneous emphysema, both associated with the use of jet ventilation anesthesia, and one episode of a loosened tooth in a child with carious teeth. The delayed complications consisted of 10 patients with anterior laryngeal webbing, 2 patients with posterior webbing, 6 patients with laryngeal edema or fibrosis, and one episode each of prolonged dysphagia and tracheal foreign body. No airway fires occurred. Only 2 of 28 patients who had 5 or fewer laser laryngoscopies developed complications, but 11 of 12 patients undergoing 6 or more laser operations had complications. In summary, although the incidence of life threatening complications was low, the occurrence of minor complications such as small anterior glottic webs and persistent edema was relatively high, especially in those patients who required multiple laser laryngoscopies.
abstract_id: PUBMED:23177407
Office-based laryngeal procedures. Awake office-based laryngeal procedures offer numerous advantages to the patient and surgeon. These procedures are well-tolerated, safe, and can be used to treat a wide variety of laryngeal pathology. This article discusses office-based laser procedures and laryngeal biopsies. Indications, procedural techniques, postprocedural care, and potential complications are reviewed in detail.
abstract_id: PUBMED:24577936
Treatment complications and survival in advanced laryngeal cancer: a population-based analysis. Objectives/hypothesis: Primary curative treatment of advanced laryngeal cancer may include surgery or chemoradiation, although recommendations vary and both are associated with complications. We evaluated predictors and trends in the use of these modalities and compared rates of complications and overall survival in a population-based cohort of older adults.
Study Design: Retrospective population-based cohort study.
Methods: Using Surveillance Epidemiology and End Results (SEER) cancer registry data linked with Medicare claims, we identified patients over 65 with advanced laryngeal cancer diagnosed 1999 to 2007 who had total laryngectomy (TL) or chemoradiation (CTRT) within 6 months following diagnosis. We identified complications and estimated the impact of treatment on overall survival, using propensity score methods.
Results: The proportion of patients receiving TL declined from 74% in 1999 to 26% in 2007 (P < 0.0001). Almost 20% of the CTRT patients had a tracheostomy following treatment, and 57% had a feeding tube. TL was associated with an 18% lower risk of death, adjusting for patient and disease characteristics. The benefit of TL was greatest in patients with the highest propensity to receive surgery.
Conclusion: TL remains an important treatment option in well selected older patients. However, treatment selection is complex; and factors such as functional status, patient preference, surgeon expertise, and post-treatment support services should play a role in treatment decisions.
Level Of Evidence: 2b. Laryngoscope, 124:2707-2713, 2014.
abstract_id: PUBMED:10091350
Retrospective study of complications of surgery for laryngeal cancer. Surgery, alone or in combination with other therapeutic measures, is one of the main approaches to curing laryngeal cancer. The risk of complications is implicit in any surgical procedure. We describe our experience with general and local complications in surgery for laryngeal cancer and examine their relation to tumor extension and surgical technique. A review was made of a series of 431 patients who underwent surgery for laryngeal cancer over a 10-year period (1982-1991). Twenty-two patients (5.1%) had systemic complications, including upper gastrointestinal hemorrhage (n = 5), massive cervical hemorrhage (n = 5), and renal failure (n = 4). Minor complications were recorded in 77 cases (17.8%), predominantly pharyngocutaneous salivary fistula, which developed in 55 patients (13.8%). The incidence of local complications was significantly greater in patients with extensive local spread (T4). There were no differences among patients with regional spread. The surgical technique and type of pharyngoesophageal reconstruction played no role in the development of complications. Preoperative radiotherapy did not influence the development of salivary fistulas.
abstract_id: PUBMED:21181983
Impact of surgeon and hospital volume on short-term outcomes and cost of laryngeal cancer surgical care. Objective: To evaluate the impact of surgeon and hospital case volume and other related variables on short-term outcomes after surgery for laryngeal cancer.
Methods: The Maryland Health Service Cost Review Commission database was queried for laryngeal cancer surgical case volumes from 1990 to 2009. Multivariate logistic regression analyses and multiple linear regression models were used to evaluate for significant associations between surgeon and hospital case volume, as well as other independent variables and the risk of in-hospital death, postoperative wound complications, length of hospital stay, and hospital-related cost of care.
Results: Overall, 1,981 laryngeal cancer surgeries were performed with complete financial data available for 1,885 laryngeal cancer surgeries, performed by 284 surgeons at 37 hospitals. The only independently significant factor associated with the risk of in-hospital death was an APR-DRG mortality risk score of 4 (odds ratio [OR] = 10.7, P < .001). Postoperative wound fistula or dehiscence was associated with an increased mortality risk score (OR = 3.1, P < .001), total laryngectomy (OR = 12.4, P = .013), and flap reconstruction (OR = 3.8, P = .001). Increased mortality risk score, partial or total laryngectomy, flap reconstruction, and Black race were associated with an increased length of stay and hospital-related costs. After controlling for all other variables, a statistically significant negative correlation was observed between surgery at a high-volume hospital and both length of hospital stay (geometric mean = -1.5 days, P = .003) and hospital-related costs (geometric mean = -$6,061, P = .003).
Conclusions: After controlling for other factors, high-volume hospital care is associated with a shorter length of hospitalization and lower hospital-related cost of care for laryngeal cancer surgery.
abstract_id: PUBMED:33327697
Comparison between nasopharyngeal airway and laryngeal mask airway in blepharoplasty under general anaesthesia. A randomized controlled trial. Introduction: Blepharoplasty can be performed under local infiltration anaesthesia with or without sedation or general anaesthesia depending upon the surgical plan, patient and surgeon preferences, and duration of surgery. Securing the airway with an endotracheal tube or a laryngeal mask airway may cause sore throat. The primary aim of our study was to compare the incidence of this complication between the nasopharyngeal and laryngeal mask airways among patients receiving general anaesthesia during blepharoplasty.
Material And Methods: One hundred forty-eight patients (40-60 years old), ASA II-III, were randomly and evenly assigned to one of two groups. After induction of general anaesthesia, a nasopharyngeal airway or a laryngeal mask airway was inserted according to group allocation. All patients received local infiltration anaesthesia given by the surgeon. Haemodynamic variables, oxygen saturation, end-tidal CO2, failure rate and recovery time were monitored. Postoperative complications (mainly sore throat) as well as patients' and surgeon's satisfaction, were recorded.
Results: Compared to laryngeal mask airways, the use of nasopharyngeal airways was associated with significantly lower incidence of sore throat (4.0% vs. 17.6% with a difference of 13.5%, 95% CI [3.5-24.1%], P < 0.015), shorter recovery times (10.3 min ± 2.84 min vs. 12.6 min ± 2.65 min, P < 0.001), and better patient and surgeon satisfaction (P < 0.001 for both).
Conclusions: Nasopharyngeal airways are an excellent alternative to laryngeal mask airways in anaesthetizing patients undergoing four-lid blepharoplasty surgery, with shorter recovery time, less incidence of postoperative sore throat and better patients' and surgeon's satisfaction.
abstract_id: PUBMED:24611361
Complications of endoscopic CO2 laser surgery for laryngeal cancer and concepts of their management. Endoscopic CO2 laser surgery (ELS) is a widely accepted treatment modality for early laryngeal cancer. Commonly reported advantages of ELS are good oncologic results with low incidence of complications. Although less common if compared with open procedures, complications following ELS can be very serious, even with lethal outcome. They can range from intraoperative endotracheal tube fire accidents to early and late postoperative sequels that require intensive medical treatment, blood transfusion, or revision surgery. We present our institutional experience, discuss the possible complications of ELS for laryngeal cancer, and outline the concepts of their treatment, with comprehensive literature review. Complications are more frequent following the treatment of supraglottic as compared to glottic cancer. If compared with open surgery, ELS for laryngeal cancer is associated with a lower incidence of complications. Every surgeon performing ELS should comply with particular strategies to avoid complications in the first place, and have a clear concept of their management if they occur.
abstract_id: PUBMED:38150023
Correlation between plasma lycopene levels in patients with laryngeal carcinoma and postoperative adverse complications of chemoradiotherapy and nutritional risks. Objective: In this study, we analyzed the correlation between the preoperative plasma lycopene levels, postoperative adverse complications of chemoradiotherapy, and nutritional risk scores in patients with laryngeal carcinoma.
Methods: A total of 114 patients with laryngeal carcinoma and 114 healthy respondents were enrolled in this study. The patients with laryngeal carcinoma were divided into two groups: 62 patients with laryngeal carcinoma, with an NRS2002 score higher than 3 points and whose diet contained lycopene, were enrolled in the observation group, and 52 patients with laryngeal carcinoma during the corresponding time period, whose diet did not contain lycopene, were enrolled in the reference group. The immune indexes (CD4 + , CD8 + , IGA, IGM, IGG), nutritional indexes (albumin, prealbumin, transferrin), and postoperative adverse complications of chemo-radiotherapy in the two groups were recorded.
Results: The lycopene levels were lower in patients with advanced tumor stage (III and IV). The diagnosis threshold of the plasma lycopene level for laryngeal carcinoma was 0.503 μmol/L. The area under the curve for plasma lycopene levels in cancer diagnosis was 0.96, with a clinical specificity of 0.943 and a sensitivity of 0.859. There was a significant negative correlation between the plasma lycopene levels and Nutrition Risk Screening (NRS) 2002 score (R2 = - 0.523, P < 0.001), which was related to the increase in NRS-2002 scores and nutritional hazards in patients with laryngeal carcinoma. The observation group showed a significant increase in nutritional and immune indices, as compared to the reference group, as well as a lower incidence of severe and serious adverse reactions to chemo-radiotherapy. Lycopene supplementation, tumor stage, NRS-2002 scores, nutritional and immune indices were all significant predictors of postoperative severe and serious adverse complications of chemoradiotherapy.
Conclusion: Progression of laryngeal carcinoma and severity of the side effects of the adverse complications of chemo-radiotherapy are related to the levels of lycopene.
Answer: Laryngeal complications after thyroidectomy are not always the result of the surgeon's actions. The data from a study of 761 patients who underwent thyroidectomy between 1990 and 2002 showed that the overall rate of laryngeal complications was 42.0%, with 31.3% of patients experiencing complications from injury to the vocal folds due to intubation. Weakness or paresis of the recurrent nerve was initially present in 6.6% of patients, which was related to the nerves at risk. The rate of laryngeal injuries was higher in patients older than 65 years. These findings suggest that laryngeal complications after thyroidectomies are primarily caused by injury to the vocal folds from intubation and to a lesser extent by injury to the laryngeal nerve (PUBMED:19221326).
Therefore, while surgical technique and the skill of the surgeon can play a role in the occurrence of laryngeal complications, other factors such as the method of intubation and patient characteristics also contribute significantly to these complications. It is recommended that informed consent should be documented, especially for patients who use their voice professionally (PUBMED:19221326). |
Instruction: Incorporation monitoring of employees of a radioiodine therapy ward. Is incorporation monitoring required for routine?
Abstracts:
abstract_id: PUBMED:23348688
Incorporation monitoring of employees of a radioiodine therapy ward. Is incorporation monitoring required for routine? Unlabelled: The aim of the study was to determine the annual incorporation of staff on a radioiodine therapy ward and the resulting annual effective dose (aed). Following the German incorporation guideline (gig), incorporation monitoring is not necessary for potential aed below 0.5 mSv/a. For aed > 0.5 mSv/a, adherence to the 1 mSv dose limit must be verified. For doses > 1 mSv/a, incorporation has to be monitored by the authority. Furthermore, the (131)I incorporation factor from the gig should be verified.
Methods: To determine the actual work related incorporation, the (131)I activity concentration in urine samples (collection over 24 h) of 14 employees of different professions were examined over a period of 27 months.
Results: Measured activity concentrations were related to the individual time of exposure. A constant activity supply for at least three days was assumed. The mean annual effective doses were 2.4 · 10⁻¹ mSv/a (nursing staff; n = 3), 5.6 · 10⁻² mSv/a (cleaning staff; n = 2), 2.8 · 10⁻³ mSv/a (technical staff; n = 2) and 5.2 · 10⁻³ mSv/a (physicians; n = 7). All AED values were below the dose limits of the GIG. The calculated mean incorporation factors ranged from 3.0 · 10⁻⁸ for the nursing staff to 3.6 · 10⁻¹⁰ for the technical staff (cleaning staff: 7 · 10⁻⁹; physicians: 6.5 · 10⁻¹⁰) and were therefore well below the (131)I incorporation factor defined by the GIG.
Conclusions: To estimate the AED caused by incorporation of (131)I, the estimate must be differentiated according to the requirements of the employees' diverse fields of activity. For those who spend most of their time close to the patient, incorporation monitoring by the authority might be required. The (131)I incorporation factor from the guideline (10⁻⁶) can be reduced by a factor of 10. For (99m)Tc and (18)F, an incorporation factor of 10⁻⁷ is accepted.
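The decision on whether routine monitoring is needed rests on simple dose arithmetic: an assumed intake (handled activity multiplied by an incorporation factor) multiplied by a dose coefficient, compared against the 0.5 and 1 mSv/a decision levels described above. The sketch below illustrates this reading; the handled activity, incorporation factor and dose coefficient are illustrative placeholders, not values taken from the abstract, and only their orders of magnitude are comparable.

```python
# Minimal sketch of the dose arithmetic behind incorporation monitoring decisions.
# All numbers are illustrative assumptions, not values from the abstract.

HANDLED_ACTIVITY_BQ_PER_YEAR = 5e11     # assumed 131I handled on the ward per year
INCORPORATION_FACTOR = 3e-8             # assumed fraction actually incorporated
DOSE_COEFFICIENT_SV_PER_BQ = 1.1e-8     # assumed effective-dose coefficient for 131I

def annual_effective_dose_msv(handled_bq, incorporation_factor, dose_coeff):
    """Intake (Bq) = handled activity x incorporation factor; dose = intake x coefficient."""
    intake_bq = handled_bq * incorporation_factor
    return intake_bq * dose_coeff * 1e3  # Sv -> mSv

def monitoring_tier(aed_msv):
    """Classify against the 0.5 and 1 mSv/a decision levels described in the abstract."""
    if aed_msv < 0.5:
        return "no routine incorporation monitoring required"
    if aed_msv <= 1.0:
        return "verify adherence to the 1 mSv/a limit"
    return "incorporation monitoring by the authority required"

aed = annual_effective_dose_msv(HANDLED_ACTIVITY_BQ_PER_YEAR,
                                INCORPORATION_FACTOR,
                                DOSE_COEFFICIENT_SV_PER_BQ)
print(f"estimated AED = {aed:.3f} mSv/a -> {monitoring_tier(aed)}")
```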
abstract_id: PUBMED:12601454
Monitoring of 131I incorporation in nuclear medicine personnel by self-performed measurements Aim: The personnel of nuclear medicine therapy wards must be monitored for incorporation of (131)I according to German guidelines. A surveillance scheme in which the employees measure themselves, analogous to the autonomous contamination checks performed with hand-foot-clothing monitors, is presented as an alternative to monitoring according to the official guidelines.
Method: The employees use a dedicated device to measure themselves every working day. The automatic individual positioning of the device ensures reliable and reproducible results. The thyroid dose is determined from the measured time activity curve. The individual values of depth and mass of the thyroid are taken into account for activity measurement and dose evaluation, respectively.
Results: The employees measure themselves regularly and use the device to check for activity in the thyroid at an early stage after a suspected incorporation. The almost complete surveillance permits dosimetry with low uncertainty. The determined thyroid doses of all monitored persons averaged 0.35 mSv per month.
Conclusion: Autonomous incorporation surveillance allows more reliable and more precise dosimetry than monitoring according to the official guidelines. Despite the large number of measurements, the automated practice saves time and money.
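The thyroid dose in such a self-monitoring scheme is derived from the measured time-activity curve. The sketch below shows the basic step, time-integrating the curve and multiplying by an absorbed-dose-per-decay factor; the measurement times, activities and S-value are assumed placeholders, whereas the actual scheme described above uses the individually measured thyroid depth and mass.

```python
# Minimal sketch of turning a measured thyroid time-activity curve into a dose
# estimate. The measurement times, activities and the S-value (absorbed dose per
# decay) are placeholder assumptions, not values from the abstract.
import numpy as np

t_days = np.array([0, 1, 2, 5, 8, 12])           # days after suspected intake (assumed)
activity_bq = np.array([60, 52, 45, 30, 20, 12]) # measured thyroid activity (assumed)

# Time-integrated (cumulated) activity in Bq*s via trapezoidal integration.
cumulated_bq_s = np.trapz(activity_bq, t_days * 86400.0)

S_VALUE_GY_PER_BQ_S = 1.6e-12  # assumed self-dose S-value for 131I in an adult thyroid

thyroid_dose_mgy = cumulated_bq_s * S_VALUE_GY_PER_BQ_S * 1e3
print(f"cumulated activity = {cumulated_bq_s:.3e} Bq*s, "
      f"thyroid dose ~ {thyroid_dose_mgy:.3f} mGy")
```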
abstract_id: PUBMED:36131457
TIP47: a cellular factor required for envelope glycoproteins incorporation into HIV particles The production of Human Immunodeficiency Virus-1 (HIV-1), the causative agent of AIDS, requires many interactions between viral and host cell proteins at each step of the viral cycle. The late steps of the replicative cycle of HIV-1 permit the formation of new infectious virions. These steps consist of assembly and budding of the particle, as well as the envelope glycoproteins incorporation step. Several research teams have tried to elucidate the molecular mechanism controlling the envelope glycoproteins (Env) incorporation. Recently, the first cellular cofactor required for this step, Tail-Interacting Protein of 47 kDa (TIP47), has been identified. TIP47 is required for the generation of an infectious viral particle and for the incorporation of the envelope glycoproteins into virions. In this review, we emphasize the key roles of the two major viral structural proteins, Gag and Env, in the last steps of the replicative cycle of HIV-1. We describe the biology of TIP47 and its role as a bridge between Gag and Env, during Env incorporation into new viral particles. Studies discussed in the review illustrate the key roles of proteins implicated in the intracellular trafficking pathways during the formation of the virus.
abstract_id: PUBMED:35388445
Incorporation Monitoring of Staff using I-131 and Lu-177 in a Nuclear Medicine Ward Objectives: In addition to the well-established therapy with iodine-131, treatments with lutetium-177 are increasingly being performed on an inpatient basis in Germany. All of these treatments have be taken into account when assessing the potential internal dose and for incorporation monitoring of personnel. This article describes the experience with and the results of incorporation monitoring of staff of a nuclear medicine ward of a university hospital in Germany.
Methods: Personnel working in a nuclear medicine ward were regularly measured using a whole-body counter. In total, 234 measurements were performed over a period of 12 months. Incorporation factors were determined considering the activities handled or applied to patients in the respective time period.
Results: In approx. 74% of measurements, no incorporation was found. In the remaining measurements, activity was detected. Assuming incorporation, the maximum effective dose would be less than 0.15 mSv per measurement. The incorporation factors determined in this work were on the order of magnitude of 10⁻⁷ for all groups except for personnel performing radiochemical quality control. For this group, only an upper limit of the incorporation factor of 10⁻⁵ can be specified.
Conclusion: The risk of incorporating radioactivity can be considered low for personnel working in a nuclear medicine ward. An incorporation factor of 10⁻⁷ is appropriate for medical, nursing, and cleaning staff and for personnel performing radiochemical syntheses.
abstract_id: PUBMED:28781778
Glucose metabolism before and after radioiodine therapy of a patient with Graves' disease: Assessment by continuous glucose monitoring. Hyperthyroidism causes impaired glucose tolerance, insulin resistance (IR) and insulin secretion. However, the glucose variability affected by thyroid dysfunction remains unclear. Glucose variability was assessed by continuous glucose monitoring (CGM) in a non-diabetic patient with Graves' disease (GD), to the best of our knowledge, for the first time. A 28-year-old man with GD, who had been taking methimazole for 4 years, was treated with radioiodine on August 17th 2016. Although the patient exhibited normal glycated hemoglobin (HbA1c; 5.3%) and blood glucose values during the oral glucose tolerance test (OGTT; fasting and 120 min blood glucose were 5.38 and 6.39 mmol/l, respectively) before radioiodine therapy, CGM exhibited high 24 h mean glucose and nocturnal hyperglycemia. An increased fasting insulin level, suppressed levels of blood glucagon and high homeostatic model assessment of IR were also observed. The disordered glucose metabolism improved as soon as the patient's thyroid function turned to hypothyroidism 4 months after radioiodine therapy. The glucose intolerance in patients with hyperthyroidism, missed by the OGTT and HbA1c tests, may be more common than anticipated.
abstract_id: PUBMED:9604233
Measurement of incorporation in family members of radioiodine therapy patients after therapy of benign thyroid diseases Aim: Patients exhale I-131 after radioiodine therapy. In this study we quantify the amount of radioactivity and the resulting thyroid doses found in people living in close contact with patients treated with I-131 after their release from a therapy ward.
Methods: For 31 relatives of 25 patients treated with I-131 the incorporation was monitored using the thyroid probe of a whole body counter. These values are used for a determination of thyroid doses.
Results: 11 of the 31 monitored persons had a thyroid activity of less than the minimal detectable activity of 13 Bq. The mean value of the remaining 20 people was 104 Bq in the thyroid resulting in a mean thyroid dose of 0.2 mSv (Maximum: 2 mSv).
Conclusion: The intake of I-131 by persons in close contact with patients after discharge from a therapy ward is low. In no case did the effective dose exceed 1 mSv.
abstract_id: PUBMED:31146792
Automated continuous noninvasive ward monitoring: future directions and challenges. Automated continuous noninvasive ward monitoring may enable subtle changes in vital signs to be recognized. There is already some evidence that automated ward monitoring can improve patient outcome. Before automated continuous noninvasive ward monitoring can be implemented in clinical routine, several challenges and problems need to be considered and resolved; these include the meticulous validation of the monitoring systems with regard to their measurement performance, minimization of artifacts and false alarms, integration and combined analysis of massive amounts of data including various vital signs, and technical problems regarding the connectivity of the systems.
abstract_id: PUBMED:31582102
Postoperative ward monitoring - Why and what now? The postoperative ward is considered an ideal nursing environment for stable patients transitioning out of the hospital. However, approximately half of all in-hospital cardiorespiratory arrests occur here and are associated with poor outcomes. Current monitoring practices on the hospital ward mandate intermittent vital sign checks. Subtle changes in vital signs often occur at least 8-12 h before an acute event, and continuous monitoring of vital signs would allow for effective therapeutic interventions and potentially avoid an imminent cardiorespiratory arrest event. It seems tempting to apply continuous monitoring to every patient on the ward, but inherent challenges such as artifacts and alarm fatigue need to be considered. This review looks to the future where a continuous, smarter, and portable platform for monitoring of vital signs on the hospital ward will be accompanied with a central monitoring platform and machine learning-based pattern detection solutions to improve safety for hospitalized patients.
abstract_id: PUBMED:29045662
Resistance Monitoring of Four Insecticides and a Description of an Artificial Diet Incorporation Method for Chilo suppressalis (Lepidoptera: Crambidae). Chilo suppressalis (Walker; Lepidoptera: Crambidae) is one of the most damaging rice pests in China. Insecticides play a major role in its management. We describe how we monitored the resistance of C. suppressalis to four insecticides in seven field populations from Jiangxi, Hubei, and Hunan Provinces, China, in 2014-2016. The topical application method for resistance monitoring was suitable for triazophos, monosultap, and abamectin. The conventional rice seedling dipping method proved ineffective for testing chlorantraniliprole so the new artificial diet incorporation method was substituted. This new method provided more consistent results than the other methods, once baseline toxicity data had been established. All populations had moderate to high resistance to triazophos from 2014 to 2016. Monosultap resistance in two populations increased from low in 2014 to moderate in 2016 and the other five populations showed moderate to high-level resistance throughout. Abamectin resistance in three populations increased from sensitive or low in 2014 to moderate in 2015-2016, and the other populations had moderate to high levels of resistance. Resistance to chlorantraniliprole increased from sensitive or low in 2014 to moderate to high in 2016. These results suggested that resistance management strategies should be developed according to the needs of a specific location. It was suggested that, in these localities, organophosphate insecticides should be prohibited, the application of nereistoxin, macrolide antibiotic, and diamide insecticides should be limited, and other insecticides, including spinetoram and methoxyfenozide, that exhibited no resistance should be used rationally and in rotation to delay resistance development.
abstract_id: PUBMED:34270861
Stress at work: Self-monitoring of stressors and resources to support employees. High levels of stress at work may have serious consequences for employee functioning and mental health. By providing employees with an easily accessible instrument to regularly evaluate stressors and resources, employee self-monitoring and guidance to support can be accommodated. We evaluated an online self-monitoring tool Brief Assessment of Stress and Energy (BASE). Through their organization, 139 railway emergency services employees were invited to complete BASE and six wellbeing measures. We assessed BASE in two ways: using multiple regression analysis (N = 102, 73.4%), as well as by telephone follow-up interviews during which experts and respondents evaluated the BASE outcome (N = 67, 65.7%). Explained variances of BASE on the six wellbeing measures ranged between 26.6% and 49.9%. Telephone interviews confirmed the BASE outcome. The results indicate that BASE is associated with several measures of wellbeing and accurately refers respondents to counseling. This study shows that BASE is a promising instrument to encourage employees to self-monitor stressors and resources and identify those who need counseling.
Answer: Incorporation monitoring of employees in a radioiodine therapy ward is a practice aimed at assessing the internal exposure of staff to radioactive iodine (I-131) used in treatments. The necessity of routine incorporation monitoring is determined by the potential annual effective dose (AED) received by the employees. According to the German incorporation guideline (GIG), incorporation monitoring is not required if the potential AED is below 0.5 mSv/a. If the AED is greater than 0.5 mSv/a, adherence to the 1 mSv dose limit must be verified, and for doses greater than 1 mSv/a, incorporation must be monitored by the authority (PUBMED:23348688).
A study that monitored the (131)I activity concentration in urine samples of 14 employees over 27 months found that the mean annual effective doses were well below the dose limits of the GIG for all staff categories, including nursing, cleaning, technical staff, and physicians. The study concluded that the (131)I incorporation factor from the guideline could be reduced by a factor of 10, suggesting that routine incorporation monitoring by the authority might not be required for all employees, especially those who spend most of their time nearby the patient (PUBMED:23348688).
Another study presented an alternative to the official guidelines for monitoring (131)I incorporation in nuclear medicine personnel, where employees measured themselves every working day using a dedicated device. This method allowed for more reliable and precise dosimetry and saved time and money due to automation (PUBMED:12601454).
In summary, while incorporation monitoring is an important safety measure for employees working in a radioiodine therapy ward, the necessity for routine monitoring depends on the actual levels of exposure and the resulting AED. If the AED is below the threshold set by relevant guidelines, routine monitoring may not be required, but for higher doses, monitoring becomes necessary to ensure the safety and health of the employees. |
Instruction: Does removal of aids/devices and help make a difference in the Childhood Health Assessment Questionnaire disability index?
Abstracts:
abstract_id: PUBMED:19221399
Does removal of aids/devices and help make a difference in the Childhood Health Assessment Questionnaire disability index? Objective: To assess whether the removal of aids/devices and/or help from another person in the Childhood Health Assessment Questionnaire (C-HAQ) leads to a significant change in the disability index (DI) score and responsiveness in juvenile idiopathic arthritis (JIA).
Methods: Changes in the C-HAQ DI score in a cross-sectional sample of 2663 children with JIA and in 530 active patients with JIA in a trial of methotrexate (MTX) were compared.
Results: Patients in the MTX trial had higher disease activity and disability than the cross-sectional sample. The frequency of aids/devices (range 1.2-10.2%) was similar between the two samples, while help (range 5.3-38.1%) was more frequently used in the MTX group. Correlation between disease severity variables and the two different C-HAQ DI scoring methods did not change substantially. There was a decrease in the C-HAQ DI score for both the cross-sectional (mean score from 0.64 with the original method to 0.54 without aids/devices and help, p<0.0001) and the MTX sample (mean score from 1.23 to 1.07, p<0.0001). A linear regression analysis of the original C-HAQ DI score versus the score without aids/devices and help demonstrated the substantial overlap of the different scoring methods. Responsiveness in the responders to MTX treatment did not change with the different C-HAQ DI scoring methods (range 0.86-0.82).
Conclusion: The removal of aids/devices and help from the C-HAQ does not alter the interpretation of disability at a group level. The simplified C-HAQ is a more feasible and valid alternative for the evaluation of disability in patients with JIA.
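For readers unfamiliar with how the two scoring methods compared above differ, the following sketch follows the commonly described (C-)HAQ convention: each of eight functional domains is scored 0-3 (worst item in the domain), a domain in which aids/devices or help from another person are used is raised to at least 2, and the disability index is the mean of the domain scores; the simplified score omits that adjustment. The domain scores in the example are hypothetical.

```python
# Minimal sketch of the disability index (DI) computed with and without the
# aids/devices/help adjustment, following the commonly described (C-)HAQ convention.
from statistics import mean

def haq_di(domain_scores, domains_with_aids_or_help=(), use_adjustment=True):
    """domain_scores: eight 0-3 scores; returns the DI on the 0-3 scale."""
    adjusted = []
    for i, score in enumerate(domain_scores):
        if use_adjustment and i in domains_with_aids_or_help:
            score = max(score, 2)  # domain raised to at least 2 if aids/help are used
        adjusted.append(score)
    return mean(adjusted)

# Hypothetical patient: mild difficulty in most domains, uses a dressing aid
# (domain 0) and help from another person for reach (domain 5).
scores = [1, 0, 1, 1, 0, 1, 2, 1]
print(haq_di(scores, {0, 5}, use_adjustment=True))    # original scoring method
print(haq_di(scores, {0, 5}, use_adjustment=False))   # aids/devices and help removed
```

As in the abstract, the score without the adjustment is lower (here 0.875 versus 1.125), while the ranking of patients is largely preserved.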
abstract_id: PUBMED:18085737
Does incorporation of aids and devices make a difference in the score of the health assessment questionnaire-disability index? Analysis from a scleroderma clinical trial. Objective: The Health Assessment Questionnaire-Disability Index (HAQ-DI) is a commonly used musculoskeletal-targeted measure in systemic sclerosis (SSc). We assessed if HAQ-DI scores are different when calculated with and without aids/devices, and if apparent responsiveness changes when scored in these 2 ways.
Methods: We used data from a placebo-controlled clinical trial in diffuse SSc. Baseline HAQ-DI total score was calculated with and without aids/devices and compared using Student's t-test. We also classified the HAQ-DI scores into no-to-mild disability (0.00-1.00), moderate disability (1.01-2.00), and severe disability (2.01-3.00). Responsiveness to change was evaluated using the effect size (ES).
Results: The mean (SD) baseline HAQ-DI score was 1.33 (0.68) with aids/devices compared to a HAQ-DI score of 1.16 (0.70) without aids/devices (p = 0.03). When the baseline HAQ-DI score was categorized into no-to-mild, moderate, and severe disability, the proportions of patients with no-to-mild disability (29% with aids/devices vs 44% without aids/devices) and moderate disability (59% with aids/devices vs 45% without aids/devices) differed significantly (p < 0.001). The ES was similar between the 2 groups (ES = 0.01 and 0.02 with and without aids/devices).
Conclusion: This analysis suggests a shift from no-to-mild disability to moderate disability when aids/devices are incorporated in total HAQ-DI score. Future clinical trials in SSc should explicitly state whether HAQ-DI score was calculated using aids/devices.
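The severity bands and effect size used in the abstract above can be reproduced in a few lines; the follow-up scores in the example are hypothetical, and the effect size is computed as mean change divided by the baseline standard deviation, one common definition.

```python
# Minimal sketch of the disability bands used above and a simple effect-size (ES)
# calculation. The example values are hypothetical, not data from the trial.
import numpy as np

def disability_band(di):
    if di <= 1.00:
        return "no-to-mild"
    if di <= 2.00:
        return "moderate"
    return "severe"

print(disability_band(1.33), disability_band(1.16))  # with vs without aids/devices

baseline = np.array([1.4, 1.1, 0.9, 1.8, 1.3])   # hypothetical baseline HAQ-DI scores
follow_up = np.array([1.3, 1.1, 0.9, 1.7, 1.3])  # hypothetical follow-up scores
effect_size = (baseline - follow_up).mean() / baseline.std(ddof=1)
print(f"ES = {effect_size:.2f}")
```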
abstract_id: PUBMED:33993106
Measuring Physical Function in Psoriatic Arthritis: Comparing the Multidimensional Health Assessment Questionnaire to the Health Assessment Questionnaire-Disability Index. Objective: To compare physical function scales of the Multidimensional Health Assessment Questionnaire (MDHAQ) with that of the Health Assessment Questionnaire-Disability Index (HAQ-DI) in patients with psoriatic arthritis (PsA), and to examine whether either questionnaire is less prone to "floor effects."
Methods: Data were collected prospectively from 2018 to 2019 across 3 UK hospitals. All patients completed physical function scales within the MDHAQ and HAQ-DI in a single clinic visit. Agreement was assessed using medians and the Bland-Altman method. Intraclass correlation coefficients (ICCs) were used to assess test-retest reliability.
Results: Two hundred ten patients completed the clinic visit; 1 withdrew consent. Thus, 209 were analyzed. Sixty percent were male, with mean age of 51.7 years and median disease duration of 7 years. In clinic, median MDHAQ and HAQ-DI including/excluding aids scores were 0.30, 0.50, and 0.50 respectively. Although the median score for HAQ-DI was higher than for MDHAQ, the difference between the 2 scores was mostly within 1.96 SDs from the mean, suggesting good agreement. The ICCs demonstrated excellent test-retest reliability for both the MDHAQ and HAQ-DI. Similar numbers of patients scored 0 on the MDHAQ and HAQ-DI including/excluding aids (48, 47, and 49, respectively). Using a score of ≤ 0.5 as a cutoff for minor functional impairment, 23 patients had a MDHAQ ≤ 0.5 when their HAQ-DI including aids was > 0.5. Conversely, 4 patients had a MDHAQ > 0.5 when the HAQ-DI including aids was ≤ 0.5.
Conclusion: Both the MDHAQ and HAQ-DI appear to be similar in detecting floor effects in patients with PsA.
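Agreement between two paired questionnaires, as assessed above, is commonly summarized with a Bland-Altman bias and limits of agreement. A minimal sketch, assuming NumPy is available and using hypothetical paired scores:

```python
# Minimal sketch of a Bland-Altman agreement check between two paired scores,
# as used above to compare MDHAQ and HAQ-DI. The paired values are hypothetical.
import numpy as np

mdhaq = np.array([0.3, 0.0, 0.5, 1.0, 0.2, 0.8, 0.0, 1.3])
haqdi = np.array([0.5, 0.0, 0.6, 1.1, 0.4, 0.9, 0.1, 1.5])

diff = haqdi - mdhaq
bias = diff.mean()
loa_low = bias - 1.96 * diff.std(ddof=1)
loa_high = bias + 1.96 * diff.std(ddof=1)

print(f"bias = {bias:.2f}, 95% limits of agreement = [{loa_low:.2f}, {loa_high:.2f}]")
# Agreement is considered good when most differences fall within the limits.
```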
abstract_id: PUBMED:37744048
The long-term course of the Health Assessment Questionnaire in patients with systemic sclerosis. Objective: The Health Assessment Questionnaire-Disability Index is an important outcome measure reflecting functional disability, but knowledge on its course over time in patients with systemic sclerosis is scarce. Therefore, we investigated the long-term course of the Health Assessment Questionnaire-Disability Index and its association with baseline characteristics in systemic sclerosis patients.
Methods: Systemic sclerosis patients, fulfilling the European League Against Rheumatism and the American College of Rheumatology 2013 criteria, were included from the Leiden Combined Care in Systemic Sclerosis cohort with annual assessments including the Scleroderma Health Assessment Questionnaire-Disability Index (range = 0-3). The course of the Health Assessment Questionnaire-Disability Index was evaluated over the total follow-up (baseline to last available Health Assessment Questionnaire-Disability Index) and between yearly visits. Based on a minimal clinical important difference of 0.22, courses were categorized into worsening, stable or improvement. The course of the Health Assessment Questionnaire-Disability Index over time was evaluated with linear mixed models. Baseline characteristics were compared between patients with a worsening or improvement of the Health Assessment Questionnaire-Disability Index over the total follow-up period with logistic regression analyses.
Results: A total of 517 systemic sclerosis patients were included, with a median follow-up of 7 years (interquartile range = 4-9; 2649 visits) and a baseline Health Assessment Questionnaire-Disability Index of 0.625 (interquartile range = 0.125-1.25). On group level, the Health Assessment Questionnaire-Disability Index is stable with an annual increase of 0.019 (95% confidence interval = 0.011 to 0.027). Looking at subgroups, patients >65 years or who died/were physically unable to come during follow-up had a worse mean Health Assessment Questionnaire-Disability Index. In individual courses from baseline to the last follow-up, the proportions of patients with a clinically meaningful worsening, stable or improved Health Assessment Questionnaire-Disability Index were 35%, 42% and 23%, respectively. Patients with immunosuppressants (odds ratio = 0.5, 95% confidence interval = 0.3 to 0.9) or gastrointestinal involvement (odds ratio = 0.6, 95% confidence interval = 0.4 to 0.9) at baseline showed a reduced chance of worsening of the Health Assessment Questionnaire-Disability Index over the total follow-up period.
Conclusion: Over time, the average course of the Health Assessment Questionnaire-Disability Index was stable in systemic sclerosis patients. However, individual courses vary, with worsening occurring in one-third. Worsening occurred less often in individuals using immunosuppressants or with gastrointestinal involvement at baseline.
abstract_id: PUBMED:31359798
Factors affecting the use of mobility aids devices among young adults with mobility disability in a selected Nigerian population. Purpose: This study aimed to evaluate the interaction of people living with mobility disability (PLWMDs), mobility aid devices (MADs), and their environment.
Materials And Method: This was a cross-sectional institution-based survey with 51 participants (33 males and 18 females) aged 18 to 50 years. Participants were recruited using a purposive sampling method with snowballing. Data were collected using a modified self-administered questionnaire on the socio-cognitive and psychological impacts of the device and analysed using descriptive statistics of frequency count, mean, percentages and standard deviation, and Pearson's chi-square. The alpha level was set at 0.05.
Results: The results showed that diagnoses around the lower limb leading to disability were implicated in the use of MADs. Psychological factors, and the combined effect of psychological, socio-cultural and environmental factors, were significantly associated with the use of MADs (p = .011 and p = .011, respectively).
Conclusion: The findings of the study suggest a negative effect associated with the lack of proper use of MADs, as well as the importance of MADs for promoting participation, inclusion, and productivity of PLWMDs. However, the effectiveness of specific types of MADs should be assessed in future studies. Implications for rehabilitation: Mobility aids devices are designed to help people achieve independence, reduce pain, and increase confidence and self-esteem. Individuals with mobility disability are often encouraged to make use of mobility aids devices. The type of mobility aid device required for each individual will depend on the mobility disability or injury.
abstract_id: PUBMED:15476213
Development and validation of the health assessment questionnaire II: a revised version of the health assessment questionnaire. Objective: The Health Assessment Questionnaire (HAQ) has become the most common tool for measuring functional status in rheumatology. However, the HAQ is long (34 questions, including 20 concerning activities of daily living and 14 relating to the use of aids and devices) and somewhat burdensome to score, has some floor effects, and has psychometric problems relating to linearity and confusing items. We undertook this study to develop and validate a revised version of the HAQ (the HAQ-II).
Methods: Using Rasch analysis and a 31-question item bank, including 20 HAQ items, the 10-item HAQ-II was developed. Five original items from the HAQ were retained. We studied the HAQ-II in 14,038 patients with rheumatic disease over a 2-year period to determine its validity and reliability.
Results: The HAQ-II was reliable (reliability of 0.88, compared with 0.83 for the HAQ), measured disability over a longer scale than the HAQ, and had no nonfitting items and no gaps. Compared with the HAQ, modified HAQ, and Medical Outcomes Study Short Form 36 physical function scale, the HAQ-II was as well correlated or better correlated with clinical and outcome variables. The HAQ-II performed as well as the HAQ in a clinical trial and in prediction of mortality and work disability. The mean difference between the HAQ and HAQ-II scores was 0.02 units.
Conclusion: The HAQ-II is a reliable and valid 10-item questionnaire that performs at least as well as the HAQ and is simpler to administer and score. Conversion from HAQ to HAQ-II and from HAQ-II to HAQ for research purposes is simple and reliable. The HAQ-II can be used in all places where the HAQ is now used, and it may prove to be easier to use in the clinic.
abstract_id: PUBMED:32159654
Validation of the Brazilian version of the World Health Organization Disability Assessment Schedule 2.0 for individuals with HIV/AIDS. The WHODAS 2.0 (World Health Organization Disability Assessment Schedule) is an instrument developed by the WHO (World Health Organization) for functioning and disability assessment based on the biopsychosocial framework, fully supported by the theoretical-conceptual framework of the ICF (International Classification of Functioning, Disability and Health). To validate the Brazilian version of the WHODAS 2.0 for individuals with HIV/AIDS. 100 individuals with diagnosis of HIV/AIDS participated in the study. Two assessment instruments were used: the 36-item version of WHODAS 2.0 and the WHOQOL-HIV-BREF (World Health Organization Quality of Life assessment in persons infected with HIV, shorter version). The psychometric properties tested were internal consistency and criterion validity. Internal consistency was adequate for all domains, with the exception of Life Activities (α = 0.69) and Self-care (α = 0.32). Criterion validity was adequate, with moderate correlations between the WHODAS 2.0 and the WHOQOL-HIV-BREF domains. The results indicated the WHODAS 2.0 instrument as a valid tool for assessing functioning of individuals with HIV/AIDS. The use of data from the Self-care domain should be carefully considered.
abstract_id: PUBMED:16468046
Examining the psychometric characteristics of the Dutch childhood health assessment questionnaire: room for improvement? The aim of this study was to examine the psychometric characteristics of the childhood health assessment questionnaire-disability index (CHAQ-DI). Seventy-six patients with juvenile idiopathic arthritis (JIA), age range 4.8-15.8 years, completed a CHAQ questionnaire one or more times. In total, 321 CHAQ questionnaires were available for analysis. Factor analysis and correlation were used to analyse the data. The analysis indicated that 12 items could be removed from the original 30 items of the CHAQ-DI. Also, the addition of "aids and assistance" to the overall scoring method of the CHAQ-DI did not contribute to the overall measuring concept of the CHAQ-DI. The psychometric characteristics of the CHAQ-DI could be improved by removing 12 items from the original 30 items. Moreover, a simple scoring method, without the addition of aids and assistance to the total CHAQ-DI, improves the sensitivity to change of the CHAQ-DI.
abstract_id: PUBMED:15818719
The health assessment questionnaire disability index and scleroderma health assessment questionnaire in scleroderma trials: an evaluation of their measurement properties. Objective: To evaluate the measurement properties of the Health Assessment Questionnaire (HAQ) disability index (DI) for group comparisons in scleroderma trials, and to determine if the Scleroderma Health Assessment Questionnaire (SHAQ) visual analog scales confer any measurement advantage over the HAQ DI.
Methods: A computer search for articles describing the use of the HAQ DI and SHAQ in scleroderma was performed. Evidence supporting the sensibility, reliability, validity, and responsiveness of these measures was evaluated.
Results: The SHAQ has incremental face and content validity over the HAQ DI because it addresses scleroderma-specific manifestations that also contribute to disability. The HAQ DI has good concurrent validity, construct validity, and predictive validity. Whether SHAQ confers incremental construct, concurrent, or predictive validity over the HAQ DI is uncertain. The HAQ DI appears more reliable than the SHAQ; however, reliability studies provide insufficient data to ascertain if minimum standards have been achieved. Responsiveness of the HAQ DI subscales has been demonstrated.
Conclusion: The SHAQ has incremental face and content validity over the HAQ DI. The HAQ DI has greater reliability and demonstrated construct, concurrent, and predictive validity. Further investigation into the measurement properties of the HAQ DI and SHAQ visual analog scales, and their relation to the required standards of measurement is needed.
abstract_id: PUBMED:31441680
Updated version of Japanese Childhood Health Assessment Questionnaire (CHAQ). Objectives: The Childhood Health Assessment Questionnaire (CHAQ) is one of the most widely used self- report questionnaires to measure functional status in Juvenile idiopathic arthritis (JIA). The Japanese version of the CHAQ (JCHAQ) has been revised to meet requirements of clinical international trials which need the same number of questions in each functional area of the CHAQ.Methods: The original JCHAQ consisted of 36 items, measuring eight functional areas. This was changed to 30 items of questionnaire so that each functional area has same number of questions as the original US English version. The revised version was professionally translated from English to Japanese, reviewed, and validated with Japanese JIA patients.Results: A total of 42 JIA patients were enrolled in the validation: seven systemic, 30 polyarticular/oligoarticular and five enthesis related. Most patients were well controlled and the median disability index (DI) scores was 0.0 [0-0.03]; however, significant correlation was seen with visual analog scale (VAS) of pain, VAS overall well-being, physician VAS, DAS (Disease Activity Score) 28-ESR, and JADAS (Juvenile Arthritis Disease Activity Score)-27. In comparison of two groups of disease activity, remission or inactive/low disease activity vs. moderate/high disease activity, both DAS28-ESR and JADAS-27 showed significant correlation with DI.Conclusion: The updated JCHAQ was a reliable and valid tool for the functional assessment of children with JIA. It is more suitable for international and transitional comparison.
Answer: The removal of aids/devices and help from the Childhood Health Assessment Questionnaire (C-HAQ) does not significantly alter the interpretation of disability at a group level. A study assessing this change in the C-HAQ disability index (DI) score and responsiveness in juvenile idiopathic arthritis (JIA) found that while there was a decrease in the C-HAQ DI score when aids/devices and help were removed, the correlation between disease severity variables and the two different C-HAQ DI scoring methods did not change substantially. The study concluded that the simplified C-HAQ without aids/devices and help is a more feasible and valid alternative for the evaluation of disability in patients with JIA (PUBMED:19221399). |
Instruction: Radiofrequency and microwave tumor ablation in patients with implanted cardiac devices: is it safe?
Abstracts:
abstract_id: PUBMED:30677240
Efficacy of radiofrequency ablation and microwave ablation in the treatment of thoracic cancer: A systematic review and meta-analysis. Background: Radiofrequency ablation and microwave ablation are frequently prescribed for thoracic cancer. However, few writers have been able to draw on any systematic research into the differences between the two ablation methods.
Methods: A literature search was carried out using the Embase, PUBMED, Web of Science, Cochrane Library, and CNKI databases, with additional searches carried out manually using terms associated with thoracic cancer and thermal ablation. Google Scholar was then used for a complementary search. Data were extracted from studies of patients who underwent radiofrequency ablation or microwave ablation, and the investigator carried out the efficacy evaluation and follow-up. The data obtained from the literature were summarized and analyzed using Cochrane RevMan software Version 5.3 and SPSS 22.0.
Results: Seven comparative studies, but no randomized studies, were identified for data extraction; 246 patients received radiofrequency ablation therapy and 319 controls received microwave ablation. No significant differences in the six-month, one-year, two-year, and three-year survival rates or in adverse reactions were found between the two treatments. In terms of patients' long-term survival, the two treatments can achieve similar survival times.
Conclusion: In the treatment of thoracic cancer, microwave ablation can achieve the same efficacy as radiofrequency ablation.
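For orientation, pooling outcomes such as one-year survival across comparative studies of this kind is typically done with an inverse-variance model on the log risk-ratio scale, which is the kind of calculation RevMan performs. The sketch below uses hypothetical per-study event counts, not data from the review, and shows only the fixed-effect case.

```python
# Minimal sketch of inverse-variance fixed-effect pooling of risk ratios on the
# log scale. The per-study event counts below are hypothetical.
import numpy as np

# (events_RFA, total_RFA, events_MWA, total_MWA) per hypothetical study
studies = [(30, 40, 45, 60), (25, 35, 40, 55), (20, 30, 35, 50)]

log_rr, weights = [], []
for a, n1, c, n2 in studies:
    rr = (a / n1) / (c / n2)
    var = 1/a - 1/n1 + 1/c - 1/n2          # variance of log(RR)
    log_rr.append(np.log(rr))
    weights.append(1 / var)

log_rr, weights = np.array(log_rr), np.array(weights)
pooled = np.exp((weights * log_rr).sum() / weights.sum())
se = np.sqrt(1 / weights.sum())
ci = np.exp(np.log(pooled) + np.array([-1.96, 1.96]) * se)
print(f"pooled RR = {pooled:.2f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
```

A confidence interval spanning 1 would correspond to the "no significant difference" finding reported above.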
abstract_id: PUBMED:33803926
Comparing the Safety and Efficacy of Microwave Ablation Using ThermosphereTM Technology versus Radiofrequency Ablation for Hepatocellular Carcinoma: A Propensity Score-Matched Analysis. There is limited information regarding the oncological benefits of microwave ablation using ThermosphereTM technology for hepatocellular carcinoma. This study compared the overall survival and recurrence-free survival outcomes among patients with hepatocellular carcinoma after microwave ablation using ThermosphereTM technology and after radiofrequency ablation. Between December 2017 and August 2020, 410 patients with hepatocellular carcinoma (a single lesion that was ≤5 cm or ≤3 lesions that were ≤3 cm) underwent ablation at our institution. Propensity score matching identified 150 matched pairs of patients with well-balanced characteristics. The microwave ablation and radiofrequency ablation groups had similar overall survival rates at 1 year (99.3% vs. 98.2%) and at 2 years (88.4% vs. 87.5%) (p = 0.728), as well as similar recurrence-free survival rates at 1 year (81.1% vs. 76.2%) and at 2 years (60.5% vs. 62.1%) (p = 0.492). However, the microwave ablation group had a significantly lower mean number of total insertions (1.22 ± 0.49 vs. 1.59 ± 0.94; p < 0.0001). This retrospective study revealed no significant differences in the overall survival and recurrence-free survival outcomes after microwave ablation or radiofrequency ablation. However, we recommend microwave ablation for hepatocellular carcinoma tumors with a diameter of >2 cm based on the lower number of insertions.
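Propensity score matching, as used above to balance the two ablation groups, can be sketched as fitting a treatment model on baseline covariates and pairing each treated patient with the nearest unmatched control. The covariates, treatment labels and 1:1 nearest-neighbour rule below are simplifying assumptions for illustration, not the study's exact procedure; scikit-learn is assumed available.

```python
# Minimal sketch of 1:1 nearest-neighbour propensity score matching.
# Covariates and treatment labels are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([rng.normal(60, 10, n),      # age
                     rng.normal(2.5, 1.0, n)])   # tumour size (cm)
treated = rng.integers(0, 2, n)                  # 1 = MWA, 0 = RFA (hypothetical)

# Propensity score = P(treatment | covariates)
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

treated_idx = np.where(treated == 1)[0]
control_idx = np.where(treated == 0)[0]

matches, used = [], set()
for i in treated_idx:
    # nearest still-available control by propensity-score distance
    order = control_idx[np.argsort(np.abs(ps[control_idx] - ps[i]))]
    for j in order:
        if j not in used:
            matches.append((i, j))
            used.add(j)
            break

print(f"{len(matches)} matched pairs")
```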
abstract_id: PUBMED:31293775
Transcatheter arterial chemoembolization combined with radiofrequency or microwave ablation for hepatocellular carcinoma: a review. Hepatocellular carcinoma (HCC) is the sixth most common type of malignancy. Several therapies are available for HCC and are determined by stage of presentation, patient clinical status and liver function. Local-regional treatment options, including transcatheter arterial chemoembolization, radiofrequency ablation or microwave ablation, are safe and effective for HCC but are accompanied by limitations. The synergistic effects of combined transcatheter arterial chemoembolization and radiofrequency ablation/microwave ablation may overcome these limitations and improve the therapeutic outcome. The purpose of this article is to review the current literature on these combined therapies and examine their efficacy, safety and influence on the overall and recurrence-free survival in patients with HCC.
abstract_id: PUBMED:31217342
Radiofrequency Ablation and Microwave Ablation in Liver Tumors: An Update. This article provides an overview of radiofrequency ablation (RFA) and microwave ablation (MWA) for treatment of primary liver tumors and hepatic metastasis. Only studies reporting RFA and MWA safety and efficacy on liver were retained. We found 40 clinical studies that satisfied the inclusion criteria. RFA has become an established treatment modality because of its efficacy, reproducibility, low complication rates, and availability. MWA has several advantages over RFA, which may make it more attractive to treat hepatic tumors. According to the literature, the overall survival, local recurrence, complication rates, disease-free survival, and mortality in patients with hepatocellular carcinoma (HCC) treated with RFA vary between 53.2 ± 3.0 months and 66 months, between 59.8% and 63.1%, between 2% and 10.5%, between 22.0 ± 2.6 months and 39 months, and between 0% and 1.2%, respectively. According to the literature, overall survival, local recurrence, complication rates, disease-free survival, and mortality in patients with HCC treated with MWA (compared with RFA) vary between 22 months for focal lesion >3 cm (vs. 21 months) and 50 months for focal lesion ≤3 cm (vs. 27 months), between 5% (vs. 46.6%) and 17.8% (vs. 18.2%), between 2.2% (vs. 0%) and 61.5% (vs. 45.4%), between 14 months (vs. 10.5 months) and 22 months (vs. no data reported), and between 0% (vs. 0%) and 15% (vs. 36%), respectively. According to the literature, the overall survival, local recurrence, complication rates, and mortality in liver metastases patients treated with RFA (vs. MWA) are not statistically different for both the survival times from primary tumor diagnosis and survival times from ablation, between 10% (vs. 6%) and 35.7% (vs. 39.6), between 1.1% (vs. 3.1%) and 24% (vs. 27%), and between 0% (vs. 0%) and 2% (vs. 0.3%). MWA should be considered the technique of choice in selected patients, when the tumor is ≥3 cm in diameter or is close to large vessels, independent of its size. IMPLICATIONS FOR PRACTICE: Although technical features of the radiofrequency ablation (RFA) and microwave ablation (MWA) are similar, the differences arise from the physical phenomenon used to generate heat. RFA has become an established treatment modality because of its efficacy, reproducibility, low complication rates, and availability. MWA has several advantages over RFA, which may make it more attractive than RFA to treat hepatic tumors. The benefits of MWA are an improved convection profile, higher constant intratumoral temperatures, faster ablation times, and the ability to use multiple probes to treat multiple lesions simultaneously. MWA should be considered the technique of choice when the tumor is ≥3 cm in diameter or is close to large vessels, independent of its size.
abstract_id: PUBMED:20434862
Radiofrequency and microwave tumor ablation in patients with implanted cardiac devices: is it safe? Purpose: To identify malfunction of implanted cardiac devices during or after thermal ablation of tumors in lung, kidney, liver or bone, using radiofrequency (RF) or microwave (MW) energy.
Materials And Methods: After providing written consent, 19 patients (15 men and 4 women; mean age 78 years) with pacemakers or pacemaker/defibrillators underwent 22 CT image-guided percutaneous RF or MW ablation of a variety of tumors. Before and after each procedure, cardiac devices were interrogated and reprogrammed by a trained cardiac electrophysiology fellow. Possible pacer malfunctions included abnormalities on electrocardiographic (EKG) monitoring and alterations in device settings. Our institutional review board approved this Health Insurance Portability and Accountability Act-compliant study. Informed consent for participation in this retrospective study was deemed unnecessary by our review board.
Results: During 20 of 22 sessions, no abnormalities were identified in continuous EKG tracings or pacemaker functions. However, in two sessions significant changes occurred in pacemaker parameters: inhibition of pacing during RF application in one session and resetting of mode by RF energy in another session. These changes did not result in hemodynamic instability of either patient. MW ablation was not associated with any malfunction. In all 22 sessions, pacemakers were undamaged and successfully reset to original parameters.
Conclusion: RF or MW ablation of tumors in liver, kidney, bone and lung can be performed safely in patients with permanent intra-cardiac devices, but careful planning between radiology and cardiology is essential to avoid adverse outcomes.
abstract_id: PUBMED:35645097
Comparison of percutaneous microwave ablation with radiofrequency ablation for hepatocellular carcinoma adjacent to major vessels: A retrospective study. Purpose: To compare the therapeutic efficacy and safety of percutaneous microwave ablation (MWA) with those of percutaneous radiofrequency ablation (RFA) for the treatment of hepatocellular carcinoma (HCC) adjacent to major vessels.
Methods: From January 2010 to April 2011, 78 patients with a single nodule no larger than 5 cm, adjacent to major vessels, were enrolled in this study. Forty-four patients (forty-one men, three women; age range, 33-72 years) treated by MWA were compared with thirty-four patients (thirty-one men, three women; age range, 33-75 years) treated by RFA. Local tumor progression rate, overall survival rate, and disease-free survival rate were calculated using the Kaplan-Meier method, and differences between groups were estimated by the log-rank test.
Results: No death related to treatment occurred in the two groups. The 1-, 2-, and 3-year local tumor progression rates were 6.8%, 11.4%, and 15.9%, respectively, in the microwave group versus 17.6%, 20.6%, and 20.6%, respectively, in the radiofrequency group (P = 0.544). The rates of major complications associated with microwave and radiofrequency ablation were 2.3% (1/44) versus 0% (0/34; P = 0.376). The microwave group's 1-, 2-, and 3-year disease-free survival rates were 72.7%, 65.9%, and 51.8%, respectively, and those in the radiofrequency group were 58.8%, 52.9%, and 47.1%, respectively (P = 0.471). The microwave group's 1-, 2-, and 3-year overall survival rates were 93.2%, 90.9%, and 83.6%, respectively, and those in the radiofrequency group were 91.2%, 88.2%, and 82.4%, respectively (P = 0.808). There was no significant difference in local tumor progression, complications related to treatment, or long-term results between the two modalities. The incidence of peritumoral structure damage on image scan was significantly higher in the microwave group than in the RFA group (P = 0.025).
Conclusions: Both RFA and MWA are safe and effective techniques for HCC adjacent to major vessels and have the same clinical value.
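The survival comparison described above rests on Kaplan-Meier estimation and the log-rank test. A minimal sketch, assuming the lifelines package is available and using hypothetical follow-up data rather than values from the study:

```python
# Minimal sketch of a Kaplan-Meier / log-rank comparison of two ablation groups.
# Follow-up times and event indicators are hypothetical.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# months of follow-up and event indicator (1 = local tumour progression)
mwa_time = [6, 12, 18, 24, 30, 36, 36, 36]
mwa_event = [0, 1, 0, 0, 1, 0, 0, 0]
rfa_time = [5, 10, 14, 22, 28, 34, 36, 36]
rfa_event = [1, 0, 1, 0, 0, 1, 0, 0]

kmf = KaplanMeierFitter()
kmf.fit(mwa_time, event_observed=mwa_event, label="MWA")
print(kmf.survival_function_.tail(1))   # progression-free proportion at last time point

result = logrank_test(mwa_time, rfa_time,
                      event_observed_A=mwa_event, event_observed_B=rfa_event)
print(f"log-rank p = {result.p_value:.3f}")
```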
abstract_id: PUBMED:28396723
Laser ablation of liver tumors: An ancillary technique, or an alternative to radiofrequency and microwave? Radiofrequency ablation (RFA) is currently the most popular and used ablation modality for the treatment of non surgical patients with primary and secondary liver tumors, but in the last years microwave ablation (MWA) is being technically improved and widely rediscovered for clinical use. Laser thermal ablation (LTA) is by far less investigated and used than RFA and MWA, but the available data on its effectiveness and safety are quite good and comparable to those of RFA and MWA. All the three hyperthermia-based ablative techniques, when performed by skilled operators, can successfully treat all liver tumors eligible for thermal ablation, and to date in most centers of interventional oncology or interventional radiology the choice of the technique usually depends on the physician's preference and experience, or technical availability. However, RFA, MWA, and LTA have peculiar advantages and limitations that can make each of them more suitable than the other ones to treat patients and tumors with different characteristics. When all the three thermal ablation techniques are available, the choice among RFA, MWA, and LTA should be guided by their advantages and disadvantages, number, size, and location of the liver nodules, and cost-saving considerations, in order to give patients the best treatment option.
abstract_id: PUBMED:34344807
Clinical and functional results of radiofrequency ablation and microwave ablation in patients with benign thyroid nodules. Objectives: To determine how well ultrasound-guidance percutaneous radiofrequency ablation (RFA) and microwave ablation (MWA) performed for benign symptomatic thyroid nodules in terms of clinical and functional outcomes.
Methods: Patients who had thyroid nodule-related symptoms such as dysphagia, cosmetic issues, pain, foreign body sensation, hyperthyroidism secondary to autonomous nodules, or concern about malignancy were included in the study. The primary outcome was the comparison of symptom scores obtained at 1, 3, and 6 months after RFA and MWA. Changes in nodule volume and in thyroid gland function were secondary objectives.
Results: This prospective study, carried out between November 2014 and January 2017 at the General Surgery Department, Marmara University, Faculty of Medicine, Istanbul, Turkey, included a total of 100 nodules (50% MWA, 50% RFA). There were statistically significant changes in pain scores, dysphagia scores, and foreign body sensation scores at 1, 3, and 6 months after therapy in both ablation groups (p=0.0006, p=0.0004, p=0.0005). At the same time, there were statistically significant reductions in the size and volume of the nodules for RFA and MWA (p=0.0004, p=0.0003). There was no significant difference between the RFA and MWA groups' cosmetic scoring and volume changes (p=0.68, p=0.43).
Conclusions: Alternative therapies for benign symptomatic thyroid nodules include RFA and MWA. The findings of this research revealed that both approaches are safe and effective.
abstract_id: PUBMED:30929016
Is microwave ablation more effective than radiofrequency ablation in achieving local control for primary pulmonary malignancy? A best evidence topic in thoracic surgery was written according to a structured protocol. The question addressed was 'Is microwave ablation (MWA) more effective than radiofrequency ablation (RFA) in achieving local control for primary lung cancer?'. Altogether, 439 papers were found, of which 7 represented the best evidence to answer the clinical question. The authors, journal, date and country of publication, patient group studied, study type, relevant outcomes and results of these papers are tabulated. Both are thermal ablative techniques, with microwave ablation (MWA) the newer technique and radiofrequency ablation (RFA) with a longer track record. Lack of consensus with regard to definitions of technical success and efficacy and heterogeneity of study inclusions limits studies for both. The only direct comparison study does not demonstrate a difference with either technique in achieving local control. The quality of evidence for MWA is very limited by retrospective nature and heterogeneity in technique, power settings and tumour type. Tumour size and late-stage cancer were shown to be associated with higher rates of local recurrence in 1 MWA study. RFA studies were generally of a higher level of evidence comprising prospective trials, systematic review and meta-analysis. The recurrence rates for MWA and RFA overlapped, and for the included studies ranged between 16% and 44% for MWA and 9% and 58% for RFA. The current evidence, therefore, does not clearly demonstrate a benefit of MWA over RFA in achieving local control in primary lung cancer.
abstract_id: PUBMED:26052394
Percutaneous microwave ablation vs radiofrequency ablation in the treatment of hepatocellular carcinoma. Hepatocellular cancer ranks fifth among cancers and is related to chronic viral hepatitis, alcohol abuse, steatohepatitis and liver autoimmunity. Surgical resection and orthotopic liver transplantation have curative potential, but fewer than 20% of patients are suitable candidates. Interventional treatments are offered to the vast majority of patients. Radiofrequency (RFA) and microwave ablation (MWA) are among the therapeutic modalities, with similar indications which include the presence of up to three lesions, smaller than 3 cm in size, and the absence of extrahepatic disease. The therapeutic effect of both methods relies on thermal injury, but MWA uses an electromagnetic field as opposed to electrical current used in RFA. Unlike MWA, the effect of RFA is partially limited by the heat-sink effect and increased impedance of the ablated tissue. Compared with RFA, MWA attains a more predictable ablation zone, permits simultaneous treatment of multiple lesions, and achieves larger coagulation volumes in a shorter procedural time. Major complications of both methods are comparable and infrequent (approximately 2%-3%), and they include haemorrhage, infection/abscess, visceral organ injury, liver failure, and pneumothorax. RFA may incur the additional complication of skin burns. Nevertheless, there is no compelling evidence for differences in clinical outcomes, including local recurrence rates and survival.
Answer: According to the study identified by PUBMED:20434862, radiofrequency (RF) and microwave (MW) ablation of tumors in liver, kidney, bone, and lung can be performed safely in patients with permanent intra-cardiac devices. The study involved 19 patients with pacemakers or pacemaker/defibrillators who underwent 22 CT image-guided percutaneous RF or MW ablation procedures. Cardiac devices were interrogated and reprogrammed before and after each procedure by a trained cardiac electrophysiology fellow. During 20 of the 22 sessions, no abnormalities were identified in continuous electrocardiographic (EKG) tracings or pacemaker functions. However, in two sessions, significant changes occurred in pacemaker parameters: inhibition of pacing during RF application in one session and resetting of mode by RF energy in another session. These changes did not result in hemodynamic instability of either patient. MW ablation was not associated with any malfunction. In all 22 sessions, pacemakers were undamaged and successfully reset to original parameters. The conclusion of the study is that RF or MW ablation of tumors can be performed safely in patients with permanent intra-cardiac devices, but careful planning between radiology and cardiology is essential to avoid adverse outcomes. |
Instruction: Occipitocervical contoured rod stabilization: does it still have a role amidst the modern stabilization techniques?
Abstracts:
abstract_id: PUBMED:18040109
Occipitocervical contoured rod stabilization: does it still have a role amidst the modern stabilization techniques? Background: The occipitocervical contoured rod (CR) stabilization for use in craniovertebral junction (CVJ) pathologies is an effective and economical technique of posterior fusion (PF).
Aims: The various indications for CR in CVJ pathologies are discussed.
Settings And Design: Retrospective analysis.
Materials And Methods: Fifty-four patients (mean age: 31.02+/-13.44 years; male: female ratio=5.75:1) who underwent CR stabilization are included. The majority had congenital atlantoaxial dislocation (AAD; n=50); two had CVJ tuberculosis; one each had rheumatoid arthritis and C2-3 listhesis, respectively. The indications for CR fusion in congenital AAD were associated Chiari 1 malformation (C1M) (n=29); occipitalized C1 arch and/or malformed or deficient C1 or C2 posterior elements (n=9); hypermobile AAD (n=2); and, rotatory AAD (n=3). Contoured rod as a revision procedure was also performed in seven patients. Most patients were in poor grade (18 in Grade III [partial dependence for daily needs] and 15 in Grade IV [total dependence]); 15 patients were in Grade II [independent except for minor deficits] and six in Grade I [no weakness except hyperreflexia or neck pain].
Results: Twenty-four patients improved, 18 stabilized and six deteriorated at a mean follow-up (FU) of 17.78+/-19.75 (2-84) months. Six patients were lost to FU. In 37 patients with a FU of at least three months, stability and bony union could be assessed. Thirty-one of them achieved a bony fusion/stable construct.
Conclusions: Contoured rod is especially useful for PF in cases of congenital AAD with coexisting C1M, cervical scoliosis, sub-axial instability and/or asymmetrical facet joints. In acquired pathologies with three-column instability, inclusion of the joints one level above the affected one by using CR especially enhances stability.
abstract_id: PUBMED:27630479
Comparison of hinged and contoured rods for occipitocervical arthrodesis in adults: A clinical study. Introduction: A rigid construct that employs an occipital plate and upper cervical screws and rods is the current standard treatment for craniovertebral junction (CVJ) instability. A rod is contoured to accommodate the occipitocervical angle. Fatigue failure has been associated these acute bends. Hinged rod systems have been developed to obviate intraoperative rod contouring.
Object: The aim of this study is to determine the safety and efficacy of the hinged rod system in occipitocervical fusion.
Materials And Methods: This study retrospectively evaluated 39 patients who underwent occipitocervical arthrodesis. Twenty patients were treated with hinged rods versus 19 with contoured rods. Clinical and radiographic data were compared and analyzed.
Results: Preoperative and postoperative Nurick and Frankel scores were similar between both groups. The use of allograft, autograft or bone morphogenetic protein was similar in both groups. The average number of levels fused was 4.1 (±2.4) and 3.4 (±2) for hinged and contoured rods, respectively. The operative time, estimated blood loss, and length of stay were similar between both groups. The occiput to C2 angle was similarly maintained in both groups and all patients demonstrated no movement across the CVJ on flexion-extension X-rays during their last follow-up. The average follow-up for the hinged and contoured rod groups was 12.2 months and 15.9 months, respectively.
Conclusion: Hinged rods provide a safe and effective alternative to contoured rods during occipitocervical arthrodesis.
abstract_id: PUBMED:17321967
Augmentation of occipitocervical contoured rod fixation with C1-C2 transarticular screws. Background Context: The technique of occipitocervical fusion using a threaded contoured rod attached with sublaminar wires to the occiput and upper cervical vertebrae is widely used throughout the world and has been clinically proven to provide effective fixation of the destabilized spine. However, this system has some disadvantages in maintaining stability, especially at C1-C2 because of the large amount of axial rotation at this level. In some clinical situations such as fracture of the C1 lamina, C1 laminectomy, and excessively lordotic curvature, it is not always possible to wire C1 directly into the construct. In such cases, combination of other stabilization methods that include C1 indirectly can be used to achieve a reliable posterior internal fixation.
Purpose: Primarily, to evaluate whether a contoured rod construct in which C1 is indirectly included using C1-C2 transarticular screws is biomechanically equivalent to a standard, fully wired contoured rod construct. Secondarily, to evaluate the biomechanical benefit of adding C1-C2 transarticular screws to a fully wired contoured rod construct.
Study Design: Repeated-measures nondestructive in vitro biomechanical testing of destabilized cadaveric human occipitocervical spine specimens.
Methods: Six human cadaveric specimens from the occiput to C3 were studied. Angular and linear displacement data were recorded while nonconstraining nondestructive loads were applied. Three methods of fixation were tested: contoured rod incorporating C1 with and without transarticular screws and contoured rod with transarticular screws without incorporating C1.
Results: All three constructs reduced motion to well within normal range. In contoured rod constructs with C1 wired, addition of transarticular screws slightly but significantly improved stability. In constructs with transarticular screws, incorporation of C1 into the contoured rod wiring did not improve stability significantly.
Conclusions: Adding C1-C2 transarticular screws to a wired contoured rod construct where C1 is included only slightly improves stability. As the absolute reduction in motion from adding transarticular screws is small (<1 degree), it is doubtful whether any enhanced fusion from this additional procedure outweighs the surgical risks. However, transarticular screws provide an effective alternate method to fixate C1 when the posterior arch of C1 is absent or has been fractured.
abstract_id: PUBMED:35693373
A Novel Technique for Occipitocervical Fusion with Triple Rod Connection to Prevent Implant Failure. Occipitocervical fusion is an effective surgical method for treating various upper cervical disorders. However, complications such as implant failure due to rod breakage have been reported. Therefore, we devised a surgical technique for occipitocervical fusion with a triple rod connection to prevent implant failure. Occipitocervical fusion with triple rod connection was performed in two cases with a high risk of instability such as athetoid cerebral palsy and rheumatoid arthritis. A multiaxial screw (diameter: 4.5 mm) was inserted into the screw hole in the middle of the occipital plate, and subsequently, an additional rod was attached. It was connected to the main rod using an offset connector at the caudal side. The connection of the additional rod was simple and did not interfere with the fusion bed for bone graft between the occipital bone and axis. The head of the screw was crimped to the occipital plate, and the plate was firmly fixed. Moreover, since the head of the screw did not protrude to the dorsal side, the tension of the soft tissue and skin did not increase. No complications occurred after surgery in both cases. In addition, no special instruments were required to connect the additional rod to the main rod in this procedure. Therefore, our technique may be useful as an option to prevent implant failure due to rod breakage at the craniocervical junction.
abstract_id: PUBMED:30610329
Occipitocervical Fusion: An Updated Review. Occipitocervical fusion (OCF) is indicated for instability at the craniocervical junction (CCJ). Numerous surgical techniques, which evolved over 90 years, as well as unique anatomic and kinematic relationships of this region present a challenge to the neurosurgeon. The current standard involves internal rigid fixation by polyaxial screws in the cervical spine, contoured rods and an occipital plate. Such an approach precludes the need for postoperative external stabilization, involves fewer spinal segments, and provides 95-100% fusion rates. New surgical techniques such as the occipital condyle screw or transarticular occipito-condylar screws address limitations of occipital fixation such as variable lateral occipital bone thickness and dural sinus anatomy. As the C0-C1-C2 complex is the most mobile portion of the cervical spine (40% of flexion-extension, 60% of rotation and 10% of lateral bending), stabilization leads to a substantial reduction of neck movements. Preoperative assessment of vertebral artery anatomical variations and the feasibility of screw insertion, as well as visualization with intraoperative fluoroscopy, are necessary. Placement of structural and supplemental bone graft around the decorticated bony elements is an essential step of every OCF procedure, as the ultimate goal of stabilization with implants is to provide immobilization until bony fusion can develop.
abstract_id: PUBMED:23157394
Occipitocervical fusion using a contoured rod and wire construct in children: a reappraisal of a vintage technique. Object: Many methods to stabilize and fuse the craniocervical junction have been described. One of the early designs was a contoured (Luque) rod fixated with wires, the so-called Hartshill-Ransford loop. In this study, the authors report their 20-year experience with this surgical technique in children.
Methods: The authors reviewed the medical records of patients 18 years of age or younger who underwent dorsal occipitocervical fusion procedures between March 1992 and March 2012 at Le Bonheur Children's Hospital using a contoured rod and wire construct. Data on basic patient characteristics, causes of instability, neurological function at presentation and at last follow-up, details of surgery, complications, and radiographic outcome were collected.
Results: Twenty patients (11 male) were identified, with a mean age of 5.5 years (range 1-18 years) and a median follow-up of 43.5 months. Fourteen patients had atlantooccipital dislocation, 2 patients had atlantoaxial fracture-dissociations, 2 had Down syndrome with occipitocervical and atlantoaxial instability, 1 had an epithelioid sarcoma from the clivus to C-2, and 1 had an anomalous atlas with resultant occipitocervical instability. Surgical stabilization extended from the occiput to C-1 in 3 patients, C-2 in 6, C-3 in 8, and to C-4 in 3. Bone morphogenetic protein was used in 2 patients. Two patients were placed in a halo orthosis; the rest were kept in a hard collar for 6-8 weeks. All patients were neurologically stable after surgery. One patient with a dural tear experienced wound dehiscence with CSF leakage and required reoperation. Eighteen patients went on to achieve fusion within 6 months of surgery; 1 patient was initially lost to follow-up, but recent imaging demonstrated a solid fusion. There were no early hardware or bone failures requiring hardware removal, but radiographs obtained 8 years after surgery showed that 1 patient had an asymptomatic fractured rod. There were no instances of symptomatic junctional degeneration, and no patient was found to have increasing lordosis over the fused segments. Five (31%) of the 16 trauma patients required a shunt for hydrocephalus.
Conclusions: Despite the proliferation of screw-fixation techniques for craniocervical instability in children, the contoured rod-wire construct remains an effective, less expensive, and technically easier alternative that has been in use for almost 30 years. It confers immediate stability, and therefore most patients will not need to be placed in a halo device postoperatively. A secondary observation in our series was the high (30%) rate of hydrocephalus requiring a shunt in patients with traumatic instability.
abstract_id: PUBMED:32089614
Surgical, clinical, and radiological outcomes of occipitocervical fusion using the plate-screw-rod system with allograft in craniocervical instability. Objective: We evaluated surgical, clinical, and radiological outcomes of posterior occipitocervical fusion (OCF) using plate-rod-screw construct supplemented with allograft in cases of occipitocervical instability.
Study Design: This was a retrospective analysis of prospective collected data.
Methods: Data of 52 patients who underwent posterior OCF using plate-screw-rod construct supplemented with allograft at a single institute from 2009 to 2014 were analyzed. Demographics, clinical parameters (Visual Analog Score [VAS], ODI, and mJOA score), functional status (McCormick scale), radiological parameters - mean atlantodens interval, posterior occipitocervical angle, occipitocervical 2 angle, and surgical parameters (operative time, blood loss, hospital stay, and fusion) with complications were evaluated.
Results: The mean age of the patients was 54.56 ± 16.21 years, with a male:female ratio of 28:24. The mean operative time was 142.2 min (90-185 min) and the mean blood loss was 250.8 ml. The mean duration of hospital stay was 6.7 days and the mean follow-up period was 65.17 ± 5.39 months. There was significant improvement in clinical parameters (modified JOA score, VAS, and Oswestry Disability Index values) postoperatively. Forty patients showed recovery in neurological status of at least one grade on the McCormick scale, with no neurological deterioration in any patient. Furthermore, radiological parameters at the cervicomedullary junction returned to an acceptable range. Implant-related complications were noted in 1 patient, and 1 patient had a vertebral artery injury. Dural tears occurred in 3 patients and infection in 2 patients. Fusion was achieved in 46 cases, with a mean time to fusion of 11.039 months.
Conclusion: Patients with occipitocervical instability can successfully undergo posterior OCF using plate-screw-rod construct supplemented with allograft with high fusion rate, good clinical and functional outcomes, and low complication rate.
abstract_id: PUBMED:29353274
Postoperative Immobilization following Occipitocervical Fusion in the Pediatric Population: Outcome Evaluation and Review of Literature. The scientific literature does not have a consensus about the role and method of postoperative immobilization after occipitocervical fusion in the pediatric population. The primary goal of this study is to review the medical literature and evaluate different immobilization methods and their impact on fusion, following the surgical management of craniocervical instability in children. The review started with an extensive search for randomized controlled trials, case series and case reports describing occipitocervical junction pathologies, clinical and epidemiological characteristics, and treatment. The search was performed using the PubMed database, evaluating all the literature involving postoperative immobilization after occipitocervical fusion in pediatric patients. The results showed that in most series the majority of cases of occipitocervical stabilization were due to congenital spinal instability, followed by trauma. The most common type of surgery performed was occipitocervical fusion using screw-and-rod constructs. The different methods of postoperative immobilization did not affect outcomes. We can therefore conclude that screw-and-rod constructs in occipitocervical fusion augment fusion rates independently of which immobilization method was used, even when none was used at all.
abstract_id: PUBMED:37206351
The effectiveness of pre-contoured titanium alloy rods in inducing thoracic kyphosis after sequential spinal releases in an in vitro biomechanical model. Purpose: Evaluate the ability of pre-contoured rods to induce thoracic kyphosis (TK) in human cadaveric spines and determine the effectiveness of sequential surgical adolescent idiopathic scoliosis (AIS) release procedures.
Methods: Six thoracolumbar (T3-L2) spine specimens were instrumented with pedicle screws bilaterally (T4-T12). Over correction using pre-contoured rods was performed for intact condition and Cobb angle was measured. Rod radius of curvature (RoC) was measured pre- and post-reduction. The process was repeated following sequential release procedures of (1) interspinous and supraspinous ligaments (ISL); (2) ligamentum flavum; (3) Ponte osteotomy; (4) posterior longitudinal ligament (PLL); and (5) transforaminal discectomy. Cobb measurements determined the effective contribution of release on TK and RoC data displayed effects of reduction to the rods.
Results: The intact TK (T4-12) was 38.0° and increased to 51.7° with rod reduction and over correction. Each release resulted in 5°-7° of additional kyphosis; the largest effects came from the ISL and PLL releases. All releases resulted in significant increases in kyphosis compared to the intact condition with rod reduction and over correction. Regionally, kyphosis increased ∼2° for each region following successive releases. Comparing RoC before and after reduction showed a significant 6° loss in rod curvature, independent of release type.
Conclusion: Kyphosis increased in the thoracic spine using pre-contoured and over corrected rods. Subsequent posterior releases provided a substantial, meaningful clinical change in the ability to induce additional kyphosis. Regardless of the number of releases, the ability of the rods to induce and over correct kyphosis was reduced following reduction.
abstract_id: PUBMED:23932777
An unexpected complication of occipitocervical stabilization surgery: retropharyngeal hematoma. Background Context: To report a unique case of an unexpected complication of occipitocervical stabilization surgery, namely retropharyngeal hematoma (RH).
Purpose: Postoperative RH is a very rare complication and has never been reported after posterior occipitocervical surgery.
Study Design: Case report.
Methods: A 44-year-old woman who had been treated for rheumatoid arthritis for 20 years was admitted to our hospital in a wheelchair with complaints of neck pain and weakness in both upper and lower extremities. She was diagnosed with basilar invagination, and occipitocervical (C0-C5) transpedicular fixation with osteosynthesis using iliac autograft was performed. The airway was found to be obstructed after extubation. The airway was maintained with a laryngeal mask, and computed tomography revealed an RH. Emergent tracheotomy was performed. The patient was decannulated after resorption of the RH at 10 days and was discharged.
Conclusion: To our knowledge, this is the first reported patient with unexplained RH after posterior cervical spinal surgery.
Answer: Occipitocervical contoured rod stabilization continues to have a role amidst modern stabilization techniques. It is an effective and economical technique for posterior fusion in craniovertebral junction (CVJ) pathologies, particularly useful in cases of congenital atlantoaxial dislocation with coexisting Chiari 1 malformation, cervical scoliosis, sub-axial instability, and/or asymmetrical facet joints. It is also beneficial in acquired pathologies with three-column instability, where the inclusion of joints one level above the affected one by using a contoured rod enhances stability (PUBMED:18040109).
Despite the development of hinged rod systems designed to avoid intraoperative rod contouring and associated fatigue failure, contoured rods remain a viable option. Hinged rods have been shown to be a safe and effective alternative to contoured rods during occipitocervical arthrodesis, but the choice between the two may depend on specific clinical scenarios and surgeon preference (PUBMED:27630479).
The addition of C1-C2 transarticular screws to a contoured rod construct can slightly improve stability, especially when the posterior arch of C1 is absent or fractured. However, the absolute reduction in motion from adding transarticular screws is small, and the surgical risks must be weighed against the potential benefits (PUBMED:17321967).
Innovations such as the triple rod connection technique have been developed to prevent implant failure due to rod breakage, suggesting that there is ongoing refinement and application of contoured rod techniques in occipitocervical fusion (PUBMED:35693373).
The contoured rod-wire construct has been used effectively for nearly 30 years, providing immediate stability and allowing most patients to avoid postoperative halo devices. It remains an effective, less expensive, and technically simpler alternative to screw-fixation techniques for craniocervical instability in children (PUBMED:23157394).
Overall, occipitocervical fusion using contoured rods continues to be a relevant technique with high fusion rates, good clinical and functional outcomes, and a low complication rate, even in the context of modern stabilization methods (PUBMED:32089614). |
Instruction: Bicycle helmet campaigns and head injuries among children. Does poverty matter?
Abstracts:
abstract_id: PUBMED:12933770
Bicycle helmet campaigns and head injuries among children. Does poverty matter? Objectives: To assess the impact of a community based bicycle helmet programme aimed at children aged 5-12 years (about 140,000) from poor and well off municipalities.
Methods: A quasi-experimental design, including a control group, was used. Changes in the risk of bicycle related head injuries leading to hospitalisation were measured, using rates ratios.
Results: Reductions in bicycle related head injuries were registered in both categories of municipalities. Compared with the pre-programme period, the protective effect of the programme during the post-programme period was as significant among children from poor municipalities (RR= 0.45 95%CI 0.26 to 0.78) as among those from richer municipalities (RR=0.55 95%CI 0.41 to 0.75).
Conclusion: Population based educational programmes may have a favourable impact on injury risks in poor areas despite lower adoption of protective behaviours.
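The methods above measure changes in hospitalisation risk with rate ratios, and the results are reported as values such as RR = 0.45 (95% CI 0.26 to 0.78). As a purely illustrative sketch of how a rate ratio and its log-normal confidence interval are typically computed, the Python snippet below uses hypothetical counts and denominators; it is not the authors' analysis code and does not reproduce the study's data.

```python
import math

def rate_ratio(events_a, time_a, events_b, time_b, z=1.96):
    """Incidence rate ratio (group A vs. group B) with a Wald-type 95% CI.

    Uses the usual log-normal approximation: SE(log RR) = sqrt(1/a + 1/b).
    All inputs here are hypothetical illustrations, not data from the study above.
    """
    rr = (events_a / time_a) / (events_b / time_b)
    se_log_rr = math.sqrt(1 / events_a + 1 / events_b)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# Hypothetical example: head-injury hospitalisations per child-year in a
# post-programme versus pre-programme period.
rr, lo, hi = rate_ratio(events_a=27, time_a=140_000, events_b=60, time_b=140_000)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```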
abstract_id: PUBMED:27846992
Bicycle helmet use among persons 5years and older in the United States, 2012. Introduction: In 2013, injuries to bicyclists accounted for 925 fatalities and 493,884 nonfatal, emergency department-treated injuries in the United States. Bicyclist deaths increased by 19% from 2010 to 2013. The greatest risk of death and disability to bicyclists is head injuries. The objective of this study was to provide estimates of prevalence and associated factors of bicycle riding and helmet use among children and adults in the United States.
Method: CDC analyzed self-reported data from the 2012 Summer ConsumerStyles survey. Adult respondents (18+ years) were asked about bicycle riding and helmet use in the last 30 days for themselves and their children (5 to 17 years). For bicycle riders, CDC estimated the prevalence of helmet use and conducted multivariable regression analyses to identify factors associated with helmet use.
Results: Among adults, 21% rode bicycles within the past 30 days and 29% always wore helmets. Respondents reported that, of the 61% of children who rode bicycles within the past 30 days, 42% always wore helmets. Children were more likely to always wear helmets (90%) when their adult respondents always wore helmets than when their adult respondents did not always wear helmets (38%). Children who lived in states with a child bicycle helmet law were more likely to always wear helmets (47%) than those in states without a law (39%).
Conclusions: Despite the fact that bicycle helmets are highly effective at reducing the risk for head injuries, including severe brain injuries and death, less than half of children and adults always wore bicycle helmets while riding.
Practical Application: States and communities should consider interventions that improve the safety of riding such as policies to promote helmet use, modeling of helmet wearing by adults, and focusing on high risk groups, including Hispanic cyclists, occasional riders, adults, and children ages 10 to 14.
abstract_id: PUBMED:37593306
Social Disparities in Helmet Usage in Bicycle Accidents Involving Children. Background Bicycle helmet use has a known protective health benefit; yet, pediatric populations have suboptimal helmet rates, which increases the risk of severe injuries. It is imperative to have an updated assessment of behavioral social disparities and for providers to be aware of them to better counsel patients. The study objective was to identify social determinants correlated with helmet use in children involved in bicycle accidents. Based on previous literature, we hypothesized that higher socioeconomic status, female sex, and Caucasian race were associated with increased helmet use. Methods A retrospective case series of 140 pediatric cases of bicycle-related traumas assessing helmet status. Participants presented to the emergency room with injuries due to a bicycle-related trauma and were subsequently admitted to the University of North Carolina (UNC) Hospital System in Chapel Hill, North Carolina (NC), from June 2006 to May 2020. The Institutional Review Board (IRB)-approved study comprised a retrospective chart review of 140 cases from the pediatric (<18 years of age) trauma database with coding indicating bicycle-related injury. Zip codes were used to approximate the median household income utilizing the Proximity One government database. The primary exposure was helmet status, which was determined from the electronic record chart review. The hypothesis was formulated before the start of the study. The main outcomes measured in the study included age, sex, race, helmet status, zip code, insurance status, injury types, and mortality. Results There were a total of 140 study participants, of which 35 were female and 105 were male. Males comprised 79.6% of the non-helmeted group, while females were in the minority in both helmet status groups, with 65.7% still being non-helmeted. Additionally, 51.9% of patients who were helmeted used private insurance, and 59.3% of those non-helmeted used public insurance. Of the 71 head injuries, 88.7% were non-helmeted. Principally, this study found that 80.7% of children involved in a bicycle-related accident were not helmeted. Conclusions Despite NC legislation mandating that children under 16 years of age wear helmets while operating bicycles, many children injured in bicycle-related trauma are not complying with this requirement. This study demonstrates that specific populations have decreased rates of helmet usage and emphasizes the continued need to monitor helmet behaviors.
abstract_id: PUBMED:24115375
Effects of bicycle helmet laws on children's injuries. In recent years, many states and localities in the USA have enacted bicycle helmet laws. We estimate the effects of these laws on injuries requiring emergency department treatment. Using hospital-level panel data and triple difference models, we find helmet laws are associated with reductions in bicycle-related head injuries among children. However, laws also are associated with decreases in non-head cycling injuries, as well as increases in head injuries from other wheeled sports. Thus, the observed reduction in bicycle-related head injuries may be due to reductions in bicycle riding induced by the laws.
abstract_id: PUBMED:38081723
Increased bicycle helmet use in the absence of mandatory bicycle helmet legislation: Prevalence and trends from longitudinal observational studies on the use of bicycle helmets among cyclists in Denmark 2004-2022. Introduction: Using a bicycle helmet reduces the risk of serious head injuries among cyclists substantially. This makes it highly relevant to increase the use of helmets and to measure the prevalence of bicycle helmet use over time and across different groups.
Method: Since 2004, the use of bicycle helmets in Denmark has been measured observationally in two nationwide time series: one among cyclists in city traffic across all age groups, and one among cycling school children (aged 6-16) around schools. The observations have been conducted on a regular basis in different parts of the country following the same methodology over the years.
Results: Bicycle helmet use among cyclists in city traffic in Denmark has increased from 6% in 2004 to 50% in 2022. Among cycling school children, helmet use has increased from 33% in 2004 to 79% in 2022. Throughout the years, helmet wearing rates have been highest among young children and lowest among young adults. Since 2015, female cyclists in city traffic have had a slightly higher helmet use than male cyclists.
Discussion: Several factors might have affected bicycle helmet use in Denmark. One possible factor is a nationwide focus on traffic safety education and behavior change campaigns to encourage helmet wearing. Furthermore, among stakeholders on cycling safety there has been consensus on recommending bicycle helmet use and supporting the promotion of helmets while not recommending or promoting helmet legislation. Finally, more safety-oriented behavior in road traffic in general, and self-reinforcing effects of increased helmet use have plausibly been important factors.
Practical Applications: Increasing bicycle helmet use in a country where cycling is popular is possible in the absence of mandatory bicycle helmet legislation. Persistent behavior change campaigning and education, stakeholder consensus, higher levels of road safety-oriented behaviors, and self-reinforcing processes could potentially be important factors.
abstract_id: PUBMED:31834816
Characteristics of bicycle crashes among children and the effect of bicycle helmets. Objective: Focusing on children (0-17 years), this study aimed to investigate injury and accident characteristics for bicyclists and to evaluate the use and protective effect of bicycle helmets. Method: This nationwide Swedish study included children who had visited an emergency care center due to injuries from a bicycle crash. In order to investigate the causes of bicycle crashes, data from 2014 to 2016 were analyzed thoroughly (n = 7967). The causes of the crashes were analyzed and categorized, focusing on 3 subgroups: children 0-6, 7-12, and 13-17 years of age. To assess helmet effectiveness, the induced exposure approach was applied using data from 2006 to 2016 (n = 24,623). In order to control for crash severity, only bicyclists who had sustained at least one Abbreviated Injury Scale (AIS) 2+ injury (moderate injury or more severe) in body regions other than the head were included. Results: In 82% of the cases the children were injured in a single-bicycle crash, and the proportion decreased with age (0-6: 91%, 7-12: 84%, 13-17: 77%). Of AIS 2+ injuries, 8% were head injuries and 85% were injuries to the extremities (73% upper extremities and 13% lower extremities). Helmet use was relatively high up to the age of 10 (90%), after which it dropped. Helmets were much less frequently used by teenagers (14%), especially girls. Consistently, the share of head injuries increased as the children got older. Bicycle helmets were found to reduce all head injuries by 61% (95% confidence interval [CI] +/- 10%) and AIS 2+ head injuries by 68% (95% CI +/- 12%). The effectiveness in reducing face injuries was lower (45%, CI +/- 10%, for all injuries and 54%, CI +/- 32%, for AIS 2+ injuries). Conclusions: This study indicated that bicycle helmets effectively reduce injuries to the head and face. The results thus point to the need for actions aimed at increasing helmet use, especially among teenagers. Protective measures are necessary to further reduce injuries, especially to the upper extremities.
abstract_id: PUBMED:24426808
Effects of helmet laws and education campaigns on helmet use in young skiers. Objective: Helmet-compulsory laws for young skiers, accompanied by educational campaigns, have recently been implemented in several countries. However, data regarding compliance to these interventions during adolescence are scarce.
Methods: In 2011, a questionnaire survey was performed among 10- to 16-year-old students in 62 Austrian secondary schools.
Results: A total of 2655 questionnaires were completed by 1376 males and 1279 females. Helmet use was reported in 99% of 10- to 15-year-old skiers (for whom helmets are mandatory) and in 91% of 16-year-old skiers (for whom helmets are not mandatory).
Conclusion: Compliance with helmet laws, which were accompanied by educational campaigns, was very high among adolescent skiers. Nevertheless, helmet use decreased slightly during adolescence, and this decrease was particularly pronounced when helmet use was no longer mandatory. Sophisticated multifaceted interventions may have the potential to increase the use of ski helmets among individuals who refuse to wear helmets.
abstract_id: PUBMED:37508789
An Air-Filled Bicycle Helmet for Mitigating Traumatic Brain Injury. We created a novel air-filled bicycle helmet. The aims of this study were (i) to assess the head injury mitigation performance of the proposed helmet and (ii) to compare those results against the performance of a traditional expanded polystyrene (EPS) bicycle helmet. Two bicycle helmet types were subjected to impacts in guided vertical drop tests onto a flat anvil: EPS helmets and air-filled helmets (Bumpair). The maximum acceleration value recorded during the test on the Bumpair helmet was 86.76 ± 3.06 g, while the acceleration during the first shock on the traditional helmets reached 207.85 ± 5.55 g (p < 0.001). For the traditional helmets, the acceleration increased steadily with the number of shocks. There was a strong correlation between the number of impacts and the response of the traditional helmet (cor = 0.94; p < 0.001), while the Bumpair helmets showed a less significant dependence over time (cor = 0.36; p = 0.048), meaning previous impacts had a smaller effect. The air-filled helmet significantly reduced the maximal linear acceleration when compared to an EPS traditional helmet, showing improvements in impact energy mitigation, as well as in resistance to repeated impacts. This novel helmet concept could improve head injury mitigation in cyclists.
abstract_id: PUBMED:24888851
Racial/ethnic and socioeconomic disparities in the use of helmets in children involved in bicycle accidents. Purpose: While bicycle helmet use reduces bicycle-related head injury, few children wear them regularly. We aimed to describe racial/ethnic and socioeconomic differences in pediatric helmet use in Los Angeles County (LAC) to help target groups for injury prevention programs.
Methods: A retrospective review of all pediatric patients involved in bicycle-related accidents in LAC between 2006 and 2011 was performed. Our primary analysis examined the association between helmet use and age, gender, insurance status, and race/ethnicity. We also evaluated the association between helmet use and the need for emergency surgery, mortality, and length of hospital stay (LOH), after adjusting for injury severity score (ISS), age, insurance status, and race/ethnicity.
Results: Of 1248 patients, 11.3% wore helmets, with decreased use among children 12 years and older, minorities, and those without private insurance. Overall, 5.9% required an emergency operation, 34.1% returned to their pre-injury capacity, and mortality was 0.7%. On multivariable analysis, higher ISS increased LOH, the risk for emergency surgery, and mortality.
Conclusion: Nearly 90% of children involved in bicycle-related accidents were not wearing helmets. Helmet use was lower among older children, minorities, and those from a low socioeconomic status. Injury prevention programs targeting low-income middle and high schools and minority communities may help increase helmet use in children in LAC.
abstract_id: PUBMED:26038000
Helmet use in bicycle trauma patients: a population-based study. Introduction: In recent years, the increasing number of bicyclists has evoked the debate on use of bicycle helmet. The aim of this study was to investigate the association between helmet use and injury pattern in bicycle trauma patients.
Patients And Methods: We performed a retrospective population-based study of 186 patients treated for bicycle-related injuries at a Level 1 Trauma Centre in Sweden during a 3-year period. Data were collected from case records. Unconditional logistic regression was used to calculate odds ratios (ORs), and 95% confidence intervals (CIs).
Results: 43.5% of the 186 patients used a bicycle helmet at the time of the crash. Helmet users were less likely to get head and facial injuries in collisions than non-helmet users (OR, 0.3; 95% CI, 0.07-0.8, and OR, 0.07; 95% CI, 0.02-0.3), whereas no difference was seen in single-vehicle accidents. The risk of limb injuries was higher among helmet users.
Conclusions: Non-helmet use is associated with an increased risk of injury to head and face in collisions, whereas helmet use is associated with an increased risk of limb injuries in all types of crashes.
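Several abstracts above (e.g., PUBMED:26038000) report odds ratios with 95% confidence intervals estimated by unconditional logistic regression. The snippet below is a minimal, hedged sketch of that kind of analysis in Python with statsmodels, run on simulated stand-in data; the variable names and values are hypothetical and are not drawn from any of the cited studies.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in data: one row per injured cyclist.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "helmet": rng.integers(0, 2, n),     # 1 = wore a helmet at the time of the crash
    "collision": rng.integers(0, 2, n),  # 1 = collision with a vehicle (vs. single-bicycle crash)
})
# Simulate a head-injury outcome that is less likely for helmeted riders.
linpred = -0.5 - 1.2 * df["helmet"] + 0.8 * df["collision"]
df["head_injury"] = rng.binomial(1, 1 / (1 + np.exp(-linpred)))

# Unconditional logistic regression; exponentiated coefficients are odds ratios.
X = sm.add_constant(df[["helmet", "collision"]])
fit = sm.Logit(df["head_injury"], X).fit(disp=False)
table = pd.concat(
    [np.exp(fit.params).rename("OR"),
     np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})],
    axis=1,
)
print(table)
```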
Answer: The impact of bicycle helmet campaigns on head injuries among children appears to be significant regardless of socioeconomic status. A study assessing the impact of a community-based bicycle helmet program aimed at children aged 5-12 years from both poor and well-off municipalities found that reductions in bicycle-related head injuries were registered in both categories. The protective effect of the program during the post-programme period was significant among children from poor municipalities (RR= 0.45 95%CI 0.26 to 0.78) as well as those from richer municipalities (RR=0.55 95%CI 0.41 to 0.75), suggesting that population-based educational programs can have a favorable impact on injury risks in poor areas despite lower adoption of protective behaviors (PUBMED:12933770).
However, it is important to note that social disparities in helmet usage still exist. A study found that specific populations, including those with lower socioeconomic status, have decreased rates of helmet usage. Despite legislation mandating that children under 16 years of age wear helmets while operating bicycles, many children injured in bicycle-related trauma are not complying with this requirement (PUBMED:37593306). This indicates a need for continued monitoring and targeted interventions to increase helmet use among these groups.
Furthermore, characteristics of bicycle crashes among children and the effect of bicycle helmets show that helmet use is relatively high up to the age of 10, after which it drops, particularly among teenagers and especially girls. The study found that bicycle helmets effectively reduce injuries to the head and face, highlighting the need for actions aimed at increasing helmet use, especially among teenagers (PUBMED:31834816).
In summary, while poverty does not negate the effectiveness of bicycle helmet campaigns in reducing head injuries among children, socioeconomic factors do influence helmet usage rates. Therefore, targeted interventions and continued education are necessary to address social disparities and improve helmet use among all children, particularly those from lower socioeconomic backgrounds and teenagers who are less likely to wear helmets. |
Instruction: Is there a correlation between molecular markers and response to neoadjuvant chemoradiotherapy in locally advanced squamous cell esophageal cancer?
Abstracts:
abstract_id: PUBMED:29066057
The impact of pathological complete response after neoadjuvant chemoradiotherapy in locally advanced squamous cell carcinoma of esophagus. Background: The impact of pathological complete response after neoadjuvant chemoradiotherapy on survival of patients with squamous cell carcinoma of esophagus is still controversial. We retrospectively investigated the survival outcome in this group of patients.
Methods: Ninety-eight patients with locally advanced squamous cell carcinoma of the esophagus who received neoadjuvant chemoradiotherapy were included in this retrospective analysis. Treatment protocols were radiotherapy with a standard dose of 50.4 Gy/28 fr, and chemotherapy with cisplatin 20 mg/m2 and 5-FU 800 mg/m2 for 4 days given in weeks 1 and 5. After neoadjuvant chemoradiotherapy was completed, patients who were eligible for surgery underwent surgery within 4-6 weeks. Patients who were not suitable for surgery were shifted to definitive chemoradiotherapy. The primary end points were overall survival and progression-free survival.
Results: Sixty-eight of the ninety-eight patients received surgery after neoadjuvant chemoradiotherapy. Thirty-two patients achieved a pathological complete response, for a pCR rate of 47%. Thirty patients were shifted to definitive concurrent chemoradiotherapy. The 2-year overall survival rate was 81.3% in the patients whose tumors showed a pCR and 58.3% in the patients with tumors that had a pathological partial response (p = 0.025). The 2-year overall survival rates in patients who received neoadjuvant chemoradiotherapy followed by surgery and definitive chemoradiotherapy were 69.1% and 40.0%, respectively. Thirteen patients experienced grade 3-4 adverse events.
Conclusion: Pathological complete response after neoadjuvant chemoradiotherapy is associated with a significant survival benefit in patients with locally advanced squamous cell carcinoma of esophagus. The toxicities related to neoadjuvant chemoradiotherapy were tolerable.
abstract_id: PUBMED:23335529
Is there a correlation between molecular markers and response to neoadjuvant chemoradiotherapy in locally advanced squamous cell esophageal cancer? Purpose: To evaluate the expression of epidermal growth factor receptor (EGFR), p53, p21 and thymidylate synthase (TS) in a pretherapy biopsy specimen of locally advanced squamous cell esophageal cancer and correlate these markers with response to neoadjuvant chemoradiotherapy.
Methods: Sixty-two patients with histopathologically proven locally advanced (T3 or greater) squamous cell esophageal cancer were enrolled. The expression of EGFR, p53, p21 and TS markers was assessed with immunohistochemistry. Semiquantitative assessment of the expression of these markers was performed based on the percentage of stained cells. Radiotherapy (45-50.4 Gy) was delivered concomitantly with 5-fluorouracil (5-FU)/leucovorin (LV)/cisplatin (CIS) chemotherapy. Five to 6 weeks after chemoradiation, response to treatment was assessed. Medically fit and operable patients underwent surgery. The resected material underwent histopathological evaluation of tumor expansion, histological classification after initial multimodality treatment (ypTNM), residual status and tumor regression grade (TRG).
Results: Of the 62 patients enrolled, 41 (66%) were evaluated for molecular markers. The clinical response rate was 43.9%. Of these 41 patients, 12 (29%) underwent surgery. TRG 1 was noted in 58% of the patients. In the pretherapy tumor specimens, positive expression was noted in 80, 90, 80 and 71% for EGFR, p53, p21 and TS, respectively. We noted no statistically significant difference either between tumor marker expression and clinical response to chemoradiation or between tumor marker expression and TRG.
Conclusion: We registered no difference in response to treatment between patients with positive and negative staining for EGFR, TS, p21 and p53.
abstract_id: PUBMED:32140748
Neoadjuvant chemoradiotherapy or chemotherapy for locally advanced esophageal cancer? Background: According to international guidelines neoadjuvant chemoradiotherapy and chemotherapy are recommended for the treatment of locally advanced esophageal cancer. The treatment approach depends on the tumor entity (adenocarcinoma vs. squamous cell carcinoma).
Objective: What benefits do patients with locally advanced esophageal cancer have from neoadjuvant treatment? Is there information in the international literature on whether a particular neoadjuvant treatment is preferred? Does the type of neoadjuvant treatment depend on factors other than the tumor entity? Is there a standard in the drug composition of chemotherapy or a clearly defined chemoradiotherapy regimen?
Material And Methods: A review, evaluation and critical analysis of the international literature were carried out.
Results: Patients with locally advanced esophageal cancer benefit from a neoadjuvant treatment. The current data situation for squamous cell carcinoma of the esophagus demonstrates a better response to neoadjuvant chemoradiotherapy compared to chemotherapy alone. Locally advanced adenocarcinoma of the esophagus can be treated with combined neoadjuvant chemoradiotherapy as well as by chemotherapy alone. Both lead to an improvement in the prognosis. There are often differences particularly among radiation treatment regimens in the different centers. Furthermore, the localization of the tumor can also be important for treatment decisions.
Conclusion: A neoadjuvant treatment is clearly recommended for locally advanced esophageal cancer. Currently, chemoradiotherapy according to the CROSS protocol is preferred for squamous cell carcinoma. For adenocarcinoma both chemotherapy according to the FLOT protocol as well as chemoradiotherapy in a neoadjuvant treatment concept lead to an improvement in the prognosis.
abstract_id: PUBMED:36291954
Recent Advances in Combination of Immunotherapy and Chemoradiotherapy for Locally Advanced Esophageal Squamous Cell Carcinoma. Esophageal cancer has a high mortality rate and a poor prognosis, with more than one-third of patients receiving a diagnosis of locally advanced cancer. Esophageal squamous cell carcinoma (ESCC) is the dominant histological subtype of esophageal cancer in Asia and Eastern Europe. Although neoadjuvant or definitive chemoradiotherapy (CRT) has been the standard treatment for locally advanced ESCC, patient outcomes remain unsatisfactory, with recurrence rates as high as 30-50%. The combination of immune checkpoint inhibitors (ICIs) and CRT has emerged as a novel strategy to treat esophageal cancer, and it may have a synergistic action and provide greater efficacy. In the phase III CheckMate-577 trial, one year of adjuvant nivolumab after neoadjuvant CRT improved disease-free survival in patients with residual disease on pathology. Moreover, several phase I and II studies have shown that ICIs combined with concurrent CRT may increase the rate of pathologic complete response for resectable ESCC, but they lack long-term follow-up results. In unresectable cases, the combination of camrelizumab and definitive CRT showed promising results against ESCC in a phase Ib trial. Phase III randomized trials are currently ongoing to investigate the survival benefits of ICIs combined with neoadjuvant or definitive CRT, and they will clarify the role of immunotherapy in locally advanced ESCC. Additionally, valid biomarkers to predict tumor response and survival outcomes need to be further explored.
abstract_id: PUBMED:31258850
Neoadjuvant nimotuzumab plus chemoradiotherapy compared to neoadjuvant chemoradiotherapy and neoadjuvant chemotherapy for locally advanced esophageal squamous cell carcinoma. Neoadjuvant therapy improves long-term locoregional control and overall survival after surgical resection for esophageal cancer, and neoadjuvant chemotherapy (nCT) or neoadjuvant chemoradiotherapy (nCRT) are commonly used in clinical practice. Nimotuzumab is a humanized monoclonal antibody against epidermal growth factor receptor (EGFR), but the efficacy of nimotuzumab added to nCRT for esophageal cancer is uncertain. We conducted this retrospective study in which neoadjuvant nimotuzumab combined with chemoradiotherapy (Nimo-nCRT) was compared with nCRT and nCT for patients with potentially resectable locally advanced esophageal squamous cell carcinoma. One hundred ninety-five patients received neoadjuvant therapy and 172 (88.2%) underwent esophagectomy. Surgical resection was performed in 94.4% after Nimo-nCRT, versus 92.5% after nCRT and 83.5% after nCT (P = 0.026). The R0 resection rate was 100% after Nimo-nCRT, 95.9% after nCRT and 92.6% after nCT (P = 0.030). Pathological complete response (pCR) was achieved in 41.2% after Nimo-nCRT, versus 32.4% after nCRT and 14.8% after nCT (P = 0.0001). Lymph-node metastases were observed in 29.4% in the Nimo-nCRT group, versus 21.6% in the nCRT group and 35.8% in the nCT group (P = 0.093). More patients in the Nimo-nCRT and nCRT groups developed grade 3 esophagitis compared with those in the nCT group (P = 0.008). There was no difference in surgical complications between the treatment groups. nCRT results in improved R0 resection, a higher pCR rate, and a lower frequency of lymph node metastases compared to nCT; adding nimotuzumab to nCRT is safe and appears to facilitate complete resection and increase the pCR rate.
abstract_id: PUBMED:37072306
Neoadjuvant therapy in locally advanced esophageal squamous cell carcinoma. The efficacy of surgery alone for locally advanced esophageal squamous cell carcinoma (ESCC) is limited. In-depth studies concerning combined therapy for ESCC have been carried out worldwide, especially on the neoadjuvant treatment model, including neoadjuvant chemotherapy (nCT), neoadjuvant chemoradiotherapy (nCRT), neoadjuvant chemotherapy combined with immunotherapy (nICT), and neoadjuvant chemoradiotherapy combined with immunotherapy (nICRT). With the advent of the immunotherapy era, nICT and nICRT have attracted much attention from researchers. An attempt was thus made to provide an overview of evidence-based research advances regarding neoadjuvant therapy for ESCC.
abstract_id: PUBMED:35720312
Tislelizumab Plus Chemotherapy Sequential Neoadjuvant Therapy for Non-cCR Patients After Neoadjuvant Chemoradiotherapy in Locally Advanced Esophageal Squamous Cell Carcinoma (ETNT): An Exploratory Study. Background: Esophageal squamous cell carcinoma (ESCC) remains a challenging malignant tumor with poor prognosis and limited treatment methods worldwide, and most patients are at a locally advanced stage at diagnosis. High recurrence and metastasis rates remain the main factors leading to the failure of the current standard treatment of neoadjuvant chemoradiotherapy plus surgery for resectable locally advanced ESCC. Improving the pathological complete response (pCR) rate may significantly benefit the survival of patients with resectable locally advanced ESCC after neoadjuvant therapy.
Methods: Tislelizumab plus sequential neoadjuvant chemotherapy was administered to non-clinical complete response (cCR) patients after neoadjuvant chemoradiotherapy for locally advanced ESCC. The patients then received surgery and adjuvant therapy according to the postoperative pathological results. Eighty patients with locally advanced ESCC were recruited for the study. The primary outcomes of the pCR rate and the incidence of adverse events will be analyzed completely within 24 months, and the secondary endpoints will include cCR rate, major pathological response rate, objective response rate, R0 resection rate, event-free survival, and overall survival.
Discussion: This study explored the safety and efficacy of tislelizumab plus chemotherapy sequential neoadjuvant therapy for non-cCR patients and provided a total neoadjuvant therapy model that can benefit patients with locally advanced ESCC.
Clinical Trial Registration: ClinicalTrials.gov NCT05189730. Registered: November 26, 2021, https://register.clinicaltrials.gov/prs/app/action/SelectProtocol?sid=S000BBD5&selectaction=Edit&uid=U0004UG3&ts=47&cx=e0cm59.
abstract_id: PUBMED:34281566
Diffusion-weighted MRI and 18F-FDG PET/CT in assessing the response to neoadjuvant chemoradiotherapy in locally advanced esophageal squamous cell carcinoma. Background: Neoadjuvant chemoradiotherapy (nCRT) followed by surgery is a currently widely used strategy for locally advanced esophageal cancer (EC). However, the conventional imaging methods have certain deficiencies in the evaluation and prediction of the efficacy of nCRT. This study aimed to explore the value of functional imaging in predicting the response to neoadjuvant chemoradiotherapy (nCRT) in locally advanced esophageal squamous cell carcinoma (ESCC).
Methods: Fifty-four patients diagnosed with locally advanced ESCC from August 2017 to September 2019 and treated with nCRT were retrospectively analyzed. DW-MRI scanning was performed before nCRT, at 10-15 fractions of radiotherapy, and 4-6 weeks after the completion of nCRT. 18F-FDG PET/CT scans were performed before nCRT and 4-6 weeks after the completion of nCRT. These 18F-FDG PET/CT and DW-MRI parameters and relative changes were compared between patients with pathological complete response (pCR) and non-pCR.
Results: A total of 8 of 54 patients (14.8%) were evaluated as having disease progression in the preoperative assessment. The remaining forty-six patients underwent operations, and the pathological assessments of the surgical resection specimens demonstrated pathological complete response (pCR) in 10 patients (21.7%) and complete response of the primary tumor (pCR-T) in 16 patients (34.8%). The change in metabolic tumor volume (∆MTV) and change in total lesion glycolysis (∆TLG) were significantly different between pCR and non-pCR patients. The SUVmax-Tpost, MTV-Tpost, and TLG-Tpost of esophageal tumors in 18F-FDG PET/CT scans after neoadjuvant chemoradiotherapy, and the ∆SUVmax-T and ∆MTV-T, were significantly different between pCR-T and non-pCR-T patients. The esophageal tumor apparent diffusion coefficient (ADC) increased after nCRT; the ADCduring, ADCpost and ∆ADCduring were significantly different between the pCR-T and non-pCR-T groups. ROC analyses showed that the model that combined ADCduring with TLG-Tpost had the highest AUC (0.914) for pCR-T prediction, with 90.0% and 86.4% sensitivity and specificity, respectively.
Conclusion: 18F-FDG PET/CT is useful for re-staging after nCRT and for surgical decision. Integrating parameters of 18F-FDG PET/CT and DW-MRI can identify pathological response of primary tumor to nCRT more accurately in ESCC.
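The results above rely on ROC analysis, reporting that a model combining ADCduring with TLG-Tpost reached an AUC of 0.914 with 90.0% sensitivity and 86.4% specificity for predicting pCR-T. The following Python sketch shows, on simulated stand-in data only, how two imaging parameters are commonly combined in a logistic model and evaluated with ROC/AUC and the Youden index; it does not reproduce the study's data or results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

# Simulated stand-in data: two imaging parameters per patient and a binary
# label for pathological complete response of the primary tumor (pCR-T).
rng = np.random.default_rng(42)
n = 46
response = rng.integers(0, 2, n)
adc_during = 1.6 + 0.6 * response + rng.normal(0.0, 0.3, n)  # higher mid-treatment ADC in responders
tlg_post = 40 - 25 * response + rng.normal(0.0, 8.0, n)      # lower residual TLG in responders
X = np.column_stack([adc_during, tlg_post])

# Combine the two predictors in a logistic model and score each patient.
model = LogisticRegression().fit(X, response)
scores = model.predict_proba(X)[:, 1]

auc = roc_auc_score(response, scores)
fpr, tpr, _ = roc_curve(response, scores)
best = np.argmax(tpr - fpr)  # Youden index picks an operating point
print(f"AUC = {auc:.3f}, sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```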
abstract_id: PUBMED:36466925
Pathologic responses and surgical outcomes after neoadjuvant immunochemotherapy versus neoadjuvant chemoradiotherapy in patients with locally advanced esophageal squamous cell carcinoma. Background: Currently, the role of immunotherapy in neoadjuvant setting for patients with locally advanced esophageal squamous cell carcinoma (ESCC) is gradually attracting attention. Few studies compared the efficacy of neoadjuvant immunochemotherapy (NICT) and neoadjuvant chemoradiotherapy (NCRT). Our study aimed to compare treatment response and postoperative complications after NICT followed by surgery with that after conventional NCRT in patients with locally advanced ESCC.
Methods: Of 468 patients with locally advanced ESCC, 154 received conventional NCRT, whereas 314 received NICT. Treatment response, postoperative complications and mortality between two groups were compared. Pathological response of primary tumor was evaluated using the Mandard tumor regression grade (TRG) scoring system. Pathological complete response (pCR) of metastatic lymph nodes (LNs) was defined as no viable tumor cell within all resected metastatic LNs. According to regression directionality, tumor regression pattern was summarized into four categories: type I, regression toward the lumen; type II, regression toward the invasive front; type III, concentric regression; and type IV, scattered regression. Inverse probability propensity score weighting was performed to minimize the influence of confounding factors.
Results: After adjusting for baseline characteristics, the R0 resection rates (90.9% vs. 89.0%, P=0.302) and pCR (ypT0N0) rates (29.8% vs. 34.0%, P=0.167) were comparable between two groups. Patients receiving NCRT showed lower TRG score (P<0.001) and higher major pathological response (MPR) rate (64.7% vs. 53.6%, P=0.001) compared to those receiving NICT. However, NICT brought a higher pCR rate of metastatic LNs than conventional NCRT (53.9% vs. 37.1%, P<0.001). The rates of type I/II/III/IV regression patterns were 44.6%, 6.8%, 11.4% and 37.1% in the NICT group, 16.9%, 8.2%, 18.3% and 56.6% in the NCRT group, indicating a significant difference (P<0.001). Moreover, there were no significant differences in the incidence of total postoperative complications (35.8% vs. 39.9%, P=0.189) and 30-d mortality (0.0% vs. 1.1%, P=0.062).
Conclusion: For patients with locally advanced ESCC, NICT showed an R0 resection rate and pCR (ypT0N0) rate comparable to those of conventional NCRT, without an increased incidence of postoperative complications or mortality. Notably, NICT followed by surgery might bring a promising treatment response in metastatic LNs.
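The methods above mention inverse probability propensity score weighting to reduce confounding between the NICT and NCRT groups. As a rough, hypothetical sketch (not the authors' code), the snippet below estimates propensity scores with logistic regression on simulated covariates and compares a weighted outcome rate between two groups; every variable and value is invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 468
# Hypothetical baseline covariates (e.g., age and clinical stage) and treatment assignment.
age = rng.normal(62.0, 8.0, n)
stage = rng.integers(2, 5, n)
p_treat = 1 / (1 + np.exp(-(-0.03 * (age - 62) + 0.3 * (stage - 3))))
treated = rng.binomial(1, p_treat)                 # 1 = NICT-like group, 0 = NCRT-like group
outcome = rng.binomial(1, 0.25 + 0.05 * treated)   # hypothetical binary outcome (e.g., pCR)

# Propensity score: probability of treatment given the covariates.
X = np.column_stack([age, stage])
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Inverse probability of treatment weights: 1/ps for treated, 1/(1 - ps) for controls.
weights = np.where(treated == 1, 1 / ps, 1 / (1 - ps))
rate_treated = np.average(outcome[treated == 1], weights=weights[treated == 1])
rate_control = np.average(outcome[treated == 0], weights=weights[treated == 0])
print(f"Weighted outcome rate: treated {rate_treated:.3f} vs. control {rate_control:.3f}")
```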
abstract_id: PUBMED:35463306
Neoadjuvant Therapy for Locally Advanced Esophageal Cancers. Esophageal carcinoma is one of the most aggressive malignant diseases. At present, neoadjuvant chemotherapy and neoadjuvant chemoradiotherapy are regarded as the standard modalities for the treatment of locally advanced esophageal cancers based on several landmark trials. However, the optimal regimen, radiation dose, and surgical intervals are uncertain, and the rate of recurrence after neoadjuvant therapy is high. Patients receiving neoadjuvant therapy who reach a pathological complete response have been reported to have a better survival benefit and a lower recurrence risk than those without a pathological complete response. Nevertheless, less than half of patients will reach a pathological complete response after neoadjuvant therapy, and the methods to accurately evaluate the efficacy of neoadjuvant therapy are limited. Immune checkpoint inhibitors have been recommended for the treatment of advanced esophageal cancers. Recently, research has begun to evaluate the safety and efficacy of immunotherapy combined with neoadjuvant therapy. Here, we review and discuss the development of neoadjuvant therapy for locally advanced esophageal cancers and unsolved clinical problems.
Answer: The investigation into the correlation between molecular markers and response to neoadjuvant chemoradiotherapy in locally advanced squamous cell esophageal cancer (ESCC) has yielded inconclusive results. A study aimed to evaluate the expression of epidermal growth factor receptor (EGFR), p53, p21, and thymidylate synthase (TS) in pretherapy biopsy specimens of locally advanced ESCC and correlate these markers with response to neoadjuvant chemoradiotherapy. However, the results indicated no statistically significant difference between tumor marker expression and clinical response to chemoradiation, nor between tumor marker expression and tumor regression grade (TRG). This suggests that the expression of EGFR, TS, p21, and p53 may not be predictive of the response to neoadjuvant chemoradiotherapy in ESCC (PUBMED:23335529).
Therefore, based on the available abstract, it can be concluded that there is no clear correlation between the expression of the investigated molecular markers (EGFR, p53, p21, and TS) and the response to neoadjuvant chemoradiotherapy in patients with locally advanced squamous cell esophageal cancer. Further research may be needed to identify potential biomarkers that could predict treatment response in this patient population. |
Instruction: Potential caregivers for homebound elderly: more numerous than supposed?
Abstracts:
abstract_id: PUBMED:19607768
Potential caregivers for homebound elderly: more numerous than supposed? Background: This qualitative study examined the experiences and perspectives of caregivers of homebound elderly patients.
Methods: We performed in-depth, semistructured interviews with 22 caregivers (average age 59 years) of homebound elderly patients and analyzed them to determine major themes. The homebound patients were part of a house call program of a US academic medical center in Baltimore, Maryland.
Results: Caregiver relationships in our study were diverse: 41% were spouses or children, and 41% were unrelated to the homebound patient; 36% were male. We identified 3 themes: (1) caregiving has both positive and negative aspects, (2) caregiver motivation is heterogeneous, and (3) caregivers sometimes undergo transformation as a result of their caregiving experience.
Conclusion: Caregiver experience is varied. Interviewees reported a variety of motivations for becoming caregivers and both positive and negative aspects of the experience. Caregivers in this study were diverse with respect to sex and relationship to the patient, suggesting the pool of potential caregivers may be larger than previously thought.
abstract_id: PUBMED:33522364
Understanding the Daily Experiences and Perceptions of Homebound Older Adults and Their Caregivers: A Qualitative Study. More than 7.3 million older adults in the United States have difficulty leaving their homes or are completely homebound, yet little data exist on the experiences of homebound older adults and their caregivers. We conducted 30 semi-structured qualitative interviews with homebound older adults and caregivers recruited through home-based medical care practices in Baltimore and San Francisco. Thematic template analyses revealed that homebound older adults experience varying degrees of independence in activities of daily living, although their degree of dependence increases over time. Caregivers have a multifaceted, round-the-clock role. Both patients and caregivers experience burdens including social isolation and guilt. Navigating medical care and caregiving was further complicated by the complexity of the U.S. health care system; however, home-based medical care was viewed as a high-quality alternative to hospitals or nursing homes. Our findings suggest that providers and health care systems should expand the availability and accessibility of home-based care and improve caregiver support opportunities.
abstract_id: PUBMED:25442810
Home visits by care providers--influences on health outcomes for caregivers of homebound older adults with dementia. Homebound older adults benefit from provider home visits and there is an increasing need for these visits. A study was conducted to evaluate the effect of provider (MD, NP) visits on the caregivers of homebound older adults. Fifty-five caregivers were interviewed to determine any difference in health measures between those whose care recipients had access to a provider and those who did not. The participants completed the SF-36, questionnaires on demographics and access, and one open-ended question. Analysis revealed statistically significant differences between the two groups of caregivers. The caregivers whose care recipients did not have access to a provider showed poorer health measures. Providers may have a positive impact on caregivers' health as well as that of the homebound. Developing new and innovative ways to support caregivers while providing care for our patients will be even more important as the population ages and the numbers of available caregivers decrease.
abstract_id: PUBMED:33216908
Attitudes of Homebound Older Adults and Their Caregivers Toward Research and Participation as Research Advisors. Background And Objectives: Homebound older adults and their caregivers have not historically been engaged as advisors in patient-centered outcomes research. This study aimed to understand the attitudes of homebound older adults and their caregivers toward research and participation as research advisors.
Research Design And Methods: Descriptive thematic analysis of semistructured interviews conducted with 30 homebound older adults and caregivers recruited from home-based medical care practices. Interview questions addressed opinions on research and preferences for engaging as research advisors.
Results: Of 30 participants, 22 were female, 17 were people of color, and 11 had Medicaid. Two themes emerged related to perceptions of research overall: (a) utility of research and (b) relevance of research. Overall, participants reported positive attitudes toward research and felt that research could affect people like them. Three themes emerged related to participating as research advisors: (a) motivators, (b) barriers, and (c) preferences. Participants were open to engaging in a variety of activities as research advisors. Most participants were motivated by helping others. Common barriers included time constraints and caregiving responsibilities, and physical barriers for homebound individuals. Participants also reported fears such as lacking the skills or expertise to contribute as advisors. Many were willing to participate if these barriers were accommodated and shared their communication preferences.
Discussion And Implications: Diverse homebound older adults and caregivers are willing to be engaged as research advisors and provided information to inform future engagement strategies. Findings can inform efforts to meet new age-inclusive requirements of the National Institutes of Health.
abstract_id: PUBMED:36197037
Assessing the wellbeing of family caregivers of multimorbid and homebound older adults-A scoping literature review. Background: The prevalence of homebound older adults in the United States more than doubled during the COVID-19 pandemic with greater burden on family caregivers. Higher caregiver burden, more specifically higher treatment burden, contributes to increased rates of nursing home placement. There exist a multitude of tools to measure caregiver well-being and they vary substantially in their focus. Our primary aim was to perform a scoping literature review to identify tools used to assess the facets of caregiver well-being experienced by caregivers of persons with multiple chronic conditions (MCC) with a special focus on those caregivers of homebound adult patients.
Methods: The search was conducted in accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) extension for scoping reviews. After refining search terms, searches were performed of the peer-reviewed and gray literature.
Results: After removal of duplicate studies, a total of 5534 articles were screened for relevance to our study. After all screening and review were completed, 377 articles remained for full review, which included 118 different quantitative tools and 20 different qualitative tools. We identified the 15 most commonly utilized tools in patients with MCC. The Zarit Burden Interview was the most commonly used tool across all of the studies. Of the 377 studies, only eight focused on the homebound population and included 13 tools in total.
Conclusions: Building on prior categorization of well-being tools, our work has identified several tools that can be used to measure caregiver well-being with a specific focus on those caregivers providing support to older adults with MCC. Most importantly, we have identified tools that can be used to measure caregiver well-being of family caregivers providing support to homebound older adults, an ever-growing population who are high cost and high utilizers of health care services.
abstract_id: PUBMED:12119652
Computers and caregiving: reaching out and redesigning interventions for homebound older adults and caregivers. This article discusses computer resources for homebound older adults and informal caregivers as an intervention to promote social support and mental health. Published information related to a computer network designed as an intervention for informal caregivers of persons with Alzheimer's disease is included. This information suggests that homebound older adults and informal caregivers can gain valuable information, confidence, and support by using computer resources. A review of the literature supported those findings and suggested that computer technology can facilitate continuing education and the refinement of skills for nurses. Implications for the use of computer resources in nursing education, practice, and research are presented.
abstract_id: PUBMED:38324373
Expectation, Attitude, and Barriers to Receiving Telehomecare Among Caregivers of Homebound or Bedridden Older Adults: Qualitative Study. Background: In recent years, telehomecare has become an increasingly important option for health care providers to deliver continuous care to their patients.
Objective: This study aims to explore the expectations, attitudes, and barriers to telehomecare among caregivers of homebound or bedridden older adults.
Methods: This qualitative study used semistructured interviews to explore caregivers' perspectives on telehomecare for homebound or bedridden older adults. The study adhered to the SRQR (Standards for Reporting Qualitative Research) guidelines. Participants were selected using convenience sampling from caregivers of homebound or bedridden older adults with experience in both in-person home visits and telehomecare services provided by the Department of Family Medicine at Chiang Mai University, in an urban area of Chiang Mai Province in Northern Thailand. Semistructured interviews were conducted. The interviews were audio recorded with participant consent and transcribed verbatim. The framework method was used, involving multiple readings of transcripts to facilitate familiarization and accuracy checking. The study used the technology acceptance model and comprehensive geriatric assessment as the analytical framework.
Results: The study included 20 caregivers of older adult patients. The patients were predominantly female (15/20, 75%), with an average age of 86.2 years. Of these patients, 40% (n=8) of patients were bedridden, and 60% (n=12) of patients were homebound. Caregivers expressed generally positive attitudes toward telehomecare. They considered it valuable for overall health assessment, despite recognizing certain limitations, particularly in physical assessments. Psychological assessments were perceived as equally effective. While in-person visits offered more extensive environmental assessments, caregivers found ways to make telehomecare effective. Telehomecare facilitated multidisciplinary care, enabling communication with specialists. Caregivers play a key role in care planning and adherence. Challenges included communication issues due to low volume, patient inattention, and faulty devices and internet signals. Some caregivers helped overcome these barriers. The loss of information was mitigated by modifying signaling equipment. Technology use was a challenge for some older adult caregivers. Despite these challenges, telehomecare offered advantages in remote communication and resolving scheduling conflicts. Caregivers varied in their preferences. Some preferred in-person visits for a broader view, while others favored telehomecare for its convenience. Some had no strong preference, appreciating both methods, while others considered the situation and patient conditions when choosing between them. Increased experience with telehomecare led to more confidence in its use.
Conclusions: Caregivers have positive attitudes and high expectations for telehomecare services. Although there may be barriers to receiving care through this mode, caregivers have demonstrated the ability to overcome these challenges, which has strengthened their confidence in telehomecare. However, it is important to enhance the skills of caregivers and health care teams to overcome barriers and optimize the use of telehomecare.
abstract_id: PUBMED:31242823
Homebound Status and the Critical Role of Caregiving Support. The homebound population relies on both paid and family caregivers to meet their complex care needs. In order to examine the association between intensity of caregiving support and leaving the home, we identified a population of community-dwelling, homebound Medicare beneficiaries age ≥65 (n = 1,852) enrolled in the 2015 National Health and Aging Trends Study and measured the support they received from paid and family caregivers. Those who had ≥20 h of caregiving support per week had 50% less odds of being "exclusively homebound" (rarely or never leave home) (OR 0.56, p < .01). Policies that facilitate increased support for family caregivers and better access to paid caregivers may allow homebound individuals who would otherwise be isolated at home to utilize existing community-based long-term care services and supports.
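A quick arithmetic note on the odds ratio quoted above, added only as an illustration: the OR value is taken from the abstract, the interpretation is the standard reading of an odds ratio, and the snippet is not part of the study.

odds_ratio = 0.56                      # OR reported for >= 20 h/week of caregiving support
reduction = (1 - odds_ratio) * 100     # percentage reduction in the odds
print(f"Odds of being exclusively homebound reduced by about {reduction:.0f}%")
# prints ~44%, i.e. the odds are roughly halved, consistent with the abstract's "50% less odds" wording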
abstract_id: PUBMED:33686794
Preliminary investigation of family caregiver burden and oral care provided to homebound older patients. Objectives: Family caregivers play an important role in maintaining the oral health of homebound older adults. Thus, this preliminary study investigated family caregivers' burdens and the oral care they provide to homebound older patients.
Material And Methods: A cross-sectional survey was conducted. A questionnaire was distributed to 230 family caregivers of homebound older patients. We used the Japanese version of the Zarit Burden Interview (J-ZBI) to measure caregiver burden. The cut-off score for the J-ZBI was 21 points. Caregivers with a care burden score below 21 points formed the mild group, while those scoring 21 points or more were included in the moderate/severe group. The differences between the groups were examined. The implementation status of oral care was assessed by the amount of time caregivers spent providing oral care and related concerns. The degree of independence for homebound older patients was measured using the Barthel Index. Multiple logistic regression analyses were conducted to determine the factors associated with the severity of caregiver burden.
Results: A total of 114 caregivers returned the questionnaires by mail (response rate: 49.6%). The moderate/severe care burden group represented 80.7% of the caregivers. A multiple logistic regression analysis revealed that the level of patient independence and time spent performing oral care were significantly associated with the severity of caregiver burden.
Conclusions: The results show that family caregivers experiencing high caregiver burden spent less time providing oral care than caregivers who reported less caregiver burden. Thus, support for oral health management from oral health professionals is considered necessary for families of homebound older patients who carry a high caregiver burden.
abstract_id: PUBMED:10337848
Cognitive-behavioral intervention for homebound caregivers of persons with dementia. Background: Family caregivers experience considerable stress in their management of specific behaviors of persons with dementia.
Objective: To examine the effects of an 8-week cognitive-behavioral (C-B) intervention tailored to the specific deficits of persons with dementia (PWDs) on selected outcomes for homebound caregivers and the functional status of the PWD.
Method: The design was a two-group randomized trial with measures taken at baseline, 4 weeks, 8 weeks, and 12 weeks. Subjects were 65 caregiver-care recipient dyads. A majority of the 65 female caregivers were spouses with a mean age of 66.5 years. Thirty-one women who received the C-B intervention were compared to 34 who received the attention-only (A-O) placebo telephone calls. Coping strategies, perceptions of caregiver burden and satisfaction, and emotional and physical health were assessed at the four points in time.
Results: Results indicated a time by group interaction in depression, with less depression in the C-B than the A-O group. Both groups experienced a significant reduction in anxiety, and a significant consistent decrease in satisfaction over time. Both groups also demonstrated decreased use of emotion-focused coping strategies.
Conclusion: The results suggest a plausible effectiveness of the C-B intervention in preventing the progression of depression. Suggestions for future research are provided.
Answer: The pool of potential caregivers for homebound elderly patients may indeed be larger than previously thought. A qualitative study examining the experiences and perspectives of caregivers of homebound elderly patients found that caregiver relationships were diverse, with 41% being spouses or children and another 41% being unrelated to the homebound patient. Additionally, 36% of the caregivers were male, suggesting that caregiving is not limited to traditional roles or family ties (PUBMED:19607768). This diversity in caregiver relationships indicates that there may be a broader community of individuals who are willing and able to provide care for the homebound elderly than is commonly assumed. |
Instruction: Prevalence of measles susceptibility among health care workers in a UK hospital. Does the UK need to introduce a measles policy for its health care workers?
Abstracts:
abstract_id: PUBMED:14514907
Prevalence of measles susceptibility among health care workers in a UK hospital. Does the UK need to introduce a measles policy for its health care workers? Objectives: First, to determine the prevalence of measles non-immunity in a group of health care workers (HCW), and secondly, to investigate what pre-employment screening for measles is carried out by NHS occupational health departments.
Methods: Two hundred and eighteen HCWs with patient contact on the medical wards at Addenbrooke's hospital provided an oral fluid sample and answered a questionnaire. A postal survey of Association of National Health Occupational Physicians Society (ANHOPS) members was conducted to assess whether UK NHS Trusts identify measles non-immune individuals.
Results: Of the HCWs tested, 3.3% were found to be non-immune to measles (both oral fluid and confirmatory serum sample were measles IgG negative). Less than one third of a sample of 80 NHS occupational health departments enquired about measles immunity.
Conclusion: The prevalence of measles non-immune health care workers is low, but with a fall in uptake of MMR immunization and increased likelihood of measles outbreaks, it is important to identify these at-risk individuals. Serum testing is the most reliable method to use. Oral fluid testing and history of measles disease or vaccination are unreliable methods of identifying non-immune individuals. To achieve complete immunity, it is cost-effective to screen and then offer immunization. NHS trusts vary greatly in their measles policies for health care workers.
abstract_id: PUBMED:33151246
Susceptibility to measles among health workers in a university hospital in southern Italy. Objectives: Measles still has a high impact on the health of the population in Italy and therefore requires a strong commitment to prevention at the national level. In addition to Italy, measles outbreaks have also been reported in other EU countries, with a high number of cases and a rapid spread of the disease even in the nosocomial context between patients and health personnel. The aim of this study is to evaluate the prevalence of measles susceptibility in a group of health workers working at a university hospital in southern Italy.
Materials And Methods: A seroepidemiological study was conducted on 458 health workers. Measles antibody IgG and IgM levels were evaluated by immunoenzymatic testing.
Results: The highest percentage of susceptible subjects was found among those aged ≤30 years, with a statistically significant difference compared to the age group ≥51 years. With regard to gender, susceptibility to measles in males was significantly higher than in females (p<0.05). Additional statistically significant differences were found between the age groups in both genders.
Conclusions: Although the results show that most health workers are immune to measles, a 20% susceptibility certainly represents a risk for the spread of the disease among operators and patients. Vaccination and control of suspected cases, especially in community settings such as the hospital environment, are the main measures to prevent the transmission and spread of the disease.
abstract_id: PUBMED:33086853
Seroprevalence of IgG antibodies against measles in health care workers of the Strakonice Hospital. Aim: Owing to mandatory vaccination, introduced in the Czech Republic in 1969, only a few measles cases were reported annually until recently. However, a rapid increase in cases has been recorded in the last two years. In contrast to the pre-vaccination era, in recent measles outbreaks many cases have been reported among vaccinated adults. Health care workers (HCWs) are particularly at high risk of contact with measles. Therefore, to minimize transmission in health care settings, many hospitals evaluate the measles immune status of their HCWs and offer free vaccination to those whose anti-measles antibody levels are too low. The aim of this study was to evaluate the seroprevalence of IgG antibodies against measles in all HCWs of the Strakonice Hospital.
Materials And Methods: Anti-measles IgG serum levels were measured using quantitative ELISA.
Results: Almost all HCWs born before 1969, when the mandatory vaccination started, showed high levels of IgG antibodies (93.5%). By contrast, among previously vaccinated individuals, only 64.8% were seropositive. A high percentage of seronegative or borderline samples was observed even in the age groups who had previously been vaccinated with two doses.
Conclusions: In total, 25.4% of all HCWs of the Strakonice Hospital had too low anti-measles IgG levels, and most of them were immunized with one dose of MMR vaccine. Prioritized vaccination substantially decreased the number of staff at higher risk of measles acquisition and, at the same time, of those who would need to be quarantined after exposure.
abstract_id: PUBMED:7625214
The prevalence of measles, rubella, mumps and chickenpox antibodies in a population of health care workers. We present an epidemiological and serological study of 409 health care workers randomly selected from the 4,103 workers of the University Hospital of Coimbra. A low level of susceptibility was found for measles (1.2%; 95% confidence interval (95% CI): 0.15-2.23%), rubella (2.4%; 95% CI: 0.9-3.9%) and varicella (3.2%; 95% CI: 1.5-4.7%), and a very high one for mumps (17.3%; 95% CI: 13.7-21.1%). Historical information was found to be ineffective in predicting immune status for all of these diseases. An economic analysis of preventive measures was performed. A mumps vaccination policy for health care workers is recommended, and the merits of measles and rubella vaccination are discussed in light of the results of this study. Continuous monitoring of these diseases is needed in anticipation of the changes in epidemiology that are expected to occur with childhood vaccination.
abstract_id: PUBMED:37321897
National vaccination policies for health workers - A cross-sectional global overview. Background: Immunization is essential for safeguarding health workers from vaccine-preventable diseases (VPDs) that they may encounter at work; however, information about the prevalence and scope of national policies that protect health workers through vaccination is limited. Understanding the global landscape of health worker immunization programmes can help direct resources, assist decision-making and foster partnerships as nations consider strategies for increasing vaccination uptake among health workers.
Methods: A one-time supplementary survey was distributed to World Health Organization (WHO) Member States using the WHO/United Nations Children's Fund (UNICEF) Joint Reporting Form on Immunization (JRF). Respondents described their 2020 national vaccination policies for health workers - detailing VPD policies and characterising technical and funding support, monitoring and evaluation activities and provisions for vaccinating health workers in emergencies.
Results: A total of 53% (103/194) of Member States responded and described health worker policies: 51 had a national policy for vaccinating health workers; 10 reported plans to introduce a national policy within 5 years; 20 had subnational/institutional policies; 22 had no policy for vaccinating health workers. Most national policies were integrated with occupational health and safety policies (67%) and included public and private providers (82%). Hepatitis B, seasonal influenza and measles were most frequently included in policies. Countries both with and without national vaccination policies reported monitoring and reporting vaccine uptake (43 countries), promoting vaccination (53 countries) and assessing vaccine demand, uptake or reasons for undervaccination (25 countries) among health workers. Mechanisms for introducing a vaccine for health workers in an emergency existed in 62 countries.
Conclusion: National policies for vaccinating health workers were complex and context specific with regional and income-level variations. Opportunities exist for developing and strengthening national health worker immunization programmes. Existing health worker immunization programmes might provide a foothold on which broader health worker vaccination policies can be built and strengthened.
abstract_id: PUBMED:36560423
Beliefs and Sociodemographic and Occupational Factors Associated with Vaccine Hesitancy among Health Workers. Introduction: Vaccine hesitancy has been implicated in the low-vaccination coverage in several countries. Knowledge about vaccine hesitancy predictors in health workers is essential because they play a central role in communication about the importance and safety of vaccines. This study aimed to assess beliefs and sociodemographic and occupational factors associated with vaccine hesitancy in health workers. Methods: This was a cross-sectional study among 453 health workers in primary and medium complexity services in a municipality in the state of Bahia, Brazil. The variable vaccine hesitancy was operationalized based on the answers related to incomplete vaccination against hepatitis B, measles, mumps and rubella, and diphtheria and tetanus. Associations between variables were expressed as prevalence ratios (PR) and their respective 95% confidence intervals (CI). Results: Endemic disease combat agents, administrative service workers, and support staff had the highest levels of vaccine hesitancy. Among the analyzed variables, the following were associated with vaccine hesitancy: working in secondary health care services (PR: 1.21; CI: 1.07-1.36), working as an endemic disease combat agent (PR = 1.42; 95% CI: 1.165-1.75), not sharing information about vaccines on social media (PR = 1.16; 95% CI: 1.05-1.28), distrusting information about vaccinations (PR: 0.86; CI: 0.75-0.99), and not feeling safe receiving new vaccines (PR = 1.16; 95% CI: 1.06-1.28). Conclusions: Strategies to enhance confidence in vaccination among health workers should consider differences in occupations and their working settings. Improving vaccination-related content in training and continuing education activities and facilitating access to onsite vaccinations at the workplace are crucial elements to reduce vaccine hesitancy among health workers.
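As a purely illustrative aside (the counts below are invented and the snippet is not taken from the study), a crude prevalence ratio with a 95% confidence interval — the effect measure reported in the abstract above — can be computed from a 2x2 table as follows:

import math

a, n1 = 60, 150   # hypothetical: hesitant / total in the exposed group
b, n0 = 90, 303   # hypothetical: hesitant / total in the unexposed group

pr = (a / n1) / (b / n0)                    # crude prevalence ratio
se = math.sqrt(1/a - 1/n1 + 1/b - 1/n0)     # standard error of log(PR)
lo = math.exp(math.log(pr) - 1.96 * se)
hi = math.exp(math.log(pr) + 1.96 * se)
print(f"PR = {pr:.2f} (95% CI {lo:.2f}-{hi:.2f})")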
abstract_id: PUBMED:19095425
Vaccination coverage among health care workers in the pediatric emergency and intensive care department of Edouard Herriot hospital in 2007, against influenza, pertussis, varicella, and measles Aim: The aim of this study was to determine the vaccination coverage among the medical and paramedical health care workers of the pediatric intensive care and emergency department of Edouard Herriot hospital in Lyon with respect to influenza, pertussis, varicella, and measles, four diseases that are transmitted by air and for which vaccination is recommended.
Method: During February and March 2007, a questionnaire was handed directly to 123 health care workers by a medical student who was working there or available in the intensive care unit.
Results: The response rate to the questionnaire was 68.3%. The vaccination coverage against influenza was 42.8%; men and medical health care workers were better vaccinated. With respect to vaccination against pertussis, one third had received an injection in adulthood; adults under age 30 and medical health care workers were better vaccinated, but the difference was not statistically significant. Ten health care workers were not vaccinated against measles and had no history of the disease: only 1 had had a measles serology and none were vaccinated. Eleven had no history of varicella: 6 had had a varicella serology and none were vaccinated.
Conclusions: Vaccination coverage against influenza is higher than what has been reported in the literature, possibly because of a mobile vaccination campaign against influenza made during winter 2006 in this pediatric department. Vaccination coverage against pertussis is encouraging and probably the consequence of an awareness of the gravity of the disease among infants. Individual information is necessary for health care workers on the nosocomial risk for influenza and pertussis in infants, and vaccination must be proposed. Serology against varicella and measles is compulsory for all health care workers with no history and no vaccination against these 2 diseases, to track and vaccinate the nonimmunized personnel. Occupational physicians have a very important role to play in meeting this goal.
abstract_id: PUBMED:30833540
Study of prevalence of protection against measles in health workers of Murcia Health Service. Objective: In April 2017, two cases of measles were reported in one of the basic health zones (ZBS) of the Region of Murcia. The Occupational Risk Prevention Services of the Murcian Health Service (SMS) were urged to review the immunological status of health workers born in or after 1971 from the Primary Care Centers, referral hospitals and emergency services covering the affected area, with the general objective of preventing a possible measles outbreak among these personnel, by checking the workers' protection against this disease (vaccination status and/or serological (IgG) status) and offering the vaccine to non-immune workers.
Methods: A descriptive study of the prevalence of protection against measles in this group of workers was carried out during the period from January to February 2017. Initially, the records of workers for whom data were available were reviewed; workers were then called in either to provide vaccination data (90) or, when no data were available, to have serology drawn (138).
Results: 408 medical records/workers were reviewed. At the end of the study, we had vaccination data for 22.1% of the workers and serology data for 33.8%. Of the workers for whom we had data, 91.5% were protected against measles.
Conclusions: We can conclude that the coverage among our workers is lower than that proposed by the Measles and Rubella Elimination Plan, so a program to promote vaccination against this disease among health personnel would be advisable.
abstract_id: PUBMED:36298438
Susceptibility towards Chickenpox, Measles and Rubella among Healthcare Workers at a Teaching Hospital in Rome. Immunization is the best protection against chickenpox, measles and rubella. It is important to identify and immunize susceptible healthcare workers to prevent and control hospital infections. Our aim was to estimate the susceptibility level of healthcare workers at a Teaching Hospital in Rome concerning these diseases and the factors associated with susceptibility. Methods: a cross sectional study was carried out at the Department of Occupational Medicine of the Umberto I General Hospital of Rome. Participants were recruited during routine occupational health surveillance. Regarding inclusion criteria, the following professionals were considered: doctors, nurses, laboratory technicians and other health professionals. Regarding exclusion criteria, patients with HIV, cancer or diseases of the immune system, and those with acute illness or fever above 38.5 °C, were not included in the study. A blood sample was tested for the presence of antibodies against measles, rubella and chicken pox. Results: 1106 healthcare professionals were involved in the study (41.8% nurses, 30.4% doctors, 12.3% laboratory technicians, 15.1% other health professionals): 25 (2.3%), 73 (6.6%) and 35 (3.2%) of these were susceptible to measles, rubella and chicken pox, respectively. The only variable associated with susceptibility to measles was age (p < 0.001). Furthermore, there was evidence of an association between the various susceptibilities, particularly between measles and chickenpox (OR: 4.38). Conclusion: this study showed that even if the majority of our healthcare professionals are immunized for MRV, the seronegativity of the non-immune ones must not be underestimated. All health professionals should be vaccinated to ensure safety for patients, especially the most vulnerable.
abstract_id: PUBMED:34165349
Screening and Vaccination Against Measles and Varicella Among Health Care Workers: A Cost-effectiveness Analysis. The aim of this study was to examine the most cost-effective strategy for screening and vaccinating measles- and varicella-susceptible health care workers (HCWs). A retrospective cost-effectiveness analysis was conducted at a tertiary hospital in Korea with 300 HCWs who were at high risk of infection. Self-reported histories of vaccinations, infectious diseases, and contact with such cases were collected. Serological tests for immunoglobulin G titers of measles and varicella were performed. Data were analyzed using analysis of variance, Kruskal-Wallis test, χ2 test, and Fisher exact test. Seropositivity rates were 96.3% for measles and 95.7% for varicella. Four different strategies (cases) for vaccination were investigated. Considering the progressive decline in antibody concentrations and the false-positive responses in self-reported histories, case 3, which involved administering 2-dose vaccinations to HCWs shown to be susceptible by antibody screening tests for both measles and varicella, was the most cost-effective strategy. Health care facilities should establish mandatory immunization policies that reduce the risk of transmission of vaccine-preventable diseases.
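To make the screen-versus-vaccinate trade-off concrete, here is a back-of-the-envelope sketch; the seropositivity figure comes from the abstract above, but the unit costs, which the abstract does not report, are hypothetical placeholders:

n_workers = 300
seropositive_measles = 0.963        # from the abstract
cost_igg_test = 15.0                # hypothetical cost per antibody screening test
cost_two_dose_course = 40.0         # hypothetical cost per 2-dose vaccination course

vaccinate_all = n_workers * cost_two_dose_course
screen_then_vaccinate = (n_workers * cost_igg_test
                         + n_workers * (1 - seropositive_measles) * cost_two_dose_course)
print(f"Vaccinate everyone:     {vaccinate_all:.0f}")
print(f"Screen, then vaccinate: {screen_then_vaccinate:.0f}")
# With high seroprevalence, screening first means far fewer people are vaccinated;
# whether it is cheaper overall depends entirely on the assumed test and vaccine costs.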
Answer: The prevalence of measles non-immunity among health care workers (HCWs) in a UK hospital was found to be 3.3%, as determined by a study conducted at Addenbrooke's hospital, where 218 HCWs provided oral fluid samples and answered a questionnaire. The study also revealed that less than one-third of 80 NHS occupational health departments inquired about measles immunity (PUBMED:14514907). This low prevalence of non-immune HCWs suggests that while the current risk is low, the potential for measles outbreaks, especially with a decline in MMR immunization uptake, necessitates the identification of at-risk individuals. The study concluded that serum testing is the most reliable method for identifying non-immune individuals and that it is cost-effective to screen and offer immunization to achieve complete immunity. The study also highlighted that NHS trusts vary greatly in their measles policies for HCWs.
Given these findings, it appears that there is a need for the UK to consider introducing or standardizing a measles policy for its health care workers. Such a policy would help to identify non-immune HCWs, provide them with the necessary immunizations, and potentially prevent the spread of measles within healthcare settings. This is particularly important given the increased likelihood of measles outbreaks and the critical role that HCWs play in both the potential transmission and control of infectious diseases. |
Instruction: Does epidural analgesia play a role in postpartum urinary incontinence?
Abstracts:
abstract_id: PUBMED:34586440
The effect of epidural analgesia on postpartum urinary incontinence: a systematic review. Introduction And Hypothesis: Urinary incontinence (UI) is common during pregnancy and in the postpartum period. Some women appear to recover their usual urinary function but in others UI persists, playing an important role in women's quality of life. Even though postpartum UI seems to have a multifactorial etiology, pregnancy, vaginal delivery, birth weight and parity are recognized as risk factors. This systematic review aims to evaluate the effect of one particular potential risk factor, epidural analgesia, on the development of postpartum UI in women with vaginal delivery.
Methods: PubMed, Cochrane and Scopus were searched for "epidural analgesia," "epidural anesthesia" or "epidural" and "urinary incontinence." All studies published until 31 July 2020 were considered. A total of 393 studies were identified, and 23 studies were included in the systematic review.
Results: Of the 23 articles included in this review, 21 showed a non-significant association between epidural analgesia and postpartum UI. One study found that the risk of postpartum SUI and of any type of UI was significantly, but only slightly, increased in women with epidural analgesia. Another study showed a protective effect but lacked control for important confounders.
Conclusion: There appears to be no association between epidural analgesia and postpartum UI. Therefore, pregnant women should not fear epidural analgesia because of a possible increased risk of UI.
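For readers who want to reproduce this kind of search, a minimal sketch of how the query described in the Methods above could be scripted against PubMed using Biopython's Entrez utilities; the e-mail address is a placeholder and the exact query string is an assumption based on the terms listed in the abstract, not the authors' actual search syntax:

from Bio import Entrez

Entrez.email = "you@example.org"  # placeholder; NCBI asks for a contact address
query = ('("epidural analgesia" OR "epidural anesthesia" OR epidural) '
         'AND "urinary incontinence"')
handle = Entrez.esearch(db="pubmed", term=query, retmax=500,
                        datetype="pdat", mindate="1900/01/01", maxdate="2020/07/31")
record = Entrez.read(handle)
handle.close()
print(record["Count"], record["IdList"][:10])  # number of hits and the first few PMIDs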
abstract_id: PUBMED:37716951
Association of epidural analgesia during labor and early postpartum urinary incontinence among women delivered vaginally: a propensity score matched retrospective cohort study. Background: Although epidural analgesia is considered the gold standard for pain relief during labor and is safe for maternity and fetus, the association between the epidural analgesia and pelvic floor disorders remains unclear. Thus we estimate the association between epidural analgesia and early postpartum urinary incontinence (UI).
Methods: A propensity score-matched retrospective cohort study was conducted at a university-affiliated hospital in Shanghai, China. Primiparous women with term, singleton, and vaginal delivery between December 2020 and February 2022 were included. UI was self-reported by maternity at 42 to 60 days postpartum and was classified by International Consultation on Incontinence Questionnaire-Urinary Incontinence Short Form (ICIQ-UI SF). Using logistic regression models, the associations between epidural analgesia and early postpartum UI were assessed.
Results: Among 5190 participants, 3709 (71.5%) chose epidural anesthesia during labor. Analysis of the propensity-matched cohort (including 1447 maternal pairs) showed that epidural anesthesia during labor was independently associated with UI in the early postpartum period (aOR 1.50, 95% CI 1.24-1.81). This association was mainly driven by stress UI (aOR 1.38, 95% CI 1.12-1.71) rather than urge UI (aOR 1.45, 95% CI 0.99-2.15) or mixed UI (aOR 1.52, 95% CI 0.95-2.45). Furthermore, we observed that the association between epidural anesthesia and UI was more pronounced among older women (≥ 35 y) and women with macrosomia (infant weight ≥ 4000 g), compared with their counterparts (both P for interaction < 0.01). After further analysis excluding the women with UI during pregnancy, the results remained largely consistent with the main analysis.
Conclusions: The findings indicate that epidural anesthesia was associated with SUI in the early postpartum period.
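To illustrate the kind of analysis the Methods above describe, here is a heavily simplified, hypothetical sketch of 1:1 propensity-score matching followed by a logistic outcome model; the file name, column names and covariate list are invented for the example and do not come from the study:

import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("cohort.csv")        # hypothetical columns: epidural, ui, age, bmi, birthweight
covs = ["age", "bmi", "birthweight"]

# Propensity score: modelled probability of receiving epidural analgesia
ps_fit = sm.Logit(df["epidural"], sm.add_constant(df[covs])).fit(disp=0)
df["ps"] = ps_fit.predict(sm.add_constant(df[covs]))

# Greedy nearest-neighbour 1:1 matching on the propensity score (no caliper, no checks)
treated = df[df["epidural"] == 1]
control = df[df["epidural"] == 0].copy()
kept = []
for idx, ps in treated["ps"].items():
    j = (control["ps"] - ps).abs().idxmin()
    kept.extend([idx, j])
    control = control.drop(j)
matched = df.loc[kept]

# Outcome model on the matched sample; exp(beta) approximates the adjusted odds ratio
out = sm.Logit(matched["ui"], sm.add_constant(matched[["epidural"] + covs])).fit(disp=0)
print("aOR:", np.exp(out.params["epidural"]))
print("95% CI:", np.exp(out.conf_int().loc["epidural"]).values)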
abstract_id: PUBMED:17596005
Does intrapartum epidural analgesia affect nulliparous labor and postpartum urinary incontinence? Background: The effect of epidural analgesia on nulliparous labor and delivery remains controversial. In addition, pregnancy and delivery have long been considered risk factors in the genesis of stress urinary incontinence (SUI). We sought to determine the effect of epidural analgesia and timing of administration on labor course and postpartum SUI.
Methods: Five hundred and eighty three nulliparous women were admitted for vaginal delivery at > or = 36 gestational weeks. We compared various obstetric parameters and SUI, at puerperium and 3 months postpartum, among patients who had epidural and non-epidural analgesia, and among those who had early (cervical dilatation < 3 cm) and late (cervical dilatation > or = 3 cm) epidural analgesia.
Results: When compared with the non-epidural analgesia group (n = 319), the group that received epidural analgesia (n = 264) had significant prolongation of the first and second stages of labor, and higher likelihood for instrumental and cesarean delivery but similar incidence of severe vaginal laceration and postpartum SUI. Except for the first stage of labor, early administration of epidural analgesia did not result in a significant influence on obstetric parameters or an increased incidence of postpartum SUI.
Conclusion: Our findings showed that epidural analgesia is associated with an increased risk of prolonged labor and of instrumental and cesarean delivery but is not related to increased postpartum SUI. Regarding the impact of the timing of epidural analgesia during labor, the first stage of labor appeared to last longer when analgesia was administered early rather than late.
abstract_id: PUBMED:26135762
Does epidural analgesia play a role in postpartum urinary incontinence? Medium-term results from a case-control study. Objective: To evaluate the medium-term effect of epidural analgesia (EA) on the possible onset of postpartum urinary incontinence (PUI).
Methods: We performed a single-centre, retrospective case-control study. At 8 weeks postpartum, we recruited a cohort of women who had a term singleton pregnancy with the foetus in cephalic presentation, and divided them into six groups: (1) vaginal delivery without episiotomy, without EA; (2) vaginal delivery without episiotomy, with EA; (3) vaginal delivery with episiotomy, without EA; (4) vaginal delivery with episiotomy, with EA; (5) emergency caesarean section without previous EA during labour and (6) emergency caesarean section with previous EA during labour. For each woman, we recorded age, Body Mass Index (BMI) and the results of the following questionnaires for urinary incontinence: International Consultation on Incontinence Questionnaire Short Form (ICIQ-SF), Incontinence Impact Questionnaire-7 (IIQ-7) and Urogenital Distress Inventory-6 (UDI-6). Subsequently, we compared group 1 versus group 2, group 3 versus group 4 and group 5 versus group 6.
Results: We did not find any significant difference in age, BMI or incontinence scores between groups 1 and 2, 3 and 4, or 5 and 6.
Conclusions: EA did not affect the onset of PUI in the medium term, regardless of the mode of delivery.
abstract_id: PUBMED:34419692
Effect of Epidural Analgesia on Pelvic Floor Dysfunction at 6 Months Postpartum in Primiparous Women: A Prospective Cohort Study. Introduction: Epidural analgesia has become a universal intervention for relieving labor pain, and its effect on the pelvic floor is controversial.
Aim: To investigate the effect of epidural analgesia on pelvic floor dysfunction (PFD) in primiparous women at 6 months postpartum.
Methods: We performed a prospective cohort study involving 150 primiparous women in preparation for vaginal delivery, with 74 (49.3%) receiving epidural analgesia. Baseline demographic and intrapartum data were collected. At 6 months postpartum, PFD symptoms, including stress urinary incontinence, overactive bladder, defecation disorder, pelvic organ prolapse, and 4 kinds of sexual dysfunction (arousal disorder, low sexual desire, dyspareunia, and orgasm disorder), were evaluated. Pelvic floor muscle (PFM) function and postpartum depression were also assessed. Multivariate logistic regression was applied to identify factors associated with the PFD symptoms affected by epidural analgesia.
Main Outcome Measure: PFD symptoms and sexual dysfunction were evaluated through Pelvic Floor Distress Inventory-20 (PFDI-20) and Female Sexual Function Index (FSFI-12). PFM function was examined with palpation and surface electromyography (sEMG). Postpartum depression was assessed using Self-Rating Depression Scale (SDS).
Results: At 6 months postpartum, women who delivered with epidural analgesia had a higher incidence of dyspareunia (43.2% vs 26.3%, P <0.05) and longer first, second, and total stage of labor durations (P <0.01) than those who without. No significant difference in other PFD symptoms or PFM function was found between the 2 groups (P >0.05). Multivariate logistic regression revealed that epidural analgesia (OR = 3.056, 95% CI = 1.217-7.671) and SDS scores (OR = 1.066, 95% CI = 1.009-1.127) were independent risk factors for dyspareunia.
Conclusion: At 6 months postpartum in primiparous women, epidural analgesia was associated with an increased risk of postpartum dyspareunia and longer labor durations, which deserves attention for rehabilitation after delivery. Future studies with a larger sample size are needed to evaluate the impact of epidural analgesia on other PFD symptoms. Du J, Ye J, Fei H, et al. Effect of Epidural Analgesia on Pelvic Floor Dysfunction at 6 Months Postpartum in Primiparous Women: A Prospective Cohort Study. Sex Med 2021;9:100417.
abstract_id: PUBMED:31802160
Does epidural anesthesia influence pelvic floor muscle endurance and strength and the prevalence of urinary incontinence 6 weeks postpartum? Introduction And Hypothesis: With the increasingly extensive application of epidural analgesia, its effect on pelvic floor function outcomes has received growing attention. The aim of the study is to determine the possible effect of epidural analgesia on pelvic floor muscle (PFM) endurance and strength and the prevalence of urinary incontinence (UI) and stress urinary incontinence (SUI) at 6 weeks postpartum.
Methods: This is a retrospective cohort study of 333 primiparous women after vaginal delivery. At 6 weeks postpartum, a vaginal balloon connected to a high-precision pressure transducer was used to measure PFM strength and endurance. SUI/UI was determined using the verified Chinese International Classification of Urinary Incontinence Short Form (ICIQ-UI-SF) questionnaire. Statistical analysis was performed using binary logistic regression and multiple linear regression analysis.
Results: Women in the epidural analgesia group experienced longer first and second stages of labor (p < 0.05). There were no statistically significant differences in the rates of perineal lacerations, forceps assistance or episiotomy between women with or without epidural analgesia (p > 0.05). No statistically significant differences were found in PFM endurance (B: 0.933, 95% confidence interval (CI): -1.413 to 3.278, p: 0.435) or PFM strength (B: 0.044, 95% CI: -3.204 to 3.291, p: 0.979) between these two groups. In addition, the prevalence of UI (30.77% vs. 26.87%) and SUI (21.54% vs. 16.42%) did not differ significantly between women with and without epidural analgesia (p > 0.05).
Conclusions: PFM function and UI prevalence at 6 weeks postpartum are not significantly affected by epidural analgesia.
abstract_id: PUBMED:12011873
The effects of epidural analgesia on labor, maternal, and neonatal outcomes: a systematic review. Mothers given an epidural rather than parenteral opioid labor analgesia report less pain and are more satisfied with their pain relief. Analgesic method does not affect fetal oxygenation, neonatal pH, or 5-minute Apgar scores; however, neonates whose mothers received parenteral opioids require naloxone and have low 1-minute Apgar scores more frequently than do neonates whose mothers received epidural analgesia. Epidural labor analgesia does not affect the incidence of cesarean delivery, instrumented vaginal delivery for dystocia, or new-onset long-term back pain. Epidural analgesia is associated with longer second-stage labor, more frequent oxytocin augmentation, hypotension, and maternal fever (particularly among women who shiver) but not with longer first-stage labor. Analgesic method does not affect lactation success. Epidural use and urinary incontinence are associated immediately postpartum but not at 3 or 12 months. The mechanisms of these unintended effects need to be determined to improve epidural labor analgesia.
abstract_id: PUBMED:12005470
Epidural analgesia: effects on labor progress and maternal and neonatal outcome. The intended and unintended effects of epidural labor analgesia are reviewed. Mothers randomized to epidural rather than parenteral opioid analgesia have better pain relief. Fetal oxygenation is not affected by analgesic method; however, neonates whose mothers received intravenous or intramuscular opioids rather than epidural analgesia require more naloxone and have lower Apgar scores. Epidural analgesia does not affect the rates of cesarean delivery, obstetrically indicated instrumented vaginal delivery, neonatal sepsis, or new-onset back pain. Epidural analgesia is associated with longer second labor stages, more frequent oxytocin augmentation, and maternal fever (particularly among women who shiver and women receiving epidural analgesia for > 5 hours) but not with longer first labor stages. Epidural analgesia has no effect on lactation success, but intrapartum opioids decrease it. Epidural use and urinary incontinence are weakly, but probably not causally, associated. Epidural labor analgesia would improve if the mechanisms of these unintended effects could be determined.
abstract_id: PUBMED:12466906
Effects of epidural analgesia on pelvic floor function after spontaneous delivery: a longitudinal retrospective study. The aim of the study was to assess the effects of epidural analgesia on pelvic floor function. Eighty-two primiparous women (group 1, consisting of 41 given an epidural, and group 2 of 41 not given an epidural) were investigated during pregnancy and at 2 and 10 months after delivery by a questionnaire, clinical examination, and assessment of bladder neck behavior, urethral sphincter function and intravaginal/intra-anal pressures. The prevalence of stress urinary incontinence was similar in both groups at 2 months (24% vs. 17%, P = 0.6) and 10 months (22% vs. 7%, P = 0.1), as was the prevalence of decreased sexual vaginal response at 10 months (27% vs. 10%, P = 0.08). Bladder neck behavior, urethral sphincter function and intravaginal and intra-anal pressures showed no significant differences between the two groups. Ten months after spontaneous delivery, there were no significant differences in the prevalence of stress urinary incontinence and decreased sexual vaginal response, or in bladder neck behavior, urethral sphincter function and pelvic floor muscle strength between women who had or had not had epidural analgesia.
abstract_id: PUBMED:15567685
Medullar adhesive arachnoiditis: a late complication after obstetrical epidural analgesia. A 30-year-old woman, G3P3, was progressively affected by spastic paraparesis with sensory loss and urinary incontinence due to medullar adhesive arachnoiditis occurring five months after epidural analgesia for a repeat cesarean section. Magnetic resonance imaging showed a voluminous subarachnoid cyst and a septated syringomyelic cavitation, attributed to metabisulfite, the preservative in the epinephrine, and to multiple lidocaine injections through the catheter in the postoperative period. Despite two decompressive neurosurgical operations, the neurological state of the patient continues to worsen.
Answer: The role of epidural analgesia in postpartum urinary incontinence (UI) is a subject of ongoing research and debate. A systematic review found that out of 23 articles included, 21 showed a non-significant association between epidural analgesia and postpartum UI, suggesting that there appears to be no association between the two, and thus, epidural analgesia should not be feared by pregnant women because of a possible increased risk of UI (PUBMED:34586440).
However, other studies have reported varying findings. One propensity score-matched retrospective cohort study found that epidural anesthesia during labor was independently associated with UI in the early postpartum period, particularly stress urinary incontinence (SUI), and the association was more pronounced among older women and those with macrosomia (PUBMED:37716951). Another study indicated that epidural analgesia is associated with an increased risk of prolonged labor and instrumental and cesarean delivery but is not related to increased postpartum SUI (PUBMED:17596005).
A case-control study found no significant medium-term effect of epidural analgesia on the onset of postpartum urinary incontinence, regardless of the mode of delivery (PUBMED:26135762). Similarly, a prospective cohort study reported that at 6 months postpartum, epidural analgesia was associated with an increased risk of postpartum dyspareunia but not with other pelvic floor dysfunction symptoms or pelvic floor muscle function (PUBMED:34419692).
Another retrospective cohort study concluded that pelvic floor muscle function and the prevalence of UI at 6 weeks postpartum are not significantly affected by epidural analgesia (PUBMED:31802160). Longitudinal retrospective research also found no significant differences in the prevalence of stress urinary incontinence and decreased sexual vaginal response, or in bladder neck behavior, urethral sphincter function, and pelvic floor muscle strength between women who had or had not had epidural analgesia (PUBMED:12466906).
In summary, while some studies suggest an association between epidural analgesia and postpartum urinary incontinence, particularly stress urinary incontinence, others do not find a significant link. The discrepancies in findings may be due to differences in study design, population, and other confounding factors. More research is needed to fully understand the relationship between epidural analgesia and postpartum urinary incontinence. |
Instruction: Selective capacity of glass-wool filtration for the separation of human spermatozoa with condensed chromatin: a possible therapeutic modality for male-factor cases?
Abstracts:
abstract_id: PUBMED:7606151
Selective capacity of glass-wool filtration for the separation of human spermatozoa with condensed chromatin: a possible therapeutic modality for male-factor cases? Purpose: The aim of this study was to evaluate chromatin condensation of human spermatozoa following swim-up compared to glass-wool separation. Semen aliquots from men attending an andrological outpatient clinic were processed by means of a swim-up procedure and glass-wool filtration. Chromatin condensation was recorded using aniline blue staining and results were reported according to color intensity of stained sperm heads. Morphometric measurements of sperm heads were performed on stained sperm samples.
Results: Glass-wool filtration resulted (i) in a significantly higher total motile sperm count (P < 0.0005) compared to swim-up and (ii) in a significantly higher percentage of normal chromatin-condensed spermatozoa compared to the ejaculate (P < 0.01).
Conclusion: In contrast, comparing swim-up to the ejaculate, the percentage of matured nuclei (unstained spermatozoa) retrieved following swim-up was significantly lower (P < 0.005). Glass-wool filtration separates human spermatozoa according to motility and size of the sperm head. The size of the sperm head closely correlated with the chromatin condensation quality.
abstract_id: PUBMED:7308506
Effect of glass wool filtration on ultrastructure of human spermatozoa. The results of this study strongly suggest that filtration by glass wool can induce damage to the membrane and acrosome of the heads of some spermatozoa in a population. It is possible that the potential fertilizing capacity of a population of human spermatozoa may be reduced as a consequence of these alterations, especially those to the acrosome. The results suggest that sufficient clinical application of glass wool filtration in artificial insemination is the only way to evaluate both the potential benefits of this process and the potential drawbacks to efficiency that may be caused by a degree of ultrastructural damage.
abstract_id: PUBMED:2632658
Glass wool-filtered spermatozoa and their oocyte penetrating capacity. The capacity of glass wool-filtered spermatozoa to penetrate zona-free hamster oocytes was studied. As compared to prefiltered sperm samples, oocyte penetration was significantly increased. A significant increase in the penetration rate for the filtered sperm population was noted even after the sperm motility in the filtrate was adjusted with medium equal to that of the prefiltered sample. However, no significant differences in oocyte penetration were seen between the prefiltered and the filtered sperm population when the filtered sperm samples were diluted with nonviable spermatozoa. These results show that glass wool filtration yields a sperm population with a greater penetrating capacity. It was concluded that motility alone could not account for the improved penetrability and that the removal of nonviable spermatozoa may at least, in part, be responsible for this effect.
abstract_id: PUBMED:3350941
Glass wool column filtration of human semen: relation to swim-up procedure and outcome of IVF. The number and viability of spermatozoa recovered by glass wool column filtration and a swim-up procedure were compared using different types of ejaculates, such as normal, asthenozoospermic and very viscous oligozoospermic semen. The filtration procedure resulted in a significantly (P < 0.01) higher recovery of viable spermatozoa than the swim-up procedure from all types of ejaculates studied. Further, the spermatozoa from 50 (78.1%) of the 64 ejaculates filtered through a glass wool column fertilized at least one intact human egg in an in-vitro fertilization (IVF) procedure. It is concluded that glass wool column filtration is superior to the swim-up procedure since it yields a higher recovery of viable spermatozoa that are potentially fertile. Therefore, the glass wool column filtration procedure used to prepare spermatozoa may be of benefit for IVF, intra-uterine insemination and GIFT (gamete intra-Fallopian transfer), especially in cases of poor quality semen.
abstract_id: PUBMED:9806275
Glass wool filtration leads to a higher percentage of spermatozoa with intact acrosomes: an ultrastructural analysis. We investigated the possibility of ultrastructural damage to human spermatozoa induced by different sperm preparation techniques. Ejaculates from 20 normozoospermic men were divided into equal aliquots and processed by glass wool filtration, Percoll density gradient centrifugation, and a simple two-step centrifugation procedure which served as a control. The evaluation of 60 spermatozoa from each of 20 test subjects (in all, n = 1200) ensured that a sufficiently large number of spermatozoa were investigated. Ultrastructural damage was assessed by scanning electron microscopy. We investigated the state of the acrosome after sperm preparation and measured the percentage of intact spermatozoal structures compared with that of the control. Compared with Percoll density gradient centrifugation, glass wool filtration yielded a significantly increased proportion of intact acrosomes. However, both preparations gave significantly better results than the control. In conclusion, both glass wool filtration and Percoll centrifugation are efficient techniques for the accumulation of spermatozoa with intact acrosomes. Because of the significantly higher percentage of intact acrosomes, glass wool filtration appears to be the more appropriate method. The significance of the conspicuous bending of sperm tails after Percoll centrifugation is not yet known.
abstract_id: PUBMED:11472334
An improved method of sperm selection by glass wool filtration. An improved method of sperm selection by glass wool filtration is introduced. After incubation of glass wool filtrates for 30 min at 37 degrees C in a conical-shaped 1.5-ml tube, an enrichment of highly motile spermatozoa was found in the bottom layer of the tube. The effect turned out to be dependent on the conical shape of the tube, as it was not observed in flat-bottomed tubes. Native ejaculates (obtained from 30 men) and their glass wool filtrates were analysed by cell counter, computer-assisted sperm-motility analysis, morphological differentiation and supravital staining of spermatozoa. When 400 microl of ejaculate, diluted with 800 microl of medium, was applied to the top of a column consisting of a 1-ml disposable syringe barrel gently packed with 15 mg of glass wool to a depth of 6 mm, an enrichment of viable spermatozoa was found in the first three 100-microl fractions taken from the bottom of the tube. It is the simplicity of this technique that makes it so easily applicable.
abstract_id: PUBMED:2624014
The effect of glass wool filtration on human spermatozoa--a comparison with the swim-up technique. Twenty fresh human ejaculates (6 normozoospermic, 7 oligozoospermic and 7 asthenozoospermic) were prepared both by the swim-up method and by glass wool filtration. With both techniques the motility of the spermatozoa improved markedly. In cases of asthenozoospermia and oligozoospermia, motility increased significantly compared with the native ejaculate. Of the motile spermatozoa in the ejaculate, 45-60% were recovered in the filtrate after glass wool filtration or in the supernatant after the swim-up procedure.
abstract_id: PUBMED:8473461
Synergistic effect of TEST--yolk buffer treatment and glass wool filtration of spermatozoa on the outcome of the hamster oocyte penetration assay. Enhanced penetration of zona-free hamster oocytes occurs after spermatozoa have either been treated with TEST--yolk buffer or processed by glass wool filtration. The present study was conducted to determine if a synergistic effect on fertilizing ability could be achieved by combining these two treatments. Ejaculates (n = 16) were treated with TEST--yolk buffer (TYB) and divided into two aliquots. One aliquot was kept as control and the other aliquot was filtered through glass wool prior to the sperm penetration assay. The TYB--glass wool filtered spermatozoa penetrated a significantly higher percentage of oocytes than the TYB treated spermatozoa.
abstract_id: PUBMED:2924937
Human sperm selection by glass wool filtration and two-layer, discontinuous Percoll gradient centrifugation. Glass wool filtration and two-layer, discontinuous Percoll (Pharmacia, Uppsala, Sweden) density gradient centrifugation resulted in an average recovery of 50% to 70% of the progressively motile and about 50% of the hypoosmotic swelling (HOS)-positive spermatozoa. Glass wool filtration tended to be more successful than Percoll centrifugation when the ejaculates were asthenozoospermic or produced a suspect/abnormal HOS test. After selection, the acrosin activity increased approximately two- to threefold, but no significant improvement in the percentage of normal sperm forms occurred. Experiments with mixtures of untreated and frozen-thawed ejaculates confirmed that glass wool filtration is more effective in removing nonmotile and HOS-negative spermatozoa than the two-layer Percoll centrifugation technique when the percentage of these types of spermatozoa in the ejaculate is high. The simplicity of these techniques and the good recovery of apparently viable spermatozoa makes these methods more desirable than other, more complicated techniques or procedures that yield a lower recovery of motile spermatozoa.
abstract_id: PUBMED:2061686
Glass-bead column separation of motile and nonmotile human spermatozoa. Glass-bead columns were tested for their efficiency in concentrating motile human spermatozoa from frozen semen samples. The data show that glass-bead filtration concentrates the motile gametes in each sample and is significantly more efficient than the swim-up method for obtaining populations of motile spermatozoa. The data suggest that this method can be applied clinically to obtain motile spermatozoa from poor-quality semen for use in in vitro fertilization.
Answer: The selective capacity of glass wool filtration for the separation of human spermatozoa with condensed chromatin has been studied, and the findings suggest that it could be a therapeutic option in cases of male-factor infertility. Glass wool filtration has been shown to result in a significantly higher total motile sperm count and a significantly higher percentage of normal chromatin-condensed spermatozoa compared to the ejaculate and the swim-up procedure (PUBMED:7606151). This indicates that glass wool filtration can separate spermatozoa according to motility and the size of the sperm head, which closely correlates with chromatin condensation quality.
However, it is important to note that glass wool filtration can induce damage to the membrane and acrosome of the heads of some spermatozoa in a population, which may potentially reduce the fertilizing capacity of human spermatozoa (PUBMED:7308506). Despite this, glass wool-filtered spermatozoa have shown an increased capacity to penetrate zona-free hamster oocytes compared to prefiltered sperm samples, suggesting that filtration yields a sperm population with greater penetrating capacity (PUBMED:2632658).
Furthermore, glass wool column filtration has been found to be superior to the swim-up procedure in terms of recovering viable spermatozoa that are potentially fertile, which may benefit various assisted reproductive techniques such as IVF, intra-uterine insemination, and GIFT (PUBMED:3350941). Additionally, glass wool filtration leads to a higher percentage of spermatozoa with intact acrosomes, which is crucial for fertilization (PUBMED:9806275).
An improved method of sperm selection by glass wool filtration has also been introduced, which enriches highly motile spermatozoa after incubation, further enhancing its potential as a therapeutic modality (PUBMED:11472334).
In summary, glass wool filtration has demonstrated the capacity to select spermatozoa with better motility and chromatin condensation, which are important factors for successful fertilization. Despite some concerns about potential ultrastructural damage, the technique's benefits in improving sperm quality for assisted reproductive technologies suggest it could be a valuable therapeutic option for male-factor infertility (PUBMED:7606151; PUBMED:7308506; PUBMED:2632658; PUBMED:3350941; PUBMED:9806275; PUBMED:11472334). |
Instruction: Lying-down nystagmus and head-bending nystagmus in horizontal semicircular canal benign paroxysmal positional vertigo: are they useful for lateralization?
Abstracts:
abstract_id: PUBMED:25413310
Lying-down nystagmus and head-bending nystagmus in horizontal semicircular canal benign paroxysmal positional vertigo: are they useful for lateralization? Background: Lateralization of horizontal semicircular canal benign paroxysmal positional vertigo (HSC-BPPV) is very important for successful repositioning. The directions of lying-down nystagmus (LDN) and head-bending nystagmus (HBN) have been used as ancillary findings to identify the affected sites. This retrospective study was performed to evaluate the lateralizing values of LDN and HBN using clinical and laboratory findings for lateralizing probabilities in patients with HSC-BPPV.
Methods: For 50 HSC-BPPV patients with asymmetric direction-changing horizontal nystagmus (DCHN) during the head-rolling test (HRT) using Frenzel goggles, the directions of LDN and HBN were evaluated and compared to those determined by video-oculography. Directional LDN was defined as the contralesional direction of nystagmus in geotropic types and the ipsilesional direction in apogeotropic types. Directional HBN was defined as the opposite direction relative to directional LDN. We also analyzed LDN and HBN in 14 patients with a history of ipsilesional peripheral vestibulopathy, caloric abnormality or conversion from other types of BPPV (such as probable localized HSC-BPPV, pro-BPPV).
Results: LDN and HBN were seen in 68% (34/50) and 76% (38/50) of patients, respectively. Of these, 19 (55.9%) and 28 (73.7%) patients showed directional LDN and HBN, respectively. The proportion of patients with directional LDN and HBN was much smaller among the pro-BPPV patients (4/12 for LDN, 3/10 for HBN).
Conclusions: LDN and HBN did not seem to predict lateralization in patients with HSC-BPPV. To improve the prediction of lateralization of HSC-BPPV, it is necessary to modify the maneuvers used to elicit LDN or HBN, especially in cases of symmetric DCHN during HRT.
abstract_id: PUBMED:31282790
Lateralization of horizontal semicircular canal benign paroxysmal positional vertigo (HSC-BPPV) with the latency test: a pilot study. Background: The treatments of horizontal semicircular canal benign paroxysmal positional vertigo (HSC-BPPV) have low remission rates ranging between 60% and 90%, connected to the difficulty in correctly identifying the affected side of HSC-BPPV. Objective: To propose and compare the efficacy of the latency test (LT) in identifying the affected ear in patients with HSC-BPPV. Materials and methods: Twenty-one subjects diagnosed with HSC-BPPV, as ascertained by the head rolling test (HRT), were prospectively identified. Lateralization was assessed with pseudo-spontaneous nystagmus, lying-down nystagmus, bow and lean (BLT), HRT and LT tests. LT is a novel technique involving a 180° movement of the head and the analysis of the time required to reverse the nystagmus. Results: About 57% of patients were diagnosed with geotropic, and 43% with apogeotropic type HSC-BPPV. LT achieved a correct side diagnosis in 86%. Efficacy analysis of the tests compared to HRT revealed a fair level of agreement for the lying-down test (κ = 0.32, p < .05), a slight level of agreement for BLT (κ = 0.19, p < .05) and a substantial level of agreement for LT (κ = 0.71, p < .001). Conclusions and significance: LT showed a substantial level of agreement compared to HRT in identifying the affected ear in patients with HSC-BPPV in this pilot study.
abstract_id: PUBMED:26758464
Lateralization of horizontal semicircular canal canalolithiasis and cupulopathy using bow and lean test and head-roll test. Accurate lateralization is important to improve treatment outcomes in horizontal semicircular canal (HSCC) benign paroxysmal positional vertigo (BPPV). To determine the involved side in HSCC-BPPV, the intensity of nystagmus has been compared in a head-roll test (HRT) and the direction of nystagmus was evaluated in a bow and lean test (BLT). The aim of this study is to compare the results of a BLT with those of a HRT for lateralization of HSCC-canalolithiasis and cupulopathy (heavy cupula and light cupula), and evaluate treatment outcomes in patients with HSCC-canalolithiasis. We conducted retrospective case reviews in 66 patients with HSCC-canalolithiasis and 63 patients with HSCC-cupulopathy. The affected side was identified as the direction of bowing nystagmus on BLT in 55 % (36 of 66) of patients with canalolithiasis, which was concordant with the HRT result in 67 % (24 of 36) of cases (concordant group). Lateralization was determined by comparison of nystagmus intensity during HRT in 30 patients who did not show bowing or leaning nystagmus. The remission rate after the first treatment was 71 % (17 of 24) in the concordant group and 45 % (5 of 11) in the discordant group. Both bowing and leaning nystagmus were observed in all patients with cupulopathy, and the side of the null plane was identified as the affected side. In conclusion, bowing and/or leaning nystagmus were observed in only 55 % of patients with HSCC-canalolithiasis, and the first treatment based on the result of BLT alone was effective in only 45 % of the patients in whom the BLT and HRT were discordant, which may suggest that the usefulness of BLT in lateralizing the HSCC-canalolithiasis may be limited.
abstract_id: PUBMED:16639276
Value of lying-down nystagmus in the lateralization of horizontal semicircular canal benign paroxysmal positional vertigo. Background: Horizontal canal benign paroxysmal positional vertigo is characterized by horizontal direction-changing nystagmus induced by lateral head turning in supine position. According to Ewald's second law, the direction of head turning that creates a stronger response represents the affected side in geotropic nystagmus and the healthy side in apogeotropic nystagmus. However, it may not always be possible to lateralize the involved ear only by comparing the intensity of the nystagmus. We studied the values of nystagmus induced by position change from sitting to supine in the lateralization of horizontal canal benign paroxysmal positional vertigo.
Methods: A retrospective study of 54 patients who had been diagnosed as having horizontal canal benign paroxysmal positional vertigo at the Dizziness Clinic of Seoul National University Bundang Hospital from May 2003 to February 2004 was performed. The directions of the nystagmus induced by lying down were compared with those determined by Ewald's second law.
Results: Of the 54 patients, 32 (20 apogeotropic and 12 geotropic) showed horizontal nystagmus induced by lying down. The nystagmus tended to be ipsilesional in apogeotropic patients (80%) and contralesional in their geotropic counterparts (75%).
Conclusion: In horizontal canal benign paroxysmal positional vertigo, lying-down nystagmus mostly beats toward the involved ear in the apogeotropic type and directs to the healthy ear in the geotropic type. The direction of lying-down nystagmus may help lateralizing the involved ear in horizontal canal benign paroxysmal positional vertigo.
abstract_id: PUBMED:32310200
The association of head shaking nystagmus with head-bending and lying-down nystagmus in horizontal canal benign paroxysmal positional vertigo. Background: In benign paroxysmal positional vertigo (BPPV), the otolithic debris may alter the dynamics of the endolymph or cupula during head-shaking. This dynamic may generate head-shaking nystagmus (HSN), but the exact pathomechanism of HSN in BPPV has not been elucidated. The association of HSN with positional nystagmus induced by head-bending or lying-down may help in understanding the dynamics of HSN.
Objective: To assess the presence, pattern, and relationship with head-bending nystagmus (HBN) and lying-down nystagmus (LDN) of HSN in horizontal canal (HC)-BPPV.
Methods: We recruited 173 patients with HC-BPPV (76 geotropic and 97 apogeotropic). We analyzed the pattern of HSN and its correlation with HBN and LDN.
Results: Half of the patients (83/173, 48%) with HC-BPPV showed HSN. No directional preponderance of HSN was found in patients with either geotropic or apogeotropic HC-BPPV (p = 0.488). The presence of HSN was related to the occurrence of HBN in both the geotropic (p = 0.005) and the apogeotropic type (p = 0.001). The direction of HSN was the same as that of HBN and opposite to that of LDN in both the geotropic and the apogeotropic type.
Conclusions: HSN was frequently found in patients with HC-BPPV and was related to HBN and LDN. HSN in BPPV may be driven by otolith movements that alter endolymph dynamics.
abstract_id: PUBMED:25996844
Head-Jolting Nystagmus: Occlusion of the Horizontal Semicircular Canal Induced by Vigorous Head Shaking. Importance: We report a new syndrome, which we are calling head-jolting nystagmus, that expands the differential diagnosis of head movement-induced paroxysmal vertigo.
Observations: Two male patients (65 and 58 years old) described rotational vertigo after violent and brief (1- to 2-second) oscillations of the head (head jolting) that triggered intense horizontal nystagmus lasting 45 seconds. Accelerations of the head required to induce these episodes could only be achieved by the patients themselves. In case 1, the episodes gradually disappeared over a 6-year period. In case 2, magnetic resonance imaging (3-T) suggested a filling defect within the left horizontal semicircular canal. He underwent surgical canal plugging in March 2013 that resolved the symptoms.
Conclusions And Relevance: We attribute head-jolting nystagmus to dislodged material within the horizontal semicircular canal and provide a mechanistic model to explain its origin.
abstract_id: PUBMED:36036066
The occurrence and evaluation methods of horizontal semicircular canal dysfunction in patients with common vestibular diseases. Objective: To understand the occurrence of horizontal semicircular canal functional impairment in patients with common vestibular diseases and to explore the characteristics and clinical value of different evaluation methods of the horizontal semicircular canal. Methods: From July 2013 to December 2016, patients who attended the vertigo clinic of the First Affiliated Hospital of Dalian Medical University and completed more than three horizontal semicircular canal function tests were retrospectively analyzed. A total of 396 patients diagnosed with vestibular migraine (VM), Ménière's disease (MD), benign paroxysmal positional vertigo (BPPV) or vestibular neuritis (VN) and 104 patients with an unknown diagnosis were enrolled. The results of the caloric test (CT), rotation test (RT), head-shaking nystagmus test (HSN) and video head impulse test (vHIT) were collected and the abnormal detection rates of the different methods were calculated. The sensitivity, specificity and coincidence rate of the various methods were statistically analyzed using CT as the gold standard. Results: ①The abnormal rates of the four evaluation methods from high to low were HSN, CT, RT and vHIT (51.20%, 50.80%, 25.76% and 19.74%, respectively); ②Taking CT as the gold standard, among these four common vestibular diseases, the sensitivity and specificity of vHIT were 0.13-0.41 and 0.69-1.00, the sensitivity and specificity of HSN were 0.44-0.76 and 0.29-0.69, and the sensitivity and specificity of RT were 0.25-0.45 and 0.50-0.84; ③According to statistical analysis, only the HSN and CT results showed no statistically significant difference across the four diseases. There was no significant difference between RT and CT in VM and BPPV, or between vHIT and CT in BPPV. Conclusion: The abnormal detection rate of HSN is the highest among common vestibular diseases, so HSN could be recommended as a routine vestibular function screening item. The specificity of vHIT is the highest and is worth promoting. CT remains an irreplaceable method for evaluating the function of the horizontal semicircular canal.
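To make the sensitivity and specificity figures above concrete, the short Python sketch below computes both measures for a single index test scored against the caloric test as the gold standard. It is illustrative only: the function name and the 2x2 counts are hypothetical and are not data from the cited study.

# Illustrative only: hypothetical 2x2 counts for one index test judged against
# the caloric test (CT) as the gold standard; not data from PUBMED:36036066.
def sensitivity_specificity(tp, fn, fp, tn):
    """Return (sensitivity, specificity) from true/false positives and negatives."""
    return tp / (tp + fn), tn / (tn + fp)

# Example: of 40 CT-abnormal ears the index test flags 18, and of 60 CT-normal
# ears it correctly clears 48.
sens, spec = sensitivity_specificity(tp=18, fn=22, fp=12, tn=48)
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")  # prints 0.45 and 0.80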
abstract_id: PUBMED:34447005
Lying-Down Nystagmus (LDN) - When a Lateralizing Sign of Secondary Importance Attains Ascendancy in the Diagnosis of Horizontal Semicircular Canal Benign Paroxysmal Positional Vertigo (HSC-BPPV). Background: The diagnosis of horizontal semicircular canal benign paroxysmal positional vertigo (HSC-BPPV) mainly depends on eliciting asymmetric horizontal positional nystagmus on rolling the head to either side during the diagnostic supine roll test (SRT). The asymmetry in the strength of the elicited horizontal positional nystagmus during the SRT is explained by Ewald's second law and is crucial for lateralizing the affected ear. Rarely, the horizontal positional nystagmus elicited on head roll to either side during the SRT is of symmetric strength. In such situations, signs of secondary lateralizing value become useful, because the therapeutic repositioning maneuvers require the affected side to be known precisely.
Aim: The submitted article is a case report.
Results And Discussion: A 38-year-old male with a two-day history of vertigo on rolling to either lateral recumbent position was seen in the second week of March 2019. His SRT elicited a lying-down nystagmus (LDN) to the right, while the head roll to either side elicited geotropic horizontal positional nystagmus of symmetric strength. The symmetric strength of the positional nystagmus elicited on SRT to either side elevated LDN from a lateralizing sign of secondary importance to one that reliably lateralized the involved horizontal semicircular canal. At two short-term follow-ups, 1 hour and 24 hours after the therapeutic Gufoni maneuver, the patient had neither vertigo nor any nystagmus elicited on the verifying supine roll test.
Conclusion: In rare instances, LDN, which is a lateralizing sign of secondary importance becomes pivotal in the management of HSC-BPPV especially when the affected side needs to be precisely determined for the execution of the therapeutic repositioning maneuver.
abstract_id: PUBMED:33613438
A Show of Ewald's Law: I Horizontal Semicircular Canal Benign Paroxysmal Positional Vertigo. Objective: To evaluate horizontal semicircular canal (HSC) effects according to Ewald's law and the nystagmus characteristics of horizontal semicircular canal benign paroxysmal positional vertigo (HSC-BPPV) in the supine roll test. Methods: Patients with HSC-BPPV (n = 72) and healthy subjects (n = 38) were enrolled. Latency, duration, and intensity of nystagmus elicited by the supine roll test were recorded using video nystagmography. Results: In patients with HSC-BPPV, horizontal nystagmus could be elicited by the right/left head position (positional nystagmus) and during head-turning (head-turning nystagmus), and the nystagmus direction was the same as that of head turning. Mean intensity values of head-turning nystagmus in HSC-BPPV patients were (44.70 ± 18.24)°/s and (44.65 ± 19.27)°/s on the affected and unaffected sides, respectively, which was not a significant difference (p = 0.980), while those for positional nystagmus were (40.81 ± 25.56)°/s and (17.69 ± 9.31)°/s (ratio, 2.59 ± 1.98:1), respectively, representing a significant difference (p < 0.0001). There was no positional nystagmus in 49 HSC-BPPV patients after repositioning treatment, nor in the 38 healthy subjects. No significant difference in head-turning nystagmus was detected in HSC-BPPV patients with or without repositioning. Conclusions: The direction and intensity of nystagmus elicited by the supine roll test in patients with HSC-BPPV were broadly consistent with the physiological nystagmus associated with stimulation of the same HSC by a single factor. Our findings suggest that HSC-BPPV can be a show of Ewald's law in the human body.
abstract_id: PUBMED:17228643
Benign paroxysmal positional vertigo of the horizontal semicircular canal. Objective: Assessment of 8 new cases of benign paroxysmal positional vertigo of the horizontal semicircular canal (BPPV-HC), since this variant of benign positional vertigo occurs rarely. An oculographic study of the features of the positional response was carried out. Electrooculography (EOG) allows an exhaustive study of positional nystagmus.
Material And Methods: We retrospectively studied 8 patients with BPPV-HC and a positive head rotation manoeuvre recorded by EOG techniques. Possible alterations in other EOG tests in this group of patients were also analyzed.
Results: Most of the patients (7/8) exhibited bilateral horizontal geotropic direction-changing nystagmus. One patient exhibited apogeotropic horizontal nystagmus. Caloric hypofunction was found in 5/8 cases.
Conclusions: Our series confirms that BPPV-HC is an uncommon disorder. In most cases, the provocative manoeuvre generates bilateral horizontal geotropic direction-changing nystagmus that is probably due to canalithiasis of the horizontal semicircular canal. The remaining few cases exhibit an apogeotropic horizontal response that can be secondary to cupulolithiasis or to particles located in the anterior portion of the horizontal canal. The caloric test was abnormal in many cases and can help to localize the affected ear.
Answer: Lying-down nystagmus (LDN) and head-bending nystagmus (HBN) have been investigated for their utility in lateralizing the affected ear in horizontal semicircular canal benign paroxysmal positional vertigo (HSC-BPPV). However, the evidence suggests that their predictive value for lateralization is limited.
A retrospective study evaluated the lateralizing values of LDN and HBN in patients with HSC-BPPV and found that LDN and HBN were seen in 68% and 76% of patients, respectively. However, only a smaller proportion of patients showed directional LDN and HBN, and these findings did not seem to predict lateralization effectively in patients with HSC-BPPV (PUBMED:25413310).
Another study, which proposed the latency test (LT) for identifying the affected ear in HSC-BPPV patients, found that LT showed a substantial level of agreement with the head-rolling test (HRT), while the lying-down test showed only a fair level of agreement (PUBMED:31282790).
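For readers unfamiliar with how such agreement levels are quantified, the following minimal Python sketch computes Cohen's kappa for two binary (right/left) lateralization calls on the same patients. The function name and the counts are hypothetical and are not taken from any of the cited studies; kappa values of roughly 0.21-0.40 are conventionally read as "fair" and 0.61-0.80 as "substantial" agreement.

# Illustrative only: hypothetical counts of agreement between a reference test
# and a candidate lateralization test; not data from the cited studies.
def cohens_kappa(both_right, ref_right_only, test_right_only, both_left):
    """Cohen's kappa for two binary (right/left) ratings of the same patients."""
    n = both_right + ref_right_only + test_right_only + both_left
    observed = (both_right + both_left) / n                    # observed agreement
    p_ref_right = (both_right + ref_right_only) / n            # reference calls "right"
    p_test_right = (both_right + test_right_only) / n          # candidate calls "right"
    expected = p_ref_right * p_test_right + (1 - p_ref_right) * (1 - p_test_right)
    return (observed - expected) / (1 - expected)

# With these made-up counts the result is about 0.24, i.e. "fair" agreement.
print(round(cohens_kappa(both_right=7, ref_right_only=4, test_right_only=4, both_left=6), 2))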
In a study comparing the bow and lean test (BLT) with the HRT for lateralization of HSC-canalolithiasis and cupulopathy, it was found that bowing and/or leaning nystagmus were observed in only 55% of patients with HSC-canalolithiasis, and the first treatment based on the result of BLT alone was effective in only 45% of the patients in whom the BLT and HRT were discordant (PUBMED:26758464).
A study that looked at the value of LDN in the lateralization of HSC-BPPV found that LDN mostly beats toward the involved ear in the apogeotropic type and directs to the healthy ear in the geotropic type, suggesting that the direction of LDN may help lateralize the involved ear in HSC-BPPV (PUBMED:16639276).
In summary, while LDN and HBN may provide some clues for lateralization in HSC-BPPV, their predictive value is not consistently reliable, and other tests such as the latency test or head-rolling test may be more effective in identifying the affected ear. It is important to consider the entire clinical picture and use a combination of tests for accurate lateralization in HSC-BPPV. |
Instruction: Adolescent depressive symptoms as predictors of adult depression: moodiness or mood disorder?
Abstracts:
abstract_id: PUBMED:9892310
Adolescent depressive symptoms as predictors of adult depression: moodiness or mood disorder? Objective: The authors' goal was to examine the relationship between subclinical depressive symptoms in adolescence and major depressive episodes in adulthood.
Method: An epidemiologic sample of 776 young people received psychiatric assessments in 1983, 1985, and 1992. Among adolescents not meeting criteria for major depression, the authors estimated the magnitude of the association between subclinical adolescent depressive symptoms and adult major depression.
Results: Symptoms of major depression in adolescence strongly predicted an adult episode of major depression: having a number of depressive symptoms more than two standard deviations above the mean predicted a two-fold to three-fold greater risk for an adult major depressive episode.
Conclusions: Symptoms of depression in adolescence strongly predict an episode of major depression in adulthood, even among adolescents without major depression.
abstract_id: PUBMED:29878410
Adult mental health outcomes of adolescent depression: A systematic review. Background: Adolescent depression may increase risk for poor mental health outcomes in adulthood. The objective of this study was to systematically review the literature on the association between adolescent depression and adult anxiety and depressive disorders as well as suicidality.
Methods: EMBASE, MEDLINE, and PSYCinfo databases were searched and longitudinal cohort studies in which depression was measured in adolescence (age 10-19) and outcomes of depressive disorders, anxiety disorders, or suicidality were measured in adulthood (age 21+), were selected. Meta-analysis using inverse variance and random effects modeling, along with sensitivity analyses, were used to synthesize article estimates.
Results: Twenty articles were identified, representing 15 unique cohorts. Seventeen of 18 articles showed adolescent depression increased risk for adult depression; eleven pooled cohorts estimated that adolescents with depression had 2.78 (1.97, 3.93) times increased odds of depression in adulthood. Seven of eight articles that investigated the association between adolescent depression and any adult anxiety found a significant association. Three of five articles showed a significant association between adolescent depression and adult suicidality.
Conclusion: This review shows that adolescent depression increases the risk for subsequent depression later in life. Articles consistently found that adolescent depression increases the risk for anxiety disorders in adulthood, but evidence was mixed on whether or not a significant association existed between adolescent depression and suicidality in adulthood. Early intervention in adolescent depression may reduce long-term burden of disease.
abstract_id: PUBMED:7965577
Unidimensionality of the Brief Symptom Inventory (BSI) in adult and adolescent inpatients. This study investigated the factor structure of the Brief Symptom Inventory (BSI; Derogatis, 1992) for adult and adolescent psychiatric inpatients. The BSI was administered to 217 adults and 188 adolescents at admission and discharge from a private psychiatric hospital. Principal components factor analyses revealed that most variance among dimension scores was accounted for by one unrotated factor. Factorial invariance was evident across adult and adolescent samples for admission and discharge scores. Our findings are consistent with previous research on the BSI and Symptom Checklist-90-R (Derogatis, 1977), suggesting that both instruments measure primarily a unidimensional construct of general psychological distress.
abstract_id: PUBMED:8988957
Phenomenology of adolescent and adult mania in hospitalized patients with bipolar disorder. Objective: Although available data suggest that bipolar disorder most commonly begins in adolescence, it has often been underrecognized and misdiagnosed in this age group. The authors hypothesized that this might in part be because adolescent mania is phenomenologically different from adult mania. To test this hypothesis, they compared a cohort of adolescents hospitalized for acute mania with a group of hospitalized acutely manic adults.
Method: The authors compared symptomatic differences between 40 adolescent (ages 12-18 years) and 88 adult (ages 19-45 years) bipolar patients hospitalized for acute mania. They also compared the two groups with respect to demographic characteristics, psychiatric comorbidity, family history, and short-term outcome.
Results: Compared with adults, adolescent patients displayed a significantly higher rate of mixed bipolar disorder and a significantly lower rate of psychotic features (by DSM-III-R criteria), as well as higher ratings for many depressive symptoms (including suicidality and depressed mood) and lower ratings for thought disorder and delusions. Adolescents also displayed a significantly lower rate of substance abuse and significantly higher rates of familial mood disorder and drug abuse or dependence.
Conclusions: Significant differences were found in the phenomenology of adolescent and adult mania in this study. The reasons for these differences are not known. Possible explanations include artifact due to methodological limitations and differences between adolescents and adults in familial loading for mood or substance use disorders or in developmental or maturational stage.
abstract_id: PUBMED:35001472
Risk of major depressive disorder in adolescent and young adult cancer patients in Japan. Objective: To estimate the risk of major depressive disorder (MDD) in adolescent and young adult (AYA) patients with cancer in Japan and identify risk factors for MDD among these patients.
Methods: This was a matched cohort study using a large claims database in Japan. Included patients were aged 15-39 years, newly diagnosed with cancer during 2012-2017 and assessable for a follow-up period of 12 months. Kaplan-Meier estimates and Cox proportional hazards models were used to calculate hazard ratios (HR) and 95% confidence intervals (CI) for MDD in the AYA patients with cancer versus age-, sex- and working status-matched cancer-free controls. A subgroups analysis of the AYA patients with cancer was performed to explore MDD risk factors.
Results: A total of 3559 AYA patients with cancer and 35,590 matched controls were included in the analysis. Adolescent and young adult patients with cancer had a three-fold higher risk for MDD compared with cancer-free controls (HR, 3.12; 95% CI, 2.64-3.70). Among cancer categories with >100 patients, patients with multiple cancer categories, including those with metastatic cancer (HR, 6.73; 95% CI, 3.65-12.40) and leukemia (HR, 6.30; 95% CI, 3.75-10.58), had the greatest MDD risk versus matched controls. Relative to patients who received inpatient chemotherapy as initial treatment, patients without chemotherapy had a lower risk for MDD (HR, 0.43; 95% CI, 0.30-0.62).
Conclusions: Adolescent and young adult patients in Japan with cancer are at high risk for MDD. Particularly, those with multiple cancer categories, leukemia, and those who receive aggressive anticancer treatments should be monitored closely for symptoms of MDD.
abstract_id: PUBMED:2184797
Adult outcomes of childhood and adolescent depression. I. Psychiatric status. The present study was based on the clinical data summaries ("item sheets") of children who attended the Maudsley Hospital, London, England, during the late 1960s and early 1970s. These summaries were used to identify a group of 80 child and adolescent psychiatric patients with an operationally defined depressive syndrome. The depressed children were individually matched with 80 nondepressed psychiatric controls on demographic variables and nondepressive childhood symptoms by a computer algorithm. At follow-up, on average 18 years after the initial contact, information was obtained on the adult psychiatric status of 82% of the total sample. Adult assessments were made "blind" to case/control status. The depressed group was at an increased risk for affective disorder in adult life and had elevated risks of psychiatric hospitalization and psychiatric treatment. They were no more likely than the control group to have nondepressive adult psychiatric disorders. These findings suggested that there is substantial specificity in the continuity of affective disturbances between childhood and adult life.
abstract_id: PUBMED:8428889
Alcohol consumption in relation to other predictors of suicidality among adolescent inpatient girls. This study of 54 adolescent inpatient girls examined alcohol consumption in relation to depression severity and family dysfunction as predictors of suicidal ideation and behavior. Although alcohol consumption, depression severity, and family dysfunction were intercorrelated, regression analyses revealed their differential importance to the prediction of self-reported suicidal ideation and severity of clinician-documented suicidal ideation or behavior (none, ideation, intent, gesture, attempt). Self-reported ideation was strongly predicted by depression severity and family dysfunction; severity of clinician-documented suicidal ideation or behavior was predicted by alcohol consumption and family dysfunction. Implications for assessment and treatment are discussed.
abstract_id: PUBMED:28421345
Long-term predictors of anxiety and depression in adult patients with asthma. Background: It is well established that anxiety and depression are associated with asthma, but there is limited evidence about the persistence of anxiety/depression in asthma. The aim of our study was to assess the long-term predictors of anxiety and depression in adult asthmatic patients.
Methods: A total of 90 adult asthma patients (63 women, age 18-50 years) with different levels of disease control (28 uncontrolled and 34 partially controlled) were assessed at baseline and at follow-up after 7 years for anxiety, depression and asthma control. The same work-up on both occasions included: demographics, living conditions, medical history (e.g. comorbidities, adherence and exacerbations), Hospital Anxiety and Depression Scale (HADS), Asthma Quality of Life Questionnaire (AQLQ), disease control and lung function. Persistence was defined as the HADS scores for anxiety and depression present at baseline and follow-up.
Results: The HADS results at the follow-up visit showed 36 (40%) asthma patients with anxiety and 13 (14%) with depression, with persistence of anxiety in 17 (19%) and of depression in 7 (8%) patients. Significant predictors of anxiety at follow-up were the HADS and AQLQ results at baseline and several parameters of asthma control at follow-up (area under the curve [AUC] 0.917, 95% confidence interval [CI] 0.829-0.969, p < 0.001); significant predictors of depression were the AQLQ mood disorder domain, asthma control and lung function (AUC 0.947, 95% CI 0.870-0.986, p < 0.001).
Conclusion: Anxiety and depression persist over years in some patients with asthma. The association between mood disorders and asthma suggests potential mutual treatability.
abstract_id: PUBMED:8935214
Psychiatric diagnoses in the child and adolescent members of extended families identified through adult bipolar affective disorder probands. Objective: To investigate the type and distribution of psychiatric disorders in the child and adolescent members of extended pedigrees identified through bipolar probands.
Method: The child and adolescent offspring (24 male, 26 female, aged 6 to 17 years) and the adult parents (60) of 14 bipolar pedigrees ascertained for the National Institutes of Mental Health Genetics Initiative Study of Bipolar Affective Disorder were personally assessed using structured psychiatric interviews. A parent was also interviewed about each child or adolescent offspring.
Results: Twelve of the 50 interviewed offspring received a lifetime DSM-III-R diagnosis of an affective disorder. This included six cases of bipolar disorder, five cases of major depressive disorder, and one case of dysthymia. Eight of the offspring who received an affective disorder diagnosis also qualified for an anxiety disorder (four), a disruptive behavior disorder (two), or both (two). Offspring who had a parent with an affective disorder had a 5.1-fold higher risk for receiving an affective disorder diagnosis than did offspring with healthy parents.
Conclusions: In a consecutive series of families identified through a proband with bipolar disorder, there were significant increases in the prevalence of affective disorder diagnoses in the child and adolescent offspring. The distribution of illness in offspring was compatible with the presence of important genetic factors which contribute to early-onset affective illness.
abstract_id: PUBMED:9755355
Child and adolescent mood disorders--experience with serotonin-based therapies. By most measures, child and adolescent depression is much like and continuous with adult depression. Aggregating all available data, much of which is relatively new, it seems most likely that noradrenergic and mixed serotonergic/noradrenergic tricyclic antidepressants are ineffective in child and adolescent depression, whereas serotonergic antidepressants (e.g., selective serotonin reuptake inhibitors) are probably effective.
Answer: Adolescent depressive symptoms have been found to be strong predictors of adult depression, suggesting that they may be more indicative of an emerging mood disorder than of transient moodiness. Research has shown that subclinical depressive symptoms in adolescence, even when not meeting the criteria for major depression, predicted a two-fold to three-fold greater risk for an adult major depressive episode (PUBMED:9892310). Additionally, a systematic review found that adolescent depression increased the risk for adult depression, with depressed adolescents having 2.78 times (95% CI 1.97-3.93) the odds of depression in adulthood (PUBMED:29878410). These findings underscore the importance of early intervention in adolescent depression to potentially reduce the long-term burden of disease.
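As a rough illustration of how a pooled estimate such as the 2.78 (1.97-3.93) odds ratio is derived, the Python sketch below applies inverse-variance weighting with a DerSimonian-Laird random-effects adjustment to a few made-up study results. The per-study odds ratios and confidence intervals are hypothetical and do not reproduce the eleven cohorts actually pooled in the cited review; the same general logic, applied to those cohorts, underlies the pooled estimate quoted above.

import math

# Illustrative only: hypothetical per-study odds ratios with 95% CIs,
# not the eleven cohorts pooled in PUBMED:29878410.
studies = [(2.1, 1.3, 3.4), (3.5, 1.8, 6.8), (2.6, 1.5, 4.5)]  # (OR, CI lower, CI upper)

# Work on the log scale; derive each study's standard error from its CI width.
logs = [(math.log(o), (math.log(u) - math.log(l)) / (2 * 1.96)) for o, l, u in studies]

# Fixed-effect inverse-variance weights and pooled log-OR.
w = [1 / se ** 2 for _, se in logs]
fixed = sum(wi * y for wi, (y, _) in zip(w, logs)) / sum(w)

# DerSimonian-Laird estimate of the between-study variance (tau^2).
q = sum(wi * (y - fixed) ** 2 for wi, (y, _) in zip(w, logs))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# Random-effects weights, pooled odds ratio and its 95% CI.
w_re = [1 / (se ** 2 + tau2) for _, se in logs]
pooled = sum(wi * y for wi, (y, _) in zip(w_re, logs)) / sum(w_re)
se_pooled = math.sqrt(1 / sum(w_re))
print(f"pooled OR {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * se_pooled):.2f}-{math.exp(pooled + 1.96 * se_pooled):.2f})")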
Furthermore, the continuity of affective disturbances from childhood and adolescence into adult life has been demonstrated, with a study showing that depressed children were at an increased risk for affective disorder in adult life (PUBMED:2184797). This suggests that depressive symptoms during these developmental stages can have long-lasting effects on mental health.
In summary, the evidence points towards adolescent depressive symptoms being a significant predictor of adult depression, supporting the notion that these symptoms may often represent an underlying mood disorder rather than mere moodiness. This highlights the need for careful monitoring and potential early treatment of depression in adolescents to mitigate the risk of persistent depressive disorders into adulthood. |