id: 210950300 | source: s2orc/train | version: v2 | added: 2020-01-23T09:07:56.631Z | created: 2020-01-06T00:00:00.000Z
Performance of F-18 Fluorocholine PET/CT for Detection of Hyperfunctioning Parathyroid Tissue in Patients with Elevated Parathyroid Hormone Levels and Negative or Discrepant Results in Conventional Imaging
Objective Our aim was to assess the diagnostic performance of F-18 fluorocholine (FCH) positron emission tomography/computed tomography (PET/CT) in detecting hyperfunctioning parathyroid tissue (HPT) in patients with elevated parathyroid hormone levels with negative or inconclusive conventional imaging results and to compare the findings with those obtained using technetium-99m sestamibi (MIBI) scintigraphy and neck ultrasonography (US).
Materials and Methods Images of 105 patients with hyperparathyroidism who underwent FCH PET/CT, dual-phase MIBI parathyroid scintigraphy (median interval: 42 days), and neck US were retrospectively analyzed. The gold standard was histopathological findings for 81 patients who underwent parathyroidectomy and clinical follow-up findings in the remaining 24 patients. Sensitivities, positive predictive values (PPVs), and accuracies were calculated for all imaging modalities.
Results Among the 81 patients who underwent parathyroidectomy, parathyroid adenoma (n = 64), hyperplasia (n = 9), neoplasia (n = 4), or both parathyroid adenoma and hyperplasia (n = 1) was detected, except in 3 patients in whom no HPT was found. Of the 24 (23%) patients who were followed up without operation, 22 (92%) showed persistent hyperparathyroidism. FCH PET/CT showed significantly higher sensitivity than MIBI scintigraphy and US in detection of HPT (p < 0.01). Sensitivity, PPV, and accuracy of FCH PET/CT were 94.1% (95/101), 97.9% (95/97), and 92.4% (97/105), respectively. The corresponding values for MIBI scintigraphy and US were 45.1% (46/102), 97.9% (46/47), and 45.7% (48/105) and 44.1% (45/102), 93.8% (45/48), and 42.9% (45/105), respectively. Among the 35 patients showing negative MIBI scintigraphy and neck US findings, 30 (86%) showed positive results on FCH PET/CT. FCH PET/CT could demonstrate ectopic locations of HPT in 11 patients, whereas MIBI and US showed positive findings in only 6 and 3 patients, respectively.
Conclusion FCH PET/CT is an effective imaging modality for detection of HPT with the highest sensitivity among the available imaging techniques. Therefore, FCH PET/CT can be recommended especially for patients who show negative or inconclusive results on conventional imaging.
INTRODUCTION In patients with hyperparathyroidism, preoperative localization of hyperfunctioning parathyroid tissue (HPT) using noninvasive imaging techniques has become a mainstay in the management of the disease because it facilitates targeted surgery such as minimally invasive parathyroidectomy (1). Although the combination of neck ultrasound (US) and technetium-99m (Tc-99m) methoxyisobutylisonitrile [sestamibi (MIBI)] dual-phase scintigraphy is the conventional imaging workup to localize HPT, it may fail to identify abnormal hyperfunctioning glands in up to 30% of cases (2). Four-dimensional computed tomography (4D-CT) and dynamic magnetic resonance imaging (MRI) have emerged as second-line imaging modalities; however, there is currently insufficient evidence to recommend their routine use in localizing HPT (3,4). Furthermore, a high radiation burden is another concern associated with the use of 4D-CT (5).
Following the incidental discovery of fluorine-18 (F-18)-labeled fluorocholine (FCH) accumulation in parathyroid adenoma (6), several recent clinical studies showed that FCH positron emission tomography/computed tomography (PET/CT) is a promising modality for detection of HPT, even in patients with multi-glandular disease (7)(8)(9)(10)(11)(12). However, the available data in the literature are still limited. Therefore, we designed this study to assess the diagnostic performance of FCH PET/CT for detection of HPT in patients with elevated parathyroid hormone (PTH) levels who showed either negative or discrepant results in conventional imaging. F-18 FCH synthesis was performed using a fully automated radiochemistry synthesis device (Trasis All-in-One; Ans, Belgium) according to the labeling procedure described by Kryza et al. (13). Quality control tests were performed following the specifications described in the European Pharmacopoeia. The total time for the entire synthesis procedure, including quality control tests, was 80-90 minutes.
Imaging Acquisition PET/CT was performed after injecting a mean of 325.1 ± 86.7 MBq (8.8 ± 2.3 mCi) F-18 FCH intravenously. Dual-phase PET/CT images were acquired using PET/CT scanners (GE Discovery 710; Waukesha, WI, USA or Siemens Biograph 6; Knoxville, TN, USA) with an initial scan at 15 minutes post-injection (range: 12-17 minutes) covering the neck and upper chest and a late scan at 45 minutes post-injection (range: 36-60 minutes) covering the whole body (vertex to mid-thigh) or the cervico-thoracic region. The CT topogram guide, a low-dose (49-76 mAs) CT transmission scan for attenuation correction, and a PET emission scan at 3 minutes per bed position were acquired. Sixty-five patients (62%) underwent dual-phase MIBI scintigraphy at our institution. After injection of 740 ± 74 MBq (20 ± 2 mCi) Tc-99m MIBI, regional static images of the cervico-thoracic region were acquired at 15 and 90 minutes, and single-photon emission computed tomography/computed tomography (SPECT/CT) scans of the head, neck, and thorax were acquired at 60 minutes using an integrated SPECT/CT camera (Siemens Symbia T16; Hoffman Estates, IL, USA). For the remaining 40 patients (38%), proper dual-phase MIBI scintigraphy with SPECT or SPECT/CT was performed at the referral centers. The median interval between FCH PET/CT and parathyroid MIBI scintigraphy was 42 days (range: 2-291 days). In all patients, at least one neck US examination was performed by a radiologist using a high-frequency linear probe (5-12 MHz) with a median interval of 30 days (range: 1-232 days).
Image Analysis All PET/CT images were read separately by two experienced nuclear medicine physicians who were aware of the patients' clinical diagnosis and previous imaging and laboratory data. From the skull base to the lower part of the mediastinum, any focal FCH uptake discernible from the background activity that was not related to thyroid tissue or a lymph node seen on CT slices was considered positive for HPT. All lesions involving an initial disagreement between the readers were re-evaluated together by the readers to reach a final consensus. Additionally, maximum standardized uptake values (SUVmax) of all discernible suspicious lesions and the mean SUV (SUVmean) of the thyroid gland were calculated for both initial and late FCH PET/CT images. All MIBI scans that were reported by different attending physicians were also re-evaluated by a nuclear medicine physician.
Any focal MIBI uptake above the background activity that was not related to the thyroid tissue in the neck and upper mediastinum on the initial scan and/or any remaining focus inside the thyroid bed on the late scan other than a thyroid nodule was considered positive for parathyroid adenoma. Any focus with initial MIBI uptake and a wash-out pattern on late imaging was recorded as a suspicious lesion for parathyroid adenoma.
Parathyroidectomy and Histopathology Eighty-one out of 105 (77%) patients underwent parathyroidectomy, of whom 49 (60%) underwent minimally invasive parathyroidectomy. Of these 81 patients, 53 (65%) were operated on at our institute by an experienced endocrine surgeon, while the remaining 28 (35%) were operated on at different centers. Pathological results were obtained and recorded. Patients who were not operated on due to negative imaging results or because of unwillingness to undergo the surgery or a contraindication for surgery were followed up, and their serum calcium and PTH levels were determined at 3-6-month intervals for at least 12 months.
Statistical Analysis Statistical analysis was performed using SPSS software version 21.0 (IBM Corp., Armonk, NY, USA), and the level of significance was set at a p value < 0.05. Sensitivity, positive predictive value (PPV), and accuracy were calculated for all three imaging modalities, counting all suspicious imaging results as positive. Specificity and negative predictive value were not calculated due to the bias in selecting patients with discordant or equivocal results in conventional imaging. Postoperative histopathological results or persistently elevated serum PTH and calcium levels in non-operated patients were considered the reference standards. McNemar's test was used to compare the sensitivity of the imaging modalities. The Wilcoxon signed-rank test was used for comparison of early and delayed SUVs. Pearson's correlation coefficient was used for assessment of the correlation between early and delayed SUVs and between the SUVs and serum PTH levels.
Of the 81 patients who underwent parathyroidectomy, parathyroid adenoma was diagnosed in 64 (79%), parathyroid hyperplasia in 9 (11%), parathyroid neoplasia in 4 (5%), and both adenoma and hyperplasia in 1 (1%) patient. In the remaining 3 (4%) patients, no parathyroid tissue could be found on histopathological analysis. Among these, 1 patient showed a negative preoperative FCH PET/CT scan and the follow-up PTH levels decreased to the normal range, while in the remaining 2 patients with positive preoperative FCH PET/CT scans, the raised PTH levels persisted during follow-up. Of the 24 (23%) patients who did not undergo an operation, 22 (92%) had persistently elevated PTH levels during follow-up and were also considered to have proven HPT. Sensitivity, PPV, and accuracy of FCH PET/CT in the detection of HPT were 94.1%, 97.9%, and 92.4%, respectively. The corresponding values for MIBI and US were 45.1%, 97.9%, and 45.7% and 44.1%, 93.8%, and 42.9%, respectively. The difference between FCH PET/CT and the other imaging modalities was statistically significant (p < 0.001) (Table 1). When FCH PET/CT was compared with MIBI scintigraphy, 53 patients showed concordant results (46 positive; 7 negative) and 52 showed discordant results (Fig. 2A, Table 2). In all 46 patients with concordantly positive results, HPT was proven by either histopathological examination (38 patients) or clinical follow-up (8 patients). Among the 7 patients with concordantly negative results, follow-up PTH levels decreased to the normal range in only 1 patient.
However, 3 patients showed proven HPT in parathyroidectomy specimens and the remaining 3 showed persistently elevated PTH levels during follow-up. Among the 51 patients with FCH-PET/CT-positive and MIBI-negative results, 38 underwent parathyroidectomy, revealing adenoma in 32, parathyroid hyperplasia in 3, and parathyroid neoplasia in 2 patients. One patient was operated on but showed no parathyroid tissue in the postoperative histopathological assessment, and the elevated PTH levels persisted during follow-up. Of the remaining 13 patients who did not undergo surgery, 12 had persistently elevated PTH levels (median PTH: 133.5 pg/mL) and one had normalized PTH levels during follow-up (47 pg/mL). One patient with negative FCH PET/CT and positive MIBI scans underwent parathyroid surgery, but postoperative histopathological assessment revealed no parathyroid tissue and showed benign thyroid nodules instead.
Comparison of FCH PET/CT with Neck US Examination When FCH PET/CT was compared with neck US, 50 patients showed concordant results (45 positive; 5 negative) and 55 showed discordant results (52 with positive FCH and negative US findings; 3 with negative FCH and positive US findings) (Fig. 2B, Table 3). Among the 45 patients with concordantly positive results, HPT was proven in all except one by either pathological evaluation (35 patients) or clinical follow-up (9 patients). On the other hand, 5 patients with concordantly negative results had proven HPT. Fifty-two patients showed positive FCH PET/CT findings despite showing negative US results. Of these, 42 were operated on and HPT was found in 40 (adenoma in 33, hyperplasia in 6, and neoplasia in one), while no parathyroid tissue was found in 2 patients, who nevertheless showed persistently elevated PTH levels during follow-up; one of these patients showed a positive focus in the mediastinum on FCH PET/CT. Among the 10 patients with positive FCH PET/CT and negative US findings who did not undergo surgery, all had persistently elevated PTH levels during follow-up. Of the 3 patients with negative FCH PET/CT and positive US findings, 2 patients underwent surgery, which revealed parathyroid adenoma in one. However, no parathyroid tissue was detected in the other patient on histopathological examination. Among the remaining 5 patients with concordantly negative results, 2 were operated on and showed HPT, whereas the remaining 3 did not undergo surgery and showed persistently elevated PTH levels during follow-up. Of the 12 patients showing clinical findings of recurrent hyperparathyroidism, 11 showed positive FCH PET/CT findings, and HPT was confirmed by histopathological examination in 10 of these patients. Among these 10 patients, 5 showed positive findings with FCH PET/CT despite showing negative MIBI and US findings. MIBI yielded true-positive findings in 5 patients while US showed true-positive findings in 4 patients in this subgroup. There were 4 cases with parathyroid neoplasia in our series. In all of them, FCH PET/CT yielded positive results, with higher early SUVmax (10.8 ± 5.0 vs. 7.1 ± 3.5) and late SUVmax (11.8 ± 5.8 vs. 7.3 ± 3.6) compared to the mean values for the whole patient group. There was no finding suggestive of local or distant metastasis on whole-body FCH PET/CT in this particular subgroup. MIBI showed 2 true-positive cases and US showed 3 true-positive cases in the neoplasia group.
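The per-modality comparison summarized above rests on the diagnostic indices and McNemar's test described in the Statistical Analysis section. The following is a minimal Python sketch of that kind of calculation, not the authors' SPSS workflow; all counts in it are hypothetical placeholders rather than study data.

```python
# Minimal sketch (not the authors' SPSS analysis): diagnostic indices and
# McNemar's test for two modalities read on the same patients.
# All counts below are hypothetical placeholders, not study data.
from scipy.stats import chi2

def diagnostic_indices(tp, fp, tn, fn):
    """Sensitivity, positive predictive value, and accuracy."""
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, ppv, accuracy

def mcnemar_test(b, c):
    """McNemar's chi-square with continuity correction.
    b = lesions detected only by modality A, c = only by modality B."""
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    p_value = chi2.sf(stat, df=1)
    return stat, p_value

# Hypothetical paired reads of modality A vs. modality B
print(diagnostic_indices(tp=90, fp=3, tn=5, fn=7))
print(mcnemar_test(b=45, c=2))   # discordant pairs only
```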
Two patients with chronic renal failure were included as possible cases of tertiary hyperparathyroidism, since both FCH PET/CT and US demonstrated a single parathyroid adenoma. One of these cases also showed brown tumors that were identified by both FCH PET/CT and MIBI (Fig. 5).
Comparison of Early and Delayed FCH PET/CT Imaging A total of 97 patients showed positive findings with FCH PET/CT. There was no statistically significant difference between the SUVmax of parathyroid lesions on early (mean SUVmax: 7.1 ± 3.5, median SUVmax: 6.1, range: 1.9-19.4) and late imaging (mean SUVmax: 7.3 ± 3.6, median SUVmax: 6.4, range: 2.0-22.4) (p = 0.190). However, significant differences were found in the parathyroid SUVmax/thyroid SUVmean ratio between early (mean ratio: 2.5 ± 1.5, range: 0.8-7.8) and delayed imaging (mean ratio: 2.9 ± 1.5, range: 1.0-10.0) (p < 0.001) among patients without any previous thyroidectomy operation. Strong positive correlations were observed between early and delayed SUVmax of parathyroid lesions (r = 0.898), as well as for parathyroid/thyroid SUV ratios (r = 0.896). On visual analysis, 12 lesions were more prominent on early imaging, whereas 10 were more prominent on delayed imaging. For the remaining lesions, the results were comparable on early and delayed imaging.
DISCUSSION Following the incidental detection of FCH uptake in parathyroid adenoma, several clinical studies have been published on the role of FCH PET/CT in hyperparathyroidism, and they have shown better diagnostic performance of FCH PET/CT in comparison with other conventional imaging methods, with higher sensitivity and equally high specificity (7)(8)(9)(10)(11)(12)(14)(15)(16)(17). A recent meta-analysis evaluating the diagnostic performance of FCH PET/CT in hyperparathyroidism revealed that the sensitivity and specificity of FCH PET/CT were 90% and 94%, respectively (18), and another study also reported excellent sensitivity, PPV, and detection rate of 95%, 97%, and 91%, respectively (19). Furthermore, PET/MRI was reported to have better diagnostic performance than MRI alone in 10 patients with primary HPT (20). Our study also demonstrated the encouraging performance of FCH PET/CT in the detection of HPT. Because our study population included a difficult series of selected patients with negative or discrepant US and MIBI results, the sensitivities of both US and MIBI were lower than those reported in the literature (2, 21-23). Even in this difficult population, FCH PET/CT presented impressive results with high sensitivity, PPV, and accuracy of 94.1%, 97.9%, and 92.4%, respectively, whereas the corresponding values for MIBI and US were considerably lower. Even though our study included 10 histopathologically proven recurrent HPT cases, FCH PET/CT was able to localize all lesions, providing excellent sensitivity (100%) in comparison with MIBI (5 true-positive results) and US (4 true-positive results). FCH PET/CT could also identify all 4 cases with parathyroid neoplasia, whereas both MIBI and US showed positive findings in only 2 cases. Furthermore, FCH PET/CT was superior in demonstrating ectopic localization of HPT in comparison with US and MIBI. Similar to MIBI scintigraphy, FCH PET/CT could identify brown tumors, which are bone lesions that arise as a result of increased osteoclastic activity in hyperparathyroidism and can mimic bone metastasis (24)(25)(26); this was also documented previously by Taywade et al. (27).
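The paired early-versus-delayed SUV comparison above uses a Wilcoxon signed-rank test and Pearson correlation. A minimal Python/scipy sketch of such a comparison is shown below on simulated SUV values; these are illustrative placeholders, not the study measurements, and the study itself used SPSS.

```python
# Minimal sketch of a paired early-vs-delayed SUVmax comparison.
# Simulated values only; not the study data.
import numpy as np
from scipy.stats import wilcoxon, pearsonr

rng = np.random.default_rng(0)
early_suvmax = rng.normal(7.0, 3.0, size=97).clip(min=1.5)   # SUVmax at ~15 min
late_suvmax = early_suvmax + rng.normal(0.2, 1.0, size=97)   # SUVmax at ~45-60 min

stat, p_paired = wilcoxon(early_suvmax, late_suvmax)  # paired, non-parametric
r, p_corr = pearsonr(early_suvmax, late_suvmax)       # linear correlation

print(f"Wilcoxon signed-rank: p = {p_paired:.3f}")
print(f"Pearson r = {r:.3f} (p = {p_corr:.3g})")
```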
We believe that the superior spatial resolution of PET technology over SPECT is one of the important factors underlying the better diagnostic performance of FCH PET/CT. Indeed, in this study, MIBI could reveal only 5 (55.6%) of 9 cases of parathyroid hyperplasia, while FCH PET/CT demonstrated the lesions in 7 (78%) cases. In addition, the differences in the molecular properties and uptake mechanisms of FCH and MIBI may be responsible for the superiority of FCH PET/CT; therefore, the molecular basis of FCH uptake in HPT should be further elucidated. MIBI is a lipophilic cationic isonitrile derivative that accumulates in mitochondria-rich oxyphilic cells (28). P-glycoprotein may also be responsible for MIBI uptake (29). In contrast, increased FCH uptake in HPT may be related to accelerated phosphatidylcholine turnover or upregulation of phospholipid-dependent choline kinase activity (30)(31)(32). FCH uptake in benign secreting tumors such as parathyroid adenomas was proposed to be related to cholinergic autocrine loop upregulation and increased expression of choline transporters rather than the membrane proliferation rate (33). In contrast to studies showing that MIBI is preferentially accumulated by oxyphilic cells, there is no study yet demonstrating the type of parathyroid cells that preferentially accumulates FCH. If FCH were capable of accumulating in chief cells rather than only in oxyphilic cells, it might have a molecular advantage over MIBI in detecting HPT. Further research is needed to elucidate the exact biological processes underlying FCH uptake in HPT. Another advantage of FCH PET/CT would be the shorter imaging time. Early imaging within the first 15 minutes appeared sufficient to reveal the majority of the lesions in this study. However, among the proven HPT cases (n = 102), we found an SUVmax increase of more than 5% (up to 37%) in 46 (45%) parathyroid lesions and an SUVmax decrease of more than 5% (up to 54%) in 33 (32%) parathyroid lesions on late scans. These findings suggest that single time-point imaging, either early (15 minutes) or late (45-60 minutes), may carry a potential risk of missing lesions. Nevertheless, we did not see any lesion that was identified on early imaging and completely disappeared on late scans, or vice versa. Lezaic et al. (7) reported better lesion contrast at 60-minute imaging compared to the 5-minute scan. We also demonstrated a significantly higher parathyroid-SUVmax/thyroid-SUVmean ratio at 45 minutes compared to that obtained on early imaging. Given the higher cost of FCH PET/CT in comparison with MIBI, its use as a second-line imaging tool for difficult cases with negative or discordant conventional imaging findings seems to be more feasible. Another potential drawback of FCH PET/CT is that FCH is not a specific tracer for parathyroid tissue. Therefore, physiological uptake of FCH in thyroid parenchyma and preferential accumulation in well-differentiated thyroid cancers and metastatic or inflammatory lymph nodes may occur. In our study, there were 2 cases with papillary thyroid cancer with increased FCH uptake in which the parathyroid lesions could be differentiated accurately by considering their location. In another case, we considered an FCH-positive lesion in the neck to be enlarged parathyroid tissue, but histopathological examination did not confirm any parathyroid tissue and indicated an inflammatory lymph node instead.
Although we do not recommend routine intravenous contrast medium administration for FCH PET/CT imaging, its use may further improve the results in selected cases with inconclusive findings. This study has several limitations. First, there was a bias in selecting the patient population, which mainly included patients with negative or discrepant conventional imaging findings, leading to reduced sensitivity values for both US and MIBI in comparison with those reported in previous studies. Second, more than half of the US examinations and 38% (40/105) of MIBI examinations were performed at different referral centers and reported by various physicians, which may also have introduced considerable imaging variability and bias. Finally, 24 out of 105 patients (23%) were not operated on after imaging. Instead, follow-up serum PTH and serum calcium level measurements were used as reference standards, which might also have introduced bias. However, the sensitivity, PPV, and accuracy in the histopathology-proven group were similar to those of the entire study population. In conclusion, FCH PET/CT is an effective imaging modality for localization of HPT with the highest sensitivity among the available imaging techniques. Therefore, FCH PET/CT may play a key role as a problem-solving imaging modality in difficult cases such as those showing recurrent hyperparathyroidism and ectopic localizations of HPT.
id: 248900150 | source: s2orc/train | version: v2 | added: 2022-05-20T15:19:58.844Z | created: 2022-05-01T00:00:00.000Z
A Rare Case of Pancreatic Cancer: Undifferentiated Carcinoma of the Pancreas With Osteoclast-Like Giant Cells Ductal adenocarcinoma of the pancreas is the most common pancreatic cancer, but undifferentiated carcinoma of the pancreas with osteoclast-like giant cells (UC-OGCs) is an exceedingly rare tumor. Microscopically, this tumor is characterized by the presence of two different cellular elements, namely, spindle or ovoid mononuclear cells and osteoclast-like giant cells (OGCs). Here, we report a rare case of UC-OGCs in a 79-year-old male with a one-month history of epigastric abdominal pain and unintentional weight loss. A blood workup revealed new-onset type 2 diabetes mellitus, and a computed tomography scan of the abdomen showed acute pancreatitis with a hypodense lesion in the head of the pancreas concerning for malignancy. He underwent an endoscopic ultrasound that also revealed a mass in the head of the pancreas, but no lymphadenopathy was observed. Biopsy was obtained and histopathology revealed UC-OGCs. We present this case to increase awareness of this rare clinical entity in patients presenting with acute-onset pancreatitis. Introduction Pancreatic cancer has emerged as the seventh most common cause of cancer-related death worldwide. Even though pancreatic ductal adenocarcinoma is the most common pancreatic cancer, undifferentiated carcinoma of the pancreas with osteoclast-like giant cells (UC-OGCs) is an exceedingly rare exocrine tumor, accounting for less than 1% of all pancreatic malignancies [1]. Microscopically, this tumor is characterized by the presence of two different cellular elements, namely, spindle or ovoid mononuclear cells and osteoclastlike giant cells (OGCs) [2]. Case Presentation A 79-year-old male initially presented to his primary care provider for evaluation of month-long epigastric pain and unintentional weight loss of 10 lbs. His medical history was significant for coronary artery disease, hypertension, and atrial fibrillation. His vital signs were not significant except for an irregularly irregular pulse. Physical examination was normal except for epigastric tenderness. His blood work showed glycated hemoglobin (HbA1C) of 8.3% (normal: 5.7-6.4%), blood glucose of 383 mg/dL (normal: 40 mg/dL or below), bilirubin of 1.2 mg/dL (normal: 0.3-1.2 mg/dL), aspartate aminotransferase (AST) of 25 U/L (normal: 0-35 U/L), alanine aminotransferase (ALT) of 19 U/L (normal: 0-35 units/L), lipase of 271 U/L (normal: 0-95 U/L), amylase of 100 U/L (range: 0-130 U/L), blood urea nitrogen (BUN) of 34 mg/dL (normal: 8-20 mg/dL), creatinine of 1.6 mg/dL (normal: 0.7-1.3 mg/dL), white blood cell count (WBC) of 7.9 k/µL (normal: 4,000-10,000/µL), hemoglobin of 10.5 g/dL (normal: 14.0-17.0 g/dL), and platelet count of 169,000 k/µL (normal: 150,000-350,000/µL). Cancer antigen 19-9 was 534 U/mL (normal: 0-37 U/mL). The patient underwent computed tomography (CT) scan of the abdomen and pelvis with intravenous and oral contrast which was suggestive of a 2 cm hypodense lesion in the head of the pancreas (Figure 1). Given these findings, a plan was made to proceed with endoscopic ultrasonography (EUS) with biopsy which revealed a complex cystic mass in the head of the pancreas with infiltration into the second part of the duodenum. Biopsies from the cystic mass were obtained. FIGURE 1: Hypodense lesion in the head of the pancreas. Microscopy was suggestive of groups of abnormal spindled and epithelioid cells intermixed with numerous multinucleated giant cells. 
Some of the giant cells appeared bland (osteoclast-like) (Figure 2), and others exhibited highly pleomorphic, bizarre nuclei. Rare abnormal gland-like formations and focal necrosis were also seen (Figure 3). On immunohistochemical stains, most lesion cells were strongly positive for vimentin. Pan-cytokeratin stain also showed the rare glandular elements as well as faint focal staining of the spindled tumor cells. Stain for CD68 highlighted the giant cells and the intermixed population of histiocyte-like sarcomatous carcinoma cells. Among the non-giant cell population, Ki67 stain demonstrated a proliferation index of approximately 30% (Figure 4). All these findings were typical for UC-OGCs. For further management, our patient was referred to oncology. He was offered neoadjuvant chemotherapy with gemcitabine and paclitaxel, but he preferred to proceed with surgical resection first. He was lost to follow-up.
Discussion Pancreatic cancer represents the second most common gastrointestinal malignancy, with adenocarcinoma being the most common subtype [3]. Undifferentiated carcinomas are an exceedingly rare subtype of pancreatic cancer, with an incidence of less than 1% recorded in the literature [3][4][5]. In 2000, the World Health Organization (WHO) classified undifferentiated pancreatic tumors into two types, undifferentiated carcinoma of the pancreas (UDC) and UC-OGCs [6]. Previously, UC-OGCs had been classified into three different subtypes, namely, osteoclastic, pleomorphic, and mixed; however, in 2010, the WHO grouped them as one entity under the term UC-OGCs [4]. From previous reports, UC-OGC tends to occur more commonly in elderly women, with a mean age at presentation of approximately 63 years. The most common clinical presentations of UC-OGC are similar to those of other pancreatic tumors, including upper abdominal pain and weight loss [9]. Our patient presented with epigastric pain and significant weight loss. Other less common clinical manifestations include loss of appetite, steatorrhea, nausea, jaundice, and anemia [10]. Though UC-OGCs of the pancreas are mostly found in the body or tail, our patient was found to have a lesion in the head of the pancreas on a CT scan, which was later confirmed with an ultrasound-guided biopsy. The presence of non-neoplastic OGCs on microscopy is the hallmark of UC-OGC [11]. UC-OGC can be either pure or associated with other more common pancreatic tumors such as pancreatic ductal adenocarcinoma and mucinous cystic neoplasm. The origin of OGCs in UC-OGC is not well understood. They were thought to originate from mononuclear histiocytes/macrophages because of their nuclear features, expression of CD68 and vimentin, and lack of reactivity to cytokeratin. Their migration was thought to be due to chemotactic factors produced by the cancerous cells. Areas of necrosis, calcifications, and osteoid bone formation can be observed as well [7]. On immunohistochemical stains, mononuclear neoplastic cells are usually positive for vimentin, keratin, and antibodies to p53. However, OGCs are negative for keratin and p53 antibodies but are positive for CD68, vimentin, and leukocyte common antigen. Our case was confirmed as UC-OGC by typical microscopy appearance, positive CD68, and a 30% proliferation index among neoplastic cells. Treatment guidelines are limited, and most of the information is obtained from isolated case reports/series. Surgery is usually the first-line treatment, but outcomes are often poor because of early recurrence and mortality.
The role of radiation and neoadjuvant chemotherapy is extremely limited owing to the rarity of the tumor. There is limited evidence for the use of cisplatin and gemcitabine, based on the epithelial origin of the tumor and reports of a favorable response. Our patient was referred to a higher center for surgical resection and was subsequently lost to follow-up [12][13][14]. The prognosis of UC-OGC was found to vary widely, with the time from diagnosis to death ranging from four months to ten years in a study by Togawa et al. [15]. They observed a prolonged survival period associated with surgical resection, with an average survival time of 19.6 months (about one and a half years); however, the unoperated group had an average survival time of 6.5 months. Even though not everyone is amenable to surgery, the above findings suggest better survival rates with resection. Other studies have reported poor prognosis among UC-OGC patients with the presence of K-ras oncogene mutations, p53 mutation, and loss of E-cadherin [16,17]. A meta-analysis by Kobayashi et al. comparing short-term and long-term survivors of UC-OGC who underwent surgical resection demonstrated that short-term survivors tended to be elderly males with smaller tumors and positive lymph node metastasis with a concomitant component of ductal adenocarcinoma.
Conclusions Pancreatic osteoclast-like giant cell tumor is an extremely uncommon and complex type of pancreatic cancer with unique characteristics and histopathology. Currently, surgery is the first-line treatment, but the role of radiotherapy and adjuvant/neoadjuvant chemotherapy is not well elucidated. Performing randomized trials is not feasible due to the rarity of the tumor type; hence, maintaining an international registry might help to provide more information to devise potential treatment strategies for this tumor type.
Additional Information Disclosures Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
id: 124643970 | source: s2orc/train | version: v2 | added: 2019-04-21T13:06:52.022Z | created: 2012-09-01T00:00:00.000Z
Dynamics of a dengue fever transmission model with crowding effect in human population and spatial variation
Dengue fever is a virus-caused disease occurring throughout the world. Because of the high infection rate of dengue fever and the high death rate of its severe form, dengue hemorrhagic fever, controlling the spread of the disease is an important issue in public health. In an effort to understand the dynamics of the spread of the disease, Esteva and Vargas [2] proposed an SIR vs. SI epidemiological model without crowding effect and spatial heterogeneity. They found a threshold parameter $R_0$: if $R_0<1$, then the disease will die out; if $R_0>1$, then the disease will always exist. To investigate how spatial heterogeneity and the crowding effect influence the dynamics of the spread of the disease, we modify the autonomous system provided in [2] to obtain a reaction-diffusion system. We first define the basic reproduction number in an abstract way and then employ the comparison theorem and the theory of uniform persistence to study the global dynamics of the modified system. Basically, we show that the basic reproduction number is a threshold parameter that predicts whether the disease will die out or persist. Further, we demonstrate the basic reproduction number in an explicit way and construct suitable Lyapunov functionals to determine the global stability for the special case where the coefficients are all constant. (Communicated by Xiaoqiang Zhao)
1. Introduction. Dengue fever is an arbovirus disease in the tropical regions of the world, and temporal or sporadic in the subtropical and temperate regions. The symptoms of dengue fever include fever, headache, and muscle and joint pains. More seriously, blood plasma leakage or dengue shock syndrome may occur, potentially leading to death. Dengue disease is transmitted to humans by the bite of Aedes mosquitoes. Four serotypes (I-IV) have been identified. Infection by any single type of virus usually gives lifelong immunity to that type, but only short-term immunity to the other serotypes ([25]). The mosquitoes never recover from the infection, and their infective period ends with their death ([3]).
Because of the high infection rate of dengue fever and the high death rate of its severe form, dengue hemorrhagic fever, the control of the spread of the disease is always an important issue in public health. It is known that the rate of vertical transmission in the main vector of dengue (A. aegypti) is relatively low ([10,18]). In an effort to understand the dynamics of the spread of the disease, Esteva and Vargas [2] proposed an SIR vs. SI epidemiological model. Basically, they studied the mechanisms that allow the invasion and persistence of a serotype of dengue in a region. Their mathematical model for the dynamics of dengue disease contains only one type of virus and ignores the disease-related death rate. In the following, we briefly review the model proposed in [2]. Let $S_H$, $I_H$, and $R_H$ denote the numbers of the susceptible, infectious, and immune classes in the human population, and let $S_V$, $I_V$ denote the numbers of the susceptible and infectious classes in the mosquito population. Thus, $N_H := S_H + I_H + R_H$ and $N_V := S_V + I_V$ represent the population sizes of humans and mosquitoes, respectively. The constants $\mu_b$, $\mu_d$, and $\gamma_H$ represent the birth, death, and recovery rates of the human species; $A$ and $\mu_V$ denote the recruitment rate and the per capita mortality rate of mosquitoes, respectively. For each species, the flow from the susceptible class into the infectious class depends on the biting rate of the mosquitoes and the transmission probabilities, together with the numbers of infective and susceptible individuals of each species. The biting rate $b$ of mosquitoes is the average number of bites per mosquito per day. Mosquitoes bite not only humans but also pets. Thus, we assume $m$ is the number of alternative hosts available as blood sources. Then the probability that a mosquito chooses a human individual as a host is given by $\frac{N_H}{N_H+m}$. Thus a human receives $b\,\frac{N_V}{N_H}\,\frac{N_H}{N_H+m}$ bites per unit of time, and a mosquito takes $b\,\frac{N_H}{N_H+m}$ human blood meals per unit of time. The force of infection for the human population is given by $b\,\beta_H\,\frac{I_V}{N_H+m}$, while the force of infection for the vector population is given by $b\,\beta_V\,\frac{I_H}{N_H+m}$, where $\beta_H$ is the transmission probability from infectious mosquitoes to susceptible humans and $\beta_V$ is the transmission probability from infectious humans to susceptible mosquitoes. Then we get a system (1) which is closely related to the one in [2] (a sketch of its likely form is given after this paragraph). We note that system (1) coincides with the one in [2] if we assume $\mu_b = \mu_d$. Esteva and Vargas [2] employed results from the theory of competitive systems to determine the global dynamics of (1) under the assumption $\mu_b = \mu_d$. More precisely, they found a threshold parameter $R_0$: if $R_0 < 1$, then the disease-free equilibrium is globally stable, or equivalently, the disease will die out; if $R_0 > 1$, then the unique endemic equilibrium is globally stable, which means that the disease will always exist. In this paper, we shall modify the standard model (1) to incorporate the crowding effect and species movements in spatially heterogeneous environments. Let $\Omega$ be a spatial habitat with smooth boundary $\partial\Omega$. We consider a closed environment in the sense that the fluxes for each of these subpopulations are zero. Corresponding to this, we impose Neumann boundary conditions on the equations on the boundary.
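The display equations for system (1) did not survive extraction. Based on the rates defined above and on the standard Esteva-Vargas SIR vs. SI structure that the text says it is closely related to, a plausible reconstruction is the following; the exact form in the source may differ slightly.

```latex
% Hedged reconstruction of system (1); assumes the standard SIR-SI structure
% with the rates defined in the text, not copied verbatim from the source.
\begin{aligned}
\frac{dS_H}{dt} &= \mu_b N_H - \frac{b\beta_H}{N_H+m}\, S_H I_V - \mu_d S_H,\\
\frac{dI_H}{dt} &= \frac{b\beta_H}{N_H+m}\, S_H I_V - (\mu_d + \gamma_H)\, I_H,\\
\frac{dR_H}{dt} &= \gamma_H I_H - \mu_d R_H,\\
\frac{dS_V}{dt} &= A - \frac{b\beta_V}{N_H+m}\, S_V I_H - \mu_V S_V,\\
\frac{dI_V}{dt} &= \frac{b\beta_V}{N_H+m}\, S_V I_H - \mu_V I_V.
\end{aligned}
\tag{1}
```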
Finally, the crowding effect terms (see, e.g., [8]) in the susceptible class, the infectious class, and the immune class in the human population are respectively described by $-c(x)S_H N_H$, $-c(x)I_H N_H$, and $-c(x)R_H N_H$. With all these assumptions, the disease dynamics can be described by the resulting system (2) of reaction-diffusion equations. Here, the spatially dependent functions $A(x)$, $b(x)$, $c(x)$, $m(x)$, $\beta_H(x)$, $\beta_V(x)$ are assumed to be positive; $\Delta$ is the usual Laplacian operator; and $d_H > 0$, $d_V > 0$ denote the diffusion coefficients for humans and mosquitoes, respectively. Notice that system (2) reduces to (1) if all the coefficient functions are constants and the crowding and diffusion terms are absent. The organization of this paper is as follows. In section 2, we first study the model (2) in a spatially variable habitat. By the theory of monotone dynamical systems and uniform persistence, we determine a threshold number that predicts disease persistence or extinction. In section 3, we consider the model (2) where all the coefficients are habitat independent (i.e., positive constants). We are able to construct an appropriate Lyapunov functional to discuss the global attractiveness of the steady-state solutions. Finally, a brief discussion is given in section 4.
2. The heterogeneous model. This section is devoted to the study of the dynamics of system (2). Before deriving the limiting system for (2), we first consider the scalar reaction-diffusion equation $\partial_t w = d\Delta w + g(x) - D(x)w$ with Neumann boundary conditions, where $d > 0$ and $D(x)$ and $g(x)$ are continuous and positive functions on $\bar\Omega$. Then we have the following results. Since $N_H = S_H + I_H + R_H$ and $N_V = S_V + I_V$, it follows from (2) that $N_H$ and $N_V$ satisfy, respectively,
$$\partial_t N_H = d_H \Delta N_H + (\mu_b - \mu_d)N_H - c(x)N_H^2, \qquad (4)$$
$$\partial_t N_V = d_V \Delta N_V + A(x) - \mu_V N_V. \qquad (5)$$
System (4) is a logistic equation, and it is well known that the reaction-diffusion equation (4) admits a unique positive steady state $K(x)$ such that $\lim_{t\to\infty} N_H(x,t) = K(x)$ uniformly for $x \in \bar\Omega$, for all solutions with nonnegative and nonzero initial data, provided that $\mu_b > \mu_d$ (see, e.g., [16, page 506] and [28, Theorem 3.1.5 and the proof of Theorem 3.1.6]). From (5) and Lemma 2.1, it follows that there exists a unique continuous function $\sigma(x)$, positive on $\bar\Omega$, such that $\lim_{t\to\infty} N_V(x,t) = \sigma(x)$ uniformly for $x \in \bar\Omega$. We assume that $\mu_b > \mu_d$ and set $(u_1, u_2, u_3) := (S_H, I_H, I_V)$; then one concludes that the limiting system for (2) takes the form of system (8) (a plausible form is sketched below). Let $X := C(\bar\Omega, \mathbb{R}^3)$ be the Banach space with the supremum norm $\|\cdot\|_X$. Define $T_1(t)$, $T_2(t)$, and $T_3(t)$ to be the semigroups associated with $d_H\Delta - D_1(\cdot)$, $d_H\Delta - D_2(\cdot)$, and $d_V\Delta$ subject to the Neumann boundary condition, respectively. It then follows that for any $\varphi \in C(\bar\Omega,\mathbb{R})$ and $t \ge 0$, $(T_i(t)\varphi)(x) = \int_\Omega \Gamma_i(x,y,t)\,\varphi(y)\,dy$, where $\Gamma_1$, $\Gamma_2$, and $\Gamma_3$ are the Green functions associated with $d_H\Delta - D_1(\cdot)$, $d_H\Delta - D_2(\cdot)$, and $d_V\Delta$ subject to the Neumann boundary conditions, respectively. From [20, Section 7.1 and Corollary 7.2.3], it follows that $T_i(t): C(\bar\Omega,\mathbb{R}) \to C(\bar\Omega,\mathbb{R})$ is compact and strongly positive for all $t > 0$ and $i = 1, 2, 3$. Furthermore, $T(t) := (T_1(t), T_2(t), T_3(t)): X \to X$, $t \ge 0$, is a $C_0$ semigroup (see, e.g., [17]). Then (8) can be rewritten as an abstract differential equation, or equivalently as an integral (variation-of-constants) equation. The above inequalities imply that (13) holds, and thus the lemma is proved. We are in a position to show that solutions of system (8) exist globally on $[0, \infty)$ and converge to a compact attractor in $X_\sigma^+$. The following results will play an important role in establishing the persistence of (8). From the first equation of (8), it is obvious that $u_1$ satisfies a differential inequality; let $v_1(x, t, \phi)$ be the solution of the corresponding comparison equation. By the standard parabolic comparison theorem (see, e.g., [20]), the required convergence holds uniformly for $x \in \bar\Omega$. Thus the proof of Part (ii) is complete.
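The display equations of the limiting system (8) are also missing from the extracted text. Assuming the limits $N_H \to K(x)$ and $N_V \to \sigma(x)$ described above, the infection terms of system (1), and the crowding terms $-c(x)\,u_i\,K(x)$, a plausible form (with $D_1$, $D_2$ as guessed coefficient functions, not taken verbatim from the source) is:

```latex
% Hedged reconstruction of the limiting system (8), (u1,u2,u3) = (S_H, I_H, I_V);
% D_1 and D_2 are assumptions consistent with the surrounding text.
\begin{aligned}
\partial_t u_1 &= d_H \Delta u_1 + \mu_b K(x) - \frac{b(x)\beta_H(x)}{K(x)+m(x)}\, u_1 u_3 - D_1(x)\, u_1,\\
\partial_t u_2 &= d_H \Delta u_2 + \frac{b(x)\beta_H(x)}{K(x)+m(x)}\, u_1 u_3 - D_2(x)\, u_2,\\
\partial_t u_3 &= d_V \Delta u_3 + \frac{b(x)\beta_V(x)}{K(x)+m(x)}\, \bigl(\sigma(x)-u_3\bigr) u_2 - \mu_V u_3,
\end{aligned}
\qquad
D_1(x) = \mu_d + c(x)K(x), \quad D_2(x) = \mu_d + \gamma_H + c(x)K(x),
```

with Neumann boundary conditions on $\partial\Omega$.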
In order to find the disease-free equilibrium (infection-free steady state), we set the densities of the diseased compartments ($u_2$ and $u_3$) to zero and obtain equation (17) for the density of susceptible humans. By Lemma 2.1, it is easy to see that system (17) has a positive steady state $u_1^*(x)$, which is globally asymptotically stable in $C(\bar\Omega, \mathbb{R})$. Linearizing system (8) at the disease-free equilibrium $(u_1^*(x), 0, 0)$, we get the cooperative system (18) for the infectious human and vector populations, respectively. We first consider a generalized version (19) of system (18), where $h(x) > 0$ and $0 \le \rho < \sigma(x)$ for all $x \in \bar\Omega$. Note that if one chooses $h = u_1^*$ and $\rho = 0$ in (19), then we recover system (18). The basic reproductive number, which is defined as the average number of secondary infections generated by a single infected individual introduced into a completely susceptible population, is one of the important quantities in epidemiology. For models described by ordinary differential equations (finite dimensions), [1,24] provide a standard procedure for defining and computing the basic reproductive number by using the next generation matrix. In the following, we adopt the same ideas as in [13,27] to define the basic reproduction ratio for the reaction-diffusion system (8). Let $S(t)$ be the positive $C_0$-semigroup on $C(\bar\Omega, \mathbb{R}^2)$ built from $T_2(t)$ and $T_3(t)$, which are defined in (10) and (11), respectively. We further define a positive linear operator $C$ on $C(\bar\Omega, \mathbb{R}^2)$ in terms of the infection coefficients at the disease-free equilibrium. In order to define the basic reproduction ratio for system (8), we assume that both human and vector individuals are near the disease-free equilibrium $(u_1^*(x), 0, 0)$ and introduce infectious human and vector individuals at time $t = 0$, where the distribution of initial infectious human and vector individuals is described by $\varphi := (\varphi_2, \varphi_3) \in C(\bar\Omega, \mathbb{R}^2)$. Thus, it is easy to see that $S(t)\varphi$ represents the distribution of infective human and vector individuals at time $t \ge 0$. Consequently, at time $t \ge 0$, the distribution of new infective individuals is $C\,(S(t)\varphi)$, and the distributions of the total new infective human and vector populations are obtained by integrating over $t \in [0,\infty)$. The operator $L(\varphi) := \int_0^\infty C\,(S(t)\varphi)\,dt$ represents the distribution of the total infective population generated by the initial infectious human and vector individuals $\varphi := (\varphi_2, \varphi_3)$, and hence $L$ is the next infection operator. We define the spectral radius of $L$ as the basic reproduction ratio for system (8), that is, $R_0 := r(L)$. Now we are ready to prove the main result of this section, which indicates that $R_0$ is a threshold index for disease persistence.
3. The homogeneous model. In this section, we consider the reaction-diffusion system (2) in the case where all the coefficients are positive constants; one can obtain the limiting system (29) by using the same arguments as in the previous section, where $K = \frac{\mu_b - \mu_d}{c}$ (with $\mu_b > \mu_d$) and $\sigma = \frac{A}{\mu_V}$. By Lemma 2.1, it is easy to see that $(K, 0, 0)$ is the disease-free steady-state solution of system (29). From (9) and by similar arguments to those in [27, Theorem 2.1], we can show that the basic reproduction ratio $R_0$ equals the spectral radius of an explicit $2 \times 2$ matrix, and hence we have the following formula for $R_0$ (Lemma 3.1).
For system (29), the basic reproduction ratio is given by an explicit expression in the model parameters. We nondimensionalize system (29) with the scaling relations (32); the system then becomes (31). Here, we assume that the admissible initial data $s_0(x)$, $u_0(x)$, and $v_0(x)$ are in the set $Y$. By similar arguments to those in the previous section and the standard theory for parabolic equations, the unique solution $(s(x,t), u(x,t), v(x,t))$ of (31) exists and is positive on $Y$. In the following, we adopt a Lyapunov functional technique (see, e.g., [6,11,12]) to study the global attractiveness of the positive steady state $(s^*, u^*, v^*)$. Before we state our results, we first note that $R_0 = \frac{\alpha\delta}{\gamma\theta}$ by using the relations (32). Theorem 3.2. Let $R_0 = \frac{\alpha\delta}{\gamma\theta}$. Then the following statements hold: (i) If $R_0 > 1$, then $E^*$ is globally asymptotically stable in the interior of $Y$; (ii) if $R_0 < 1$, then $E_0$ is globally asymptotically stable in $Y$.
4. Discussion. In this paper, we studied the qualitative behavior of solutions of a reaction-diffusion system (see (2)), which is used to describe the dynamics of the spread of dengue fever. If the coefficients are spatially dependent, we employed comparison theorems and properties of eigenvalue problems to establish a criterion for the uniform persistence property of system (2) (see Theorem 2.7 and Remark 1). If the coefficients are all constants (see (29)), then we took advantage of the method of Lyapunov functionals to obtain the global dynamics of (29) (see Theorem 3.2). Notice that our findings can be viewed as a generalization of those obtained in Esteva and Vargas [2].
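As a quick numerical companion to the constant-coefficient analysis above, the sketch below integrates the spatially homogeneous (no-diffusion) kinetics of an SIR-SI model of this type and computes a next-generation estimate of $R_0$ from the linearized infection subsystem at the disease-free state. The equations follow the reconstruction sketched earlier, and all parameter values are made-up illustrations; this is not the authors' code or data.

```python
# Illustrative sketch only: spatially homogeneous SIR-SI kinetics with a
# logistic (crowding) human population; parameters are invented for the demo.
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical constant coefficients (not from the paper)
mu_b, mu_d, gamma_H = 0.02, 0.01, 0.14   # human birth/death/recovery rates
A, mu_V = 50.0, 0.25                     # mosquito recruitment and mortality
b, beta_H, beta_V = 0.5, 0.4, 0.4        # biting rate and transmission probs
c, m = 1e-5, 100.0                       # crowding coefficient, alternative hosts

K = (mu_b - mu_d) / c    # limiting human population size
sigma = A / mu_V         # limiting mosquito population size

def rhs(t, y):
    S_H, I_H, R_H, S_V, I_V = y
    N_H = S_H + I_H + R_H
    foi_H = b * beta_H * I_V / (N_H + m)   # force of infection on humans
    foi_V = b * beta_V * I_H / (N_H + m)   # force of infection on mosquitoes
    dS_H = mu_b * N_H - foi_H * S_H - mu_d * S_H - c * S_H * N_H
    dI_H = foi_H * S_H - (mu_d + gamma_H) * I_H - c * I_H * N_H
    dR_H = gamma_H * I_H - mu_d * R_H - c * R_H * N_H
    dS_V = A - foi_V * S_V - mu_V * S_V
    dI_V = foi_V * S_V - mu_V * I_V
    return [dS_H, dI_H, dR_H, dS_V, dI_V]

# Next-generation R0 estimate at the disease-free state (S_H = K, S_V = sigma),
# computed from the (I_H, I_V) linearization of the sketch above.
F = np.array([[0.0, b * beta_H * K / (K + m)],
              [b * beta_V * sigma / (K + m), 0.0]])
V = np.diag([mu_d + gamma_H + c * K, mu_V])
R0 = max(abs(np.linalg.eigvals(F @ np.linalg.inv(V))))
print(f"R0 (next-generation estimate for this sketch) = {R0:.2f}")

sol = solve_ivp(rhs, (0, 2000), [K, 1.0, 0.0, sigma, 0.0], rtol=1e-8)
print(f"final infectious humans: {sol.y[1, -1]:.3f}")
```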
id: 234475920 | source: s2orc/train | version: v2 | added: 2021-05-13T13:31:28.717Z | created: 2021-05-12T00:00:00.000Z
Impact of in-hospital discontinuation with angiotensin receptor blockers or converting enzyme inhibitors on mortality of COVID-19 patients: a retrospective cohort study Background In the first wave of the COVID-19 pandemic, the hypothesis that angiotensin receptor blockers (ARBs) and angiotensin-converting enzyme inhibitors (ACEIs) increased the risk and/or severity of the disease was widely spread. Consequently, in many hospitals, these drugs were discontinued as a “precautionary measure”. We aimed to assess whether the in-hospital discontinuation of ARBs or ACEIs, in real-life conditions, was associated with a reduced risk of death as compared to their continuation and also to compare head-to-head the continuation of ARBs with the continuation of ACEIs. Methods Adult patients with a PCR-confirmed diagnosis of COVID-19 requiring admission during March 2020 were consecutively selected from 7 hospitals in Madrid, Spain. Among them, we identified outpatient users of ACEIs/ARBs and divided them in two cohorts depending on treatment discontinuation/continuation at admission. Then, they were followed-up until discharge or in-hospital death. An intention-to-treat survival analysis was carried out and hazard ratios (HRs), and their 95%CIs were computed through a Cox regression model adjusted for propensity scores of discontinuation and controlled by potential mediators. Results Out of 625 ACEI/ARB users, 340 (54.4%) discontinued treatment. The in-hospital mortality rates were 27.6% and 27.7% in discontinuation and continuation cohorts, respectively (HR=1.01; 95%CI 0.70–1.46). No difference in mortality was observed between ARB and ACEI discontinuation (28.6% vs. 27.1%, respectively), while a significantly lower mortality rate was found among patients who continued with ARBs (20.8%, N=125) as compared to those who continued with ACEIs (33.1%, N=136; p=0.03). The head-to-head comparison (ARB vs. ACEI continuation) yielded an adjusted HR of 0.52 (95%CI 0.29–0.93), being especially notorious among males (HR=0.34; 95%CI 0.12–0.93), subjects older than 74 years (HR=0.46; 95%CI 0.25–0.85), and patients with obesity (HR=0.22; 95%CI 0.05–0.94), diabetes (HR=0.36; 95%CI 0.13–0.97), and heart failure (HR=0.12; 95%CI 0.03–0.97). Conclusions The discontinuation of ACEIs/ARBs at admission did not improve the in-hospital survival. On the contrary, the continuation with ARBs was associated with a trend to a reduced mortality as compared to their discontinuation and to a significantly lower mortality risk as compared to the continuation with ACEIs, particularly in high-risk patients. Supplementary Information The online version contains supplementary material available at 10.1186/s12916-021-01992-9. Background In mid-March, at the start of the first wave of COVID-19 pandemic in Europe, the hypothesis that the reninangiotensin system inhibitors (RASIs), including angiotensin receptor blockers (ARBs) and angiotensin-converting enzyme inhibitors (ACEIs), increased the risk, and/or severity of the disease [1][2][3], was widely spread. Consequently, many hospitals and clinicians adopted the "precautionary measure" to discontinue these drugs from patients who regularly used them. Promptly, in the first weeks of May, three large epidemiological studies were published supporting the lack of association between the outpatient use of RASIs and risk of COVID-19 [4][5][6]. 
Later on, a plethora of studies and meta-analyses were published [7,8] reaching the same conclusion, which provides reassurance on the safety of these drugs. Yet, the extent of RASI discontinuation at hospital admission during the first wave of the pandemic and, importantly, its impact on health outcomes have been scarcely studied [9][10][11][12]. The downregulation of angiotensin-converting enzyme type 2 (ACE2), as resulted from the SARS-CoV-2 endocytosis, has been postulated to play a key role in the progression of COVID-19 to severe forms [13]. In physiological conditions, the ACE1-angiotensin II-AT1R axis (the classical RAS) is counter-regulated by the ACE2-Angiotensin (1-7)-MasR axis. Thus, when the latter weakens, angiotensin II is unopposed and its vasoconstrictor, pro-inflammatory, and pro-thrombotic actions may contribute to the pathophysiology of severe COVID-19 [13][14][15]. In this context, it is conceivable that treatment with RASIs in COVID-19 inpatients could compensate the ACE1/ACE2 imbalance provoked by the SARS-CoV-2 and produce a net beneficial effect. According to this, several observational studies have reported a protective effect of inpatient use of RASIs on mortality as compared to non-use (or non-RASI use) in COVID-19 patients [9][10][11][12]. However, such studies have been criticized for incurring in several types of bias [16,17]. Recently, two randomized clinical trials have been published [18,19] reporting no difference in mortality between discontinuation and continuation arms. However, these trials and most observational studies have pooled ACEIs and ARBs and analyzed in a unique group, overlooking that they have different pharmacological actions [20] that could lead to distinct clinical effects [20], particularly in COVID-19 patients [15]. In this sense, no study has carried out a head-to-head comparison of in-hospital use of these drugs in COVID-19 patients admitted to the hospital. The present research was aimed (1) to quantify the magnitude of RASI discontinuation at admission in seven hospitals from the Autonomous Community of Madrid, Spain; (2) to compare in real-life conditions the in-hospital mortality in patients in whom ACEIs or ARBs were discontinued with those in whom RASIs were continued; and (3) to perform a head-to-head comparison between in-hospital use of ACEIs and ARBs regarding mortality in admitted patients for COVID-19. Study design, subject selection, and follow-up We collected information from patients aged 18 years or older admitted to the hospital from March 1, 2020, to March 31, 2020, with a diagnosis of COVID-19 confirmed by RT-PCR. Seven hospitals of the Autonomous Community of Madrid (Spain) took part. According to drug exposure in the month prior to admission, patients were classified in three study groups: (1) users of RASIs, (2) users of non-RASI antihypertensive drugs, and (3) non-users of antihypertensive drugs. For the present study, only RASI users were considered. Among them, we excluded those in whom the continuation or discontinuation of RASI treatment could not be properly assessed at admission, including patients transferred to another hospital from the emergency department (ED) and patients who presented the outcome (death or admission to the intensive care unit (ICU)) or were discharged within the first 3 days of hospital admission. Hence, eligible patients had to survive and be outcomefree in a hospital ward (excluding ICU) at least during the first 3 days since admission to the ED. 
Then, they were subdivided into two closed cohorts: (1) Continuation cohort: patients in whom RASI prescriptions were recorded in at least 2 of the first 3 days since ED admission (including switching from one RASI to another) and (2) Discontinuation cohort: patients in whom no prescription of a RASI was recorded in the first 3 days since ED admission. When there was a sole prescription of RASIs in the first 3 days, the intention to discontinue was considered uncertain and these patients were not included in the main analysis; however, we carried out two sensitivity analyses in which these patients were reclassified (see "Sensitivity analyses"). Both cohorts were then followed up until discharge or in-hospital death (any cause), recording any ICU admission. The date of admission to the ED was considered the index date for the follow-up, so the above definitions assume an immortal time of 3 days in both continuation and discontinuation cohorts (thereby avoiding immortal-time bias).
Sources of information and data collection The information on co-morbidities and drug exposure before admission was extracted from electronic primary healthcare records, which are accessible through the viewer HORUS from any hospital in Madrid for authorized healthcare workers. The information on disease severity at admission and its clinical evolution (death, discharge, ICU admission, and in-hospital treatment received) was retrieved from hospital medical records. All data extracted were anonymized and included in ad hoc case report forms in each participating hospital, then sent out to the coordinating center, where a data quality control was undertaken to ensure that all hospitals collected the information in the same manner.
Baseline co-morbidities and outpatient treatments The presence of the following baseline co-morbidities was recorded at the index date: antecedents of hypertension, dyslipidemia (recorded as such or when there was at least one prescription of a lipid-lowering drug), diabetes (recorded as such or when there was at least one prescription of a glucose-lowering drug), ischemic heart disease, atrial fibrillation, heart failure, thromboembolic disease, cerebrovascular accident (including stroke and transient ischemic attack), asthma, chronic obstructive pulmonary disease (COPD), chronic renal failure, and cancer (past and active). We also collected information on obesity (defined as a body mass index (BMI) ≥ 30 kg/m²), smoking (current smoker, past smoker, nonsmoker, or not recorded), and the outpatient use of calcium channel blockers (CCBs), beta-blocking agents, alpha-adrenoceptor antagonists with cardiovascular (CV) indications, high-ceiling diuretics, low-ceiling diuretics, antagonists of the mineralocorticoid receptor (AMRs), lipid-lowering drugs, glucose-lowering drugs, antiplatelet drugs, oral anticoagulants, nonsteroidal anti-inflammatory drugs (NSAIDs), systemic corticosteroids, and non-opioid analgesics (paracetamol and metamizole).
Disease severity To characterize the severity of COVID-19 at admission, we collected information on the presence of pneumonia, hypoxemia (defined as oxygen saturation ≤90% at rest breathing ambient air, or a PaO2/FiO2 ratio ≤300 mm Hg), lymphopenia, and abnormal values of five inflammatory biomarkers (according to the reference values of each hospital laboratory), when available: C-reactive protein (CRP), procalcitonin, troponin, D-dimer, and N-terminal pro-B-type natriuretic peptide (NT-proBNP) [13].
With these 5 biomarkers plus hypoxemia and lymphopenia (1: abnormal; 0: otherwise), we generated a "severity score" ranging from 0 to 7 (values 0 and 1, as well as 6 and 7, were collapsed to assure enough number of patients) which showed a positive linear trend with the hazard ratio of in-hospital mortality (p=0.01), after adjusting for age, sex, baseline characteristics, outpatient treatments, hospital, and date of admission (see Additional file 1: Figure S1). In-hospital drug exposure The main exposure of interest was the inpatient use of RASIs (ACEIs and ARBs), including combinations with other antihypertensive drugs. We also collected information of in-hospital use of the following drugs: calcium channel blockers (CCBs), beta-blocking agents, alpha-adrenoceptor antagonists with cardiovascular (CV) indications, highceiling diuretics, low-ceiling diuretics, AMRs, lipid-lowering drugs, glucose-lowering drugs (oral and insulin), antiplatelet drugs, anticoagulants (oral or parenteral), antiviral agents, chloroquine/hydroxychloroquine, azithromycin, and other macrolides, other antibiotic agents, systemic steroids, and other immunomodulators. Outcomes The main outcome variable was time to in-hospital death for any cause. As a secondary outcome, we also considered the time to a composite of in-hospital death and time to ICU admission, whichever occurred first. Statistical analysis We expressed quantitative variables as mean and standard deviation (SD), or median and interquartile range (IQR) for not normally distributed data, and qualitative variables as frequencies and percentages. Differences in quantitative variables were assessed using the Student's t test or Mann-Whitney U test (for parametric or nonparametric evaluation between two groups, respectively). Differences in frequencies were assessed using the chisquared test or Fisher's exact test when assumptions for chi-square test were not met. The standardized difference was also calculated for means and proportions as a measure of the covariate balance between the exposure groups [21]. To estimate the effect of RASI discontinuation on the outcomes, we carried out an intention-to-treat (ITT) analysis, so that patients were analyzed in their assigned closed cohorts (discontinuation or continuation) defined in the first 3 days of hospitalization, whatever happened thereafter. 
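As a toy illustration of the 0-7 severity score defined at the beginning of this subsection (seven binary markers, with the extreme categories collapsed), the following sketch shows one way to build it with pandas; the column names and the data are assumptions, not the study dataset.

```python
# Hypothetical column names; each marker is coded 1 = abnormal, 0 = otherwise.
import pandas as pd

markers = ["hypoxemia", "lymphopenia", "crp", "procalcitonin",
           "troponin", "d_dimer", "nt_pro_bnp"]

# Toy data for four patients (not the study data)
df = pd.DataFrame({m: [0, 1, 1, 0] for m in markers})

df["severity_score"] = df[markers].sum(axis=1)        # raw score, 0..7

# Collapse 0/1 and 6/7 so that every category retains enough patients
df["severity_cat"] = df["severity_score"].clip(lower=1, upper=6)
print(df[["severity_score", "severity_cat"]])
```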
Then, we proceeded as follows: (1) a binary logistic model was constructed to estimate the propensity score (PS) of RASI discontinuation, conditioned on baseline co-morbidities, outpatient treatments, hospital of admission, date of admission (in three periods of equal length), severity score at admission, presence of pneumonia, and treatments prescribed in the first 3 days of hospitalization (including antihypertensive drugs, chloroquine/hydroxychloroquine, and antivirals, the latter two prescribed per protocol for most admitted COVID-19 patients) [22]; (2) we then built a Cox proportional hazards model which included the exposure and the estimated PS as a flexible function (restricted cubic splines with 5 knots placed at the 5th, 25th, 50th, 75th, and 95th percentiles) to compute the PS-adjusted hazard ratios (HRs) and their 95% confidence intervals (95%CI); we preferred a flexible function to simple PS adjustment because of the lack of a linear relationship between the PS and the outcome [23]; (3) we also estimated the controlled direct effect of RASI discontinuation on the outcomes by including in the PS-adjusted Cox model the potential mediators (those associated with both the exposure and the outcome, controlling for the exposure [23]): systemic corticosteroids, anticoagulants, and immunomodulators when death was the outcome, and immunomodulators and anticoagulants when the outcome was death plus ICU admission. To avoid a collider bias, we also included potential mediator-outcome confounders in the Cox model [24,25] (antiplatelet drugs when the outcome was death and systemic steroids when the outcome was death plus ICU admission), according to our hypothesized causal graph (see Additional file 1: Figure S2). In this way we computed the mediator-controlled HRs (MC-HR) and their 95% CIs.

We also built univariate Kaplan-Meier survival curves for the exposures and outcomes of interest, using the log-rank test to evaluate differences in survival across the levels of exposure. The proportional hazards assumption of the Cox models was checked using the Schoenfeld residuals test and confirmed graphically with a log-minus-log survival plot and by comparison of the Kaplan-Meier survival curves with the Cox predicted curves [23]. Possible effect modification (or interaction) by gender, age, diabetes, obesity, background CV risk, heart failure, severity score (in two categories, using the median as the cut-off point), and in-hospital use of corticosteroids and beta-blockers was assessed by stratifying the Cox model by the categories of the potential interacting variables and then comparing the HRs across strata with the Altman and Bland test for interaction [26]. The background CV risk was built as a composite variable with two categories: (1) antecedents of CV disease, which included ischemic heart disease, cerebrovascular accident, heart failure, atrial fibrillation, and thromboembolic disease, and (2) CV risk factors only, which included hypertension, dyslipidemia, diabetes, or chronic renal failure.
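The core of this two-step approach (a logistic model for the PS of discontinuation, then a Cox model that adjusts for the PS through a restricted cubic spline with knots at the percentiles listed above) can be sketched as follows. The covariates, toy data, and helper names are illustrative assumptions rather than the study code, and the spline basis uses Harrell's parameterization.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.normal(75, 10, n),             # toy baseline covariates
    "severity_score": rng.integers(1, 7, n),
    "discontinued": rng.integers(0, 2, n),    # exposure: RASI discontinuation
    "time": rng.exponential(12, n),           # follow-up in days
    "death": rng.integers(0, 2, n),           # in-hospital death
})

# (1) Propensity score of discontinuation conditioned on baseline covariates
X = sm.add_constant(df[["age", "severity_score"]])
df["ps"] = sm.Logit(df["discontinued"], X).fit(disp=0).predict(X)

# Restricted cubic spline basis for the PS (Harrell's parameterization)
def rcs_basis(x, knots):
    x = np.asarray(x, dtype=float)
    t = np.sort(np.asarray(knots, dtype=float))
    k = len(t)
    pos = lambda v: np.clip(v, 0.0, None) ** 3
    cols = {"ps": x}
    for j in range(k - 2):
        cols[f"ps_rcs{j + 1}"] = (
            pos(x - t[j])
            - pos(x - t[k - 2]) * (t[k - 1] - t[j]) / (t[k - 1] - t[k - 2])
            + pos(x - t[k - 1]) * (t[k - 2] - t[j]) / (t[k - 1] - t[k - 2])
        ) / (t[k - 1] - t[0]) ** 2            # standard scaling for numerical stability
    return pd.DataFrame(cols)

knots = np.percentile(df["ps"], [5, 25, 50, 75, 95])
cox_df = pd.concat([df[["time", "death", "discontinued"]],
                    rcs_basis(df["ps"], knots)], axis=1)

# (2) PS-adjusted Cox model; the hazard ratio of interest is the one for 'discontinued'
cph = CoxPHFitter().fit(cox_df, duration_col="time", event_col="death")
print(cph.summary.loc["discontinued",
                      ["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```

The mediator-controlled model described in step (3) would simply add the in-hospital mediator and mediator-outcome confounder indicators as extra columns of cox_df before fitting.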
Sensitivity analyses Three sensitivity analyses were performed: (1) reclassifying patients in whom RASI discontinuation was uncertain, so that those with a sole prescription recorded in day 2 or day 3 were assigned to the continuation cohort, and patients with a sole prescription recorded in day 1 were assigned to the discontinuation cohort; (2) assigning all patients in whom discontinuation was uncertain to the discontinuation cohort; and (3) using a 2-day window, instead of a 3-day window, to define RASI (dis)continuation (see Additional file 1: Figure S3). Patient selection and discontinuation rates A total of 2029 patients were consecutively admitted with a PCR-confirmed COVID-19, being 819 outpatient users of RASIs. In 141 of them, we were unable to assess the continuation of RASIs (59 patients were directly admitted to the ICU: 47 from the ED and 12 from other hospitals; 44 were transferred from the ED to another hospital; 38 had the event-death or ICU admission-or were discharged within the first 3 days of admission); and in 53, the intention-to-discontinue was uncertain (22 presented a sole prescription in days 1 and 31 in days 2 or 3). Overall, 625 patients were included in the main analysis; out of them, 285 (45.6%) continued and 340 (54.4%) discontinued RASI treatment (Fig. 1). RASI discontinuation rates varied greatly across participating hospitals (ranging from 23.5 to 93.0%) and proved to be highly dependent on the date of admission (from 32.1% in the first 10 days of March to 74.2% in the last 10 days of March) ( Table 1 and Additional file 1: Figure S4). Among patients who discontinued RASIs, 131 (38.5%) received treatment with CCBs (alone or combined with other antihypertensive drugs), 51 (15.0%) with other antihypertensive drugs (OADs) alone, and 158 (46.5%) had no recorded antihypertensive treatment within the first 3 days of admission (furosemide excluded) (Fig. 2). A similar pattern was observed when ACEIs and ARBs were considered separately (Additional file 1: Figure S5). Patient characteristics The baseline characteristics of patients who discontinued and continued treatment with RASIs are shown in Table 1. Baseline co-morbidities and co-medications appeared to be well-balanced, though patients who discontinued had a broadly lower prevalence of co-morbidities (statistically significant for obesity, history of heart failure, and history of a cerebrovascular accident). At admission, severity markers appeared to be well-balanced, though patients who discontinued presented a higher proportion of pneumonia (93.8% vs. 88.4%; p=0.02), and average severity score (3.1 vs. 2.9; p=0.03) ( Table 1). The distribution of estimated PS for RASI discontinuation according to actual discontinuation or continuation of RASIs is shown in Additional file 1: Figures S8a and S8b. During hospitalization, patients in whom RASIs were discontinued presented a higher proportion of treatment with parenteral anticoagulants, systemic corticosteroids, and CCBs, while patients who continued with RASIs presented a higher use of oral anticoagulants, statins, oral glucose-lowering drugs, other macrolides (different from azithromycin), tocilizumab or other immunomodulating agents, beta-blockers, and low-ceiling diuretics ( Table 2). ICU admission was similar in both groups (5.6% vs. 6.0% for patients who discontinued and continued with RASIs, respectively), as well as the median hospital stay (11 vs. 10 days). 
Similar patterns were observed when RASIs were disaggregated by ACEIs and ARBs (Additional file 1: Tables S1 and S2).

Mortality rates associated with RASI discontinuation vs. continuation

Head-to-head comparison between ARB and ACEI continuation

Among the 285 patients who continued with RASIs, 136 did so with ACEIs and 125 with ARBs; 24 patients who used dual therapy or were crossed over to the other treatment were excluded from this analysis. The baseline characteristics and in-hospital treatment of patients who continued with ARBs and ACEIs appeared to be evenly distributed, with some exceptions (i.e., use of corticosteroids, beta-blockers, and low-ceiling diuretics, all of them greater among ARB users) (Additional file 1: Table S3), but the mortality rates were remarkably different (20.8% vs. 33.1% for ARBs and ACEIs, respectively; p=0.03), yielding a head-to-head crude HR of 0.57 (95%CI 0.35-0.93), which barely changed after adjustment for baseline covariates (PS-HR=0.56; 95%CI 0.32-0.99) and after controlling for mediators (including systemic corticosteroids, immunomodulators, and anticoagulants) (MC-HR=0.52; 95%CI 0.29-0.93) (Table 4). The respective Kaplan-Meier survival curves are shown in Fig. 3, with the log-rank test resulting in a p value of 0.02. The median survival time was 25 days for patients who continued with ACEIs and was not reached for patients who continued with ARBs. For the composite outcome, the trend to a reduced mortality risk associated with ARBs as compared to ACEIs was still present, but did not reach statistical significance (MC-HR=0.59; 95%CI 0.35-1.01) (Table 4).

Analysis of potential interactions

No statistically significant interaction was observed by gender, age (<75; 75+ years), obesity, diabetes, heart failure, background cardiovascular risk, severity score (0-3; 4-7), or in-hospital use of corticosteroids or beta-blockers (Additional file 1: Figure S6). The results disaggregated by ACEIs and ARBs are shown in Additional file 1: Figure S7. A trend to a higher risk associated with ARB discontinuation was observed in all subgroups, being particularly relevant for obese people (MC-HR=5.40; 95%CI 1.25-23.3; test for interaction, p=0.08). For the comparison between continuation with ARBs vs continuation with ACEIs, we found a statistically significant interaction with a past history of heart failure (Fig. 4). It is interesting to note that the reduced risk of mortality associated with ARB continuation as compared to ACEI continuation was particularly relevant (and statistically significant) in high-risk subgroups: males, patients aged 75 years or older, obese, diabetics, and patients with antecedents of heart failure (Fig. 4). It is also important to highlight that the use of in-hospital systemic corticosteroids did not appear to mediate or modify the reduced risk associated with ARB continuation (MC-HR in patients who received corticosteroids=0.54, 95%CI 0.27-1.09; MC-HR in patients who did not=0.46, 95%CI 0.17-1.23) (Fig. 4).

Sensitivity analyses

Sensitivity analyses performed after reclassifying patients with uncertain (dis)continuation or using a 2-day window yielded similar results to the main analysis (Additional file 1: Table S4). The proportional hazards assumption was fulfilled for all Cox regression analyses according to the Schoenfeld residuals test.
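The stratum-wise comparisons above rely on the Altman and Bland test for interaction [26], which contrasts two independent log hazard ratios, with each standard error recovered from the reported 95% confidence interval. A minimal sketch is shown below; the function name is ours, and the example call reuses the corticosteroid-stratum estimates quoted above.

```python
import math
from scipy.stats import norm

def interaction_test(hr1, lo1, hi1, hr2, lo2, hi2):
    """Altman-Bland test: ratio of two stratum-specific HRs, its 95% CI and p value."""
    se1 = (math.log(hi1) - math.log(lo1)) / (2 * 1.96)   # SE of log-HR from the 95% CI
    se2 = (math.log(hi2) - math.log(lo2)) / (2 * 1.96)
    diff = math.log(hr1) - math.log(hr2)                 # log ratio of HRs
    se_diff = math.sqrt(se1 ** 2 + se2 ** 2)
    z = diff / se_diff
    p = 2 * (1 - norm.cdf(abs(z)))
    ci = (math.exp(diff - 1.96 * se_diff), math.exp(diff + 1.96 * se_diff))
    return math.exp(diff), ci, p

# ARB vs. ACEI continuation, stratified by in-hospital corticosteroid use
ratio, ci, p = interaction_test(0.54, 0.27, 1.09, 0.46, 0.17, 1.23)
print(f"ratio of HRs = {ratio:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, p = {p:.2f}")
```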
Discussion The main findings of the present study are as follows: (1) RASIs were discontinued in around half of the patients admitted to hospital for COVID-19 during March 2020; (2) the discontinuation rate increased over time, being particularly notorious since March 11; (3) the discontinuation of RASIs as a group was not associated with an increased or decreased risk of in-hospital death or ICU admission, but the results disaggregated by ARBs and ACEIs were not homogeneous; and (4) the continuation of treatment with ARBs was associated with a significantly lower all-cause mortality than the continuation of treatment with ACEIs. The RASI discontinuation rate was strongly influenced by the date of admission (doubling from mid-March), which seems to be a direct consequence of the hypothesis that quickly spread since March 11 on the possibility that these drugs could make COVID-19 more severe [3]. Notwithstanding, the rate varied considerably by the hospital (and possibly by the attending physician within each hospital). In other countries, researchers have reported discontinuation rates ranging from 12.4 to 67.7%, though using different definitions for discontinuation (Additional file 1: Table S5) [9-12, 17, 27-34] . Of note, in our study, as much as 46.5% of patients in whom treatment with RASIs were discontinued (25.3% of the total number of patients who used them prior to admission) were left without any antihypertensive drug (excluding furosemide), which suggests that in a relevant part of patients RASIs were discontinued for medical reasons, likely related to an unstable hemodynamic situation. Our main finding is that the discontinuation of RASIs, as a group, did not have an impact on in-hospital mortality or in the composite of in-hospital mortality plus ICU admission. This result seems robust as it hardly varied in different sensitivity analyses in which we modified the definition of (dis)continuation. Contrary to the huge number of studies carried out to assess the impact of outpatient use of RASIs on different outcomes (COVID-19 infection, hospitalization, and mortality, among others) [7,8]; fewer studies have been performed thus far to examine the association of inpatient use of RASIs with in-hospital mortality. One of the first studies was published by Zhang et al. [11] with data from 9 hospitals in Hubei province (China). They found an allcause mortality among inpatients treated with RASIs much lower than non-treated patients, with an adjusted HR of 0.42 (95%CI 0.19-0.92). However, this study was criticized because the authors considered exposure to all patients who received RASIs at any time point during hospitalization, which implies that exposed patients had to survive long enough, or be clinically stable enough, to receive the treatment with RASIs [16]. Thus, such definition of the exposure could have introduced an immortal-time bias [16] and a confounding by severity (also graphically called "healthy user-sick stopper" bias" [17], that is, RASIs were more likely to be continued, initiated, or reinstated in less severe cases), both favoring an overestimation of the benefit of RASIs on mortality. Most researchers thereafter used similar definitions incurring in the same types of bias and most coinciding to show an important reduced mortality risk associated with RASIs [9,10,12,[27][28][29][30][31][32][33][34] (see Additional file 1: Table S5 for a detailed description of studies). 
To overcome these problems, we defined continuation or discontinuation during the first 3 days (or during the first 2 days in a sensitivity analysis) and then followed an ITT analysis (each patient analyzed in his/her assigned closed cohort), as would have been done in a clinical trial.

[Table footnotes] Abbreviations: CCBs, calcium channel blockers; ICU, intensive care unit; IQR, interquartile range; RASIs, renin-angiotensin system inhibitors. *Other antivirals: remdesivir, aciclovir, bictegravir-emtricitabine-tenofovir, tenofovir, emtricitabine-tenofovir, lamivudine-abacavir-dolutegravir, valaciclovir, and valganciclovir. **Other immunomodulators: Jak inhibitors, interferon beta-1b, ciclosporin, anakinra, ceftriaxone, leflunomide, methotrexate, and mycophenolic acid.

Also, to avoid reverse causation, we excluded patients directly admitted to the ICU (from the ED or from another hospital), a situation in which RASIs are usually discontinued as a consequence of the disease severity. Interestingly, if we had defined continuation as "use of RASIs at any time point during hospitalization" and included patients directly admitted to the ICU in the discontinuation cohort, the mortality rates would have been 25.3% and 30.3% in the continuation and discontinuation cohorts, respectively, yielding a HR of 0.83 (95%CI 0.66-1.05) for in-hospital mortality. For the composite outcome (death plus ICU admission), the rates would have been 30.0% for patients in whom RASIs were continued and 43.6% in those who discontinued, giving rise to a HR of 0.67 (95%CI 0.57-0.83). Therefore, the results would have been dramatically different from the ones we actually obtained, showing the extent of such biases.

[Table footnotes] Abbreviations: ACEIs, angiotensin-converting enzyme inhibitors; ARBs, angiotensin receptor blockers; CI, confidence interval; HR, hazard ratio; ICU, intensive care unit; RASIs, renin-angiotensin system inhibitors. #2 patients discontinued a dual ACEI-ARB treatment and were excluded from the disaggregated analysis below. ##9 patients who were prior users of ARBs continued with ACEIs in hospital, 8 patients who were prior users of ACEIs continued in hospital with ARBs, and 7 patients received dual ACEI-ARB treatment; all of them (n=24) were excluded from the disaggregated analysis by ACEIs and ARBs. *Propensity-score-adjusted hazard ratio (adjusted total effect). **Mediator-controlled hazard ratio (controlled direct effect): (a) systemic corticosteroids, anticoagulants, and immunomodulators when the outcome was death and (b) immunomodulators and anticoagulants when the outcome was death plus ICU admission.

Recently, the results from two randomized clinical trials in which regular users of RASIs who were admitted to hospital for COVID-19 were assigned to discontinuation or continuation arms have been reported (the BRACE-CORONA [18] and REPLACE COVID [19] trials), and both found no difference in the mortality rates, supporting our results. However, it is important to emphasize that in the BRACE-CORONA trial the mortality rates were very low (2.7% among patients assigned to discontinuation and 2.8% in those assigned to continuation), casting doubt on the generalizability of their results (the mean age of the study population was 55 years, 20 years younger than our population). Also, the measure of association for mortality was too imprecise (odds ratio=0.97; 95% CI 0.38-2.52) to be informative.
Interestingly, 80% of patients were prior users of ARBs, and the authors found quasi-significant results favoring continuation in older persons, obese patients, and those who were clinically more severe, in line with our findings (see later). The REPLACE COVID trial had a more representative population and consistently found no difference in all-cause mortality (15% and 13% in the continuation and discontinuation arms, respectively). Unfortunately, the sample size was too small to allow a meaningful separate analysis by ACEIs and ARBs.

The different mortality rates among patients who continued with ACEIs versus those who continued with ARBs is a novel finding that merits specific comments. Firstly, it is important to emphasize that this comparison is ideal for several reasons: (a) these drugs have overlapping indications, so the subjects who use them are highly comparable, seemingly reducing by design the possibility of confounding (due to either known or unknown factors); (b) the possibility of an immortal-time bias is inexistent, as the same definition of continuation was applied to both cohorts; (c) confounding by severity is unlikely, as it is not reasonable to think that physicians used different criteria for the continuation of ARBs or ACEIs, and additionally, we applied an ITT analysis once continuation was defined based on the records of the first 3 days of hospitalization; and, finally, (d) the few differences we found (such as the greater in-hospital use of systemic corticosteroids in the ARB continuation cohort) were controlled for by including this factor in the outcome regression model and by stratification, and neither of these strategies changed the results, reinforcing the internal validity of the comparison.

[Fig. 3 Kaplan-Meier survival curves of in-hospital death among patients in whom treatment with ARBs was continued as compared to those in whom ACEIs was continued (defined in the first 3-day window). Abbreviations: ACEIs, angiotensin-converting enzyme inhibitors; ARBs, angiotensin receptor blockers. *Log-rank test]

Secondly, most previous studies have pooled ACEIs and ARBs (see Additional file 1: Table S5), as if they were the same type of drugs. However, our results show that this approach may be wrong; also, there are profound pharmacological reasons that make this grouping invalid, in particular for COVID-19 patients. ARBs selectively block the action of angiotensin II on the AT1 receptor (AT1R), and free angiotensin II is then converted by ACE2 into angiotensin (1-7), which acts on the Mas1 receptor (Mas1R) to induce actions opposite to those of angiotensin II (anti-inflammatory, anti-oxidant, anti-fibrotic, antithrombotic, anti-hypertrophic, vasodilatation, and natriuresis) [13][14][15]. Also, angiotensin II not used in activating AT1R acts on the AT2 receptor (AT2R), for which ARBs have no affinity, whose activation is known to produce actions opposite to those derived from the activation of AT1R [15], thereby collaborating with the protective effect of angiotensin (1-7). Instead, ACEIs inhibit the formation of angiotensin II, which pre-empts the generation of angiotensin (1-7) both from angiotensin II via ACE2 and from angiotensin (1-9) via ACE1 [13][14][15]; additionally, the beneficial actions derived from activation of AT2R do not take place.
In sum, both ARBs and ACEIs effectively block the RAS, whereas only ARBs appear to reinforce its counter-regulatory system, via the ACE2-angiotensin (1-7)-Mas1R axis and AT2R activation, a difference that could be critical in COVID-19 patients. Additionally, ACE1 is well known to be the major vascular peptidase of bradykinin, an abundant peptide which promotes vasodilatation, vascular permeability, and liberation of inflammatory cytokines (IL-1, IL-2, IL-6, IL-8, and TNF-alpha) implicated in the cytokine storm associated with the severe forms of COVID-19 [15]. Therefore, ACEIs will reduce bradykinin degradation, thereby potentiating its effects, which ultimately could be detrimental for COVID-19 patients [15,35,36]. These negative collateral actions of ACEIs may offset the benefits derived from the inhibition of angiotensin II formation and, we postulate, could account for the important difference we found in the mortality rates between inpatients treated with ARBs and those treated with ACEIs (an absolute difference of 12.3%, corresponding to a number needed to treat as low as 8). Importantly, the benefit of ARBs seems to be particularly evident in high-risk subgroups: males, the very old, the obese, diabetics, and patients with antecedents of heart failure (as the BRACE-CORONA trial [18] has also shown, as commented before). Nevertheless, our results need confirmation, in particular through randomized clinical trials, and until then we should take these findings with caution. Some trials are in progress aiming to assess the benefits of using ARBs in COVID-19 patients with the acute respiratory syndrome as compared to placebo or standard care [NCT04394117, NCT04312009, and NCT04355936], but, as far as we know, no study has been designed to compare ARBs with ACEIs in this context.

[Fig. 4 Head-to-head comparison of continuation with angiotensin receptor blockers vs. continuation with angiotensin-converting enzyme inhibitors, by different subgroups. Abbreviations: ACEIs, angiotensin-converting enzyme inhibitors; ARBs, angiotensin receptor blockers; CV, cardiovascular. *Mediator-controlled hazard ratio (controlled direct effect): including systemic corticosteroids (excepting stratification by corticosteroids), anticoagulants, and immunomodulators]

Rodilla et al. [30] compared survival of COVID-19 patients according to the use of ARBs and ACEIs prior to admission and found a significantly reduced mortality risk with the former (25.6% vs. 30.4%, respectively, p=0.0001); but, unfortunately, a head-to-head comparison of the in-hospital use of ARBs vs. ACEIs was not reported. Finally, it is of interest to note that in the study by Zhang et al. [11], 83.5% of patients reported to be on RASIs were actually treated with ARBs.

Our study has some limitations that must be discussed: (1) as in all observational studies, the possibility exists that there is some residual confounding due to unknown or unmeasured factors; also, a residual confounding by indication cannot be ruled out.
Notwithstanding, it is important to remark that all our patients were users of RASIs prior to admission and were highly comparable at baseline, as shown by the good balance of covariates and the fact that the mean and median of the propensity scores for RASI discontinuation were close to 0.5 (Additional file 1: Figure S8); indirectly, it is likely that unmeasured confounding variables are evenly distributed too, albeit this cannot be assured; as previously commented, this is especially applicable to the comparison of the ACEI and ARB continuation cohorts. (2) Some severity biomarkers (e.g., interleukin 6 or interleukin 1β) were not routinely measured at that time and were not considered in the severity score built for this study; in this regard, we would like to emphasize that such a score was created to reduce the number of covariates included in the PS models, and it is not proposed as a prognostic index (as we are quite aware that a specific and independent validation study would be necessary for that). (3) The study period selected (March 2020) was the most critical of the first wave in Spain and, at that time, health professionals worked under extraordinary pressure, which may have led to under-recording of some relevant clinical information; this limitation, however, does not apply to drugs, as they were prescribed through an electronic tool, making misclassification of drug exposure unlikely. (4) The mortality rates recorded in our study were extraordinarily high (partly explained by the lack of preparedness of the health system to address this disease at the very beginning of the pandemic) and remarkably different from figures corresponding to other periods during the first and successive waves in Spain or in other countries, so the generalizability of our data in this regard cannot be assured; however, we do not think that this affects the internal validity of our results.

Conclusions

The discontinuation of RASIs at hospital admission was commonplace in the first wave of the COVID-19 pandemic in Spain, influenced by the widespread hypothesis that postulated a more severe disease in patients treated with these drugs. Our results show that the discontinuation of these drugs at admission did not improve in-hospital survival. On the contrary, we found that the discontinuation of treatment with ARBs was associated with a trend toward an increased mortality risk as compared to their continuation. Moreover, continuation with ARBs was associated with a significantly lower mortality risk as compared to continuation with ACEIs, particularly evident in high-risk subgroups.
120352080
s2orc/train
v2
2019-04-18T13:07:35.877Z
2012-07-30T00:00:00.000Z
LArGe R&D for active background suppression in Gerda LArGe is a GERDA low-background test facility to study novel background suppression methods in a low-background environment, for future application in the GERDA experiment. Similar to GERDA, LArGe operates bare germanium detectors submersed into liquid argon (1 m3, 1.4tons), which in addition is instrumented with photomultipliers to detect argon scintillation light. The light is used in anti-coincidence with the germanium detectors to effectively suppress background events that deposit energy in the liquid argon. The background suppression efficiency was studied in combination with a pulse shape discrimination (PSD) technique using a BEGe detector for various sources, which represent characteristic backgrounds to GERDA. Suppression factors of a few times 103 have been achieved. First background data of LArGe with a coaxial HPGe detector (without PSD) yield a background index of the order 10−2 cts/(keV-kg-y), which is at the level of the GERDA phase I design goal. As a consequence of these results, the development of an active liquid argon veto in GERDA is pursued. Introduction Gerda is an experiment to search for the neutrinoless double beta (0νββ) decay in 76 Ge. It has been proposed in 2004 [1], and has recently started data taking at the Laboratori Nazionali del Gran Sasso (Lngs), Italy. Gerda operates high-purity germanium (HPGe) detectors enriched to 86% in 76 Ge, which are submersed naked into liquid argon (LAr). The LAr both acts as a high purity shielding against background from gamma radiation, and as a cooling medium for the HPGe detectors. The Ge-crystals are simultaneously used as a source and as a detector for the 0νββ-decay. The expected signal is caused by the full absorption of the two emitted electrons in the detector, causing a faint peak at Q ββ = 2039 keV corresponding to a half life of > 10 25 years. In order to detect this peak, the region of interest (Roi) must be kept quasi background free, which poses the key challenge to Gerda. LArGe is the LAr Germanium test facility of Gerda, which was constructed to study novel active background suppression methods in a low-level environment [2]. Similar to Gerda, LArGe operates bare Ge-detectors in 1 m 3 (1.4 tons) of liquid argon, which in addition is instrumented with photomultiplier tubes (PMT). Liquid argon scintillates upon the interaction with ionizing radiation, and produces ∼40,000 XUV photons per MeV deposited energy. Typical background events have excess energy, which is deposited outside the Ge-detectors in the surrounding argon. In contrast, ββ-events are single site events confined to the Ge-diode, so that no scintillation light is triggered. Therefore, by detecting scintillation in anti-coincidence to Ge-signals one can actively suppress these background events (LAr veto) [3]. LArGe combines this approach with a pulse shape discrimination (PSD) technique using a BEGe detector: The objective of PSD is to distinguish the single site events (SSE) of the ββ-decay from multi site events (MSE) of common gamma-background with multiple interaction vertices within the Ge-diode. It has been demonstrated that by using the signal-time-structure of a Broad-Energy Ge-detector (BEGe), one can efficiently discriminate SSE from MSE, and thus efficiently suppress background [4]. In LArGe such a BEGe detector was used to study the combined suppression efficiency of LAr veto and PSD. Setup description Like Gerda, LArGe is located at the Lngs underground lab at 3800 m w.e. 
The core of the experiment is a vacuum-insulated copper cryostat filled with 1000 l of ultra-pure LAr, which is actively cooled by liquid nitrogen (figure 1). The inner wall of the cryostat is lined with VM2000 mirror foil to guide the scintillation light towards nine 8" ETL 9357 PMTs located at the top. Both the mirror foil and the photocathodes are covered with a 1-4 µm thin layer of wavelength shifter (TPB in polystyrene) to convert the 128 nm scintillation photons into the sensitive range of the PMTs around 420 nm. Up to nine Ge-detectors on three strings can be inserted into the cryostat through a lock system on top of the assembly. The cryostat is encased by a graded shield of increasing radiopurity: poly-ethylene, steel, lead, and copper.

Characteristic suppression of LAr veto and PSD

The background suppression efficiency was studied for different gamma sources (137Cs, 60Co, 226Ra, 228Th) in different locations (close to the BEGe or external to the cryostat), which represent characteristic background sources to Gerda. As an example we use the spectrum of internal 228Th (figure 2) 7 cm from the detector: the region around Qββ (2039 keV) and above is dominated by the Compton spectrum of the 2615 keV gamma line from 208Tl. At lower energies other lines become important. Figure 3 illustrates the fundamental differences between the suppression mechanisms of the LAr veto and PSD: the double escape peak (DEP) at 1593 keV is predominantly SSE and as such not rejected by PSD. Conversely, the annihilation gammas that leave the diode have a high probability of being absorbed in LAr. Hence, by applying the LAr veto the DEP vanishes (note the color code in the figures). The neighboring peak at 1621 keV is a full energy peak (FEP) from 212Bi. Being emitted as a single gamma, this line is not vetoed by scintillation. Other gammas are emitted in cascades (e.g. 583 keV and 2615 keV) and are therefore vetoed depending on their origin in the setup, whereas PSD is mostly position independent. In general, the particular response of the different suppression methods is a useful tool to understand the origin of different backgrounds on the basis of low counting statistics.

Background suppression at Qββ

The best background suppression in the region of interest (Roi) around Qββ (2039 keV) has been achieved for internal 228Th (figure 4): the background is reduced by more than three orders of magnitude. A summary of the measured suppression factors for all sources is given in table 1. Generally, external sources are suppressed less by the LAr veto than inner sources, whereas PSD is largely position independent. In addition, the suppression factor depends on whether single or coincident gammas are emitted, and on their excess energy. On average the combined suppression of the LAr veto and PSD is enhanced by a factor (1.8 ± 0.2) compared to the product of the individual suppression factors. This is very beneficial for its application in Gerda. First background data were taken with a coaxial HPGe detector (figure 5). Yet, no PSD was available for this detector type, and the LArGe passive shield is incomplete. Nonetheless, the background index achieved by applying the LAr veto is 0.12 − 4.6 · 10−2 cts/(keV·kg·y) (90% c.l.), which is at the level of the Gerda phase I design goal. An analysis of the residual vetoed spectrum indicates the observation of the 2νββ-signal. This would be the first measurement of 2νββ in a non-enriched germanium detector.
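As a rough sketch of how the quantities above can be computed from an event list, the following code evaluates a suppression factor as the ratio of counts in the region of interest before and after a cut, and the mutual enhancement as the combined suppression divided by the product of the individual factors. The event arrays, cut probabilities, and variable names are toy assumptions, not the LArGe analysis code.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
energy = rng.uniform(1900, 2200, n)          # keV, toy background spectrum
lar_vetoed = rng.random(n) < 0.95            # event left energy in the LAr (veto fires)
psd_rejected = rng.random(n) < 0.5           # event classified as multi-site by PSD

roi = (energy > 2019) & (energy < 2059)      # +-20 keV window around Qbb = 2039 keV

def suppression_factor(accepted):
    """Counts in the ROI before a cut divided by counts surviving the cut."""
    return np.count_nonzero(roi) / max(np.count_nonzero(roi & accepted), 1)

sf_lar = suppression_factor(~lar_vetoed)
sf_psd = suppression_factor(~psd_rejected)
sf_both = suppression_factor(~lar_vetoed & ~psd_rejected)

print(f"LAr veto: {sf_lar:.1f}  PSD: {sf_psd:.1f}  combined: {sf_both:.1f}")
# The text reports an average mutual enhancement of about 1.8 for the measured sources
print(f"mutual enhancement: {sf_both / (sf_lar * sf_psd):.2f}")
```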
Conclusion The LArGe test facility has demonstrated the great potential of an active liquid argon veto for the suppression of residual background signals which deposit part of their energy in LAr. The combined suppression of LAr veto and PSD is mutually enhanced. Another application for the LAr veto is background diagnostics. As a consequence of these results, the development of an active liquid argon veto in Gerda is pursued.
54023290
s2orc/train
v2
2016-07-17T08:36:30.178Z
2015-10-20T00:00:00.000Z
Can the fluctuations of the motion be used to estimate performance of kayak paddlers?

Today many compact and efficient on-water data acquisition units help the modern coaching by measuring and analyzing various inertial signals during kayaking. One of the most challenging problems is how these signals can be used to estimate performance and to develop the technique. Recently we have introduced indicators based on the fluctuations of the inertial signals as promising additions to the existing parameters. In this work we report about our more detailed analysis, compare new indicators and discuss the possible advantages of the applied methods. Our primary aim is to draw the attention to several exciting and inspiring open problems and to initiate further research even in several related multidisciplinary fields. More detailed information can be found on a dedicated web page, http://www.noise.inf.u-szeged.hu/kayak.

Introduction

Periodic processes are very common in various disciplines. The processes can be inherently periodic with an intrinsic rate, like the heart function, or can be driven periodically as the ambient temperature on the surface of the Earth. Artificial systems including machines often have periodically moving parts, motors also. In several cases deterministic or random changes of the operation frequency, noise in the period can be informative about the proper operation of the system, can be a good measure of quality, indicator of dysfunction or even predictor of a possible forthcoming damage. Noise coming from such a system can be an efficient diagnostic tool in both inherently periodic or in periodically driven systems. The wide range of examples include the analysis of heart rate variability [1], hemodynamic regulation during metronomic breathing [2], gait dynamics, fluctuations of human walking [3], daily activity measured by actigraphs, smart watches [4][5][6], daily temperature and other environmental fluctuations [7,8], period fluctuations of variable stars [9], fault diagnosis of induction motors [10][11]. Note that having a noisier period is not necessarily bad: too small heart rate variability can indicate a possible heart disease [1]. Oddly enough noise can even play a constructive role: adding noise can improve signal to noise ratio via the mechanism of stochastic resonance [12][13][14].

Following the idea of using fluctuations as a diagnostic tool we have introduced noise analysis methods to estimate the performance of kayak paddling [15]. Inertial sensors like accelerometers and gyroscopes are used in coaching devices developed for professional kayak paddlers and trainers [16][17][18][19], and the measured quantities and their changes in a stroke cycle are used to classify the athletes' performance [20][21][22][23]. In our previous work we have suggested that, since the optimal motion of a kayak can be assumed to be purely periodic, the fluctuations of its period could be an indicator of the quality of paddling [15]. We have calculated time and frequency domain parameters and we have introduced a promising signal-to-noise ratio (SNR) estimation using the raw signals without the typically required detection of strokes. In this paper we report on our latest results presented at the conference of Unsolved Problems of Noise (UPON) in an invited talk [24]. We show a more detailed analysis of the introduced older and new indicators both in the time and frequency domain.
Following the spirit of the conference here we focus on the most interesting open questions that can be inspiring not only for the noise research community but for a wider range of scientific and engineering audience also. It is important to note that the results can be related to many multidisciplinary applications as well. Kayak motion data The motion signals of the kayaks were measured by a special portable instrument developed in our laboratory for this purpose [25]. The device contains a 3-axis accelerometer and a 3-axis gyroscope to support acquisition of the most important inertial signals, see figure 1. The built in microcontroller's data converter digitizes these signal with a sample rate of 1000 Hz. Since there can be several different types of paddlings (for example due to different tasks at trainings or at races), one can have different aims of the data analysis like optimizing paddling techniques, detecting faults, examining long-term evolution of technical parameters, comparing paddlings at races, etc. Our primary aim was the general estimation of the athletes' performance using fluctuation analysis based methods that could be very useful at many of the mentioned purposes. In order to examine indicators for performance estimation the really important question is: can we classify the performance or technical skills? In the case of different athletes' paddling, their age could be used like in our related works [15,25], but its connection with skills is not always clear. In this paper we used classification done by the trainer in the scale: 1-10, too. Another problem is to find how we can compare paddlings of several athletes using different paddling techniques especially if they are influenced by significantly different conditions. In order to compare the typical performance of paddlers in very similar circumstances we have analysed the first 10 minutes of long range (>5 km) training paddlings of 14 athletes with different age and technical skills. Note that another approach can be the analysis of one athlete's different paddlings for example testing the indicators in the function of race times. This analysis needs systematic measurements at many trainings and real or simulated races. As it is shown on figure 2, the fluctuation-based indicators were calculated (both in the time and frequency domain) for shorter time window widths (30 seconds), and the averages for the examined 10 minutes long part were compared. Note that in the case of spectral methods all six inertial signals were used instead of using the x-axis acceleration only. Temporal indicators In the time-domain the forward axis (x-axis) acceleration plays the major role in the analysis of the motion. The interpretation of the signals' shape and its connection with an optimal stroke cycle were discussed in several papers [19][20][21][22][23]. Furthermore the classical parameters of a stroke cycle could be calculated on this signal after identifying each stroke using peak and level crossing detection algorithms. The most important quantities, which we use below, are illustrated on figure 3. The stroke cycle is characterized by its time length (stroke time) and the speed increase in the pulling phase (stroke impulse) which can be calculated by integration of the positive part of the acceleration signal [25]. The parameters were also calculated for the total duration of the left and right hand stroke, which is the total period of the motion (two hands stroke time). 
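As a rough illustration of how the stroke-level parameters just described can be obtained from the forward-axis acceleration, the sketch below segments a synthetic 1000 Hz signal at upward zero crossings and integrates the positive (pulling-phase) part of the acceleration within each stroke. The synthetic signal, the detection rule, and the variable names are assumptions rather than the instrument's actual processing.

```python
import numpy as np

fs = 1000.0                                   # sampling rate of the unit, Hz
t = np.arange(0, 60, 1 / fs)                  # one minute of synthetic data
rng = np.random.default_rng(2)
a_x = np.sin(2 * np.pi * 1.5 * t) + 0.1 * rng.normal(size=t.size)   # forward acceleration

# Upward zero crossings approximate the start of each (one-hand) stroke cycle
starts = np.flatnonzero((a_x[:-1] < 0) & (a_x[1:] >= 0))

stroke_time = np.diff(starts) / fs            # one-hand stroke durations, s
stroke_impulse = np.array([
    np.clip(a_x[i:j], 0, None).sum() / fs     # integral of the positive (pulling) part
    for i, j in zip(starts[:-1], starts[1:])
])

# Two hands stroke time: duration of a consecutive left + right stroke pair
pairs = stroke_time[: stroke_time.size // 2 * 2].reshape(-1, 2)
two_hands_time = pairs.sum(axis=1)

print(f"mean stroke time {stroke_time.mean():.3f} s, "
      f"mean impulse {stroke_impulse.mean():.3f} m/s, "
      f"mean two-hands time {two_hands_time.mean():.3f} s")
```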
The mean values and trend curves of these classical parameters or the shape of the raw signals can be helpful for trainers and athletes. Unfortunately the analysis is rather complex and time-consuming in most cases. Easily usable but still accurate and reliable indicators of performance or technical skills could be useful at many levels of the trainers' work. The idea behind using fluctuation analysis is to measure the variability of the kayak's periodic motion, because the steadiness of the rate could be correlated with the quality of the paddling [15]. This correlation is shown on figure 4, where the relative standard deviation (SD) of the two hands stroke impulse and the two hands stroke time decreases significantly with better class and age. Each point on these plots corresponds to an athlete and was defined as it is shown on figure 2: the standard deviation parameters (SD) were calculated for each 30-second-wide time window, and were averaged over the first 10 minutes of a long-range paddling at training. Neither the athletes' age nor the classification indicates exactly their technical skills; nevertheless, the relationship between performance and variability seems to be evident. On the other hand, there are some open questions about how one should calculate these SD-s. On figure 5, we compared SD-s of the stroke impulse calculated in different ways as a function of class using the coefficient of determination (R2). As can be seen, the relative SD-s show better correlation than the absolute SD-s. Changing stroke rate and effects of tiredness can be observed in every paddling, so the length of the processed data and the use of detrending algorithms can have some impact on the indicators' values and their observed relationship with technical skills. As one can see on figure 5, in the case of the 30 seconds long evaluation the detrending has no significant role; however, in the case of comparing long race paddlings it can be useful. The presented methods can be used to analyse the periodicity of the kayak's motion. In the case of the forward axis acceleration, one can consider the period as one stroke cycle or as the duration of a left and a right hand stroke, too. Therefore it is a really important question whether indicators related to one hand or two hands have a stronger relationship with performance. As depicted on figure 5, the SD of the two hands stroke impulse shows better correlation with classification in every case. Comparison of the SD-s of the stroke time shows the same ratio between the R2 of the different methods, but the coefficients have lower values, which is consistent with the trends shown on figure 4. The reason for the relatively low R2 values is the low number of data points used for calculating the regression, but we note that all SD-s show better correlation than the mean value of the stroke impulse, which has an already known relationship with performance.

Spectral indicators

Detecting the strokes using the complex signals with additional irregularities and noise can be rather complicated and inaccurate in most cases. Uncertainty of time-domain peak and zero crossing detection can be eliminated by using indicators calculated in the frequency domain. In this case the power spectra of the raw signals can be used to derive indicators to describe the period fluctuations. Following this idea we have introduced a certain kind of signal-to-noise ratio (SNR) as another possible indicator of performance [15].
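Before turning to the spectral indicators, the windowed relative-SD indicator described above can be sketched as follows. The input arrays (stroke start times and the corresponding per-stroke values), the optional linear detrending, and the function name are illustrative assumptions.

```python
import numpy as np

def windowed_relative_sd(onsets, values, window=30.0, total=600.0, detrend=False):
    """Average relative SD of a per-stroke parameter over consecutive time windows."""
    onsets = np.asarray(onsets, dtype=float)
    values = np.asarray(values, dtype=float)
    rel_sds = []
    for start in np.arange(0.0, total, window):
        m = (onsets >= start) & (onsets < start + window)
        v, x = values[m], onsets[m]
        if v.size < 3:
            continue
        if detrend:                                   # remove a linear trend inside the window
            v = v - np.polyval(np.polyfit(x, v, 1), x) + v.mean()
        rel_sds.append(np.std(v, ddof=1) / np.mean(v))
    return float(np.mean(rel_sds))

# Toy example: roughly 1.5 strokes per second with slightly noisy stroke impulses
rng = np.random.default_rng(3)
onsets = np.cumsum(rng.normal(0.66, 0.03, 900))       # stroke start times, s
impulses = rng.normal(1.0, 0.08, onsets.size)         # per-stroke impulses, m/s
print(windowed_relative_sd(onsets, impulses))          # ~0.08 for this toy data
```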
Note that it can even be extended to a joint time-frequency analysis that allows monitoring of the time dependence of the spectral indicators too. One of the most important questions is how one can separate the "signal" and the "noise" in the power density spectra. In the case of the forward axis acceleration and the yaw and roll gyroscope signals, the dominant frequency is the first harmonic that belongs to the one hand stroke cycle. On the other hand, in the case of the other three signals, the fundamental frequency is more significant as these signals belong to the whole period of both hands strokes. Figure 6 shows examples of these two cases. The first harmonic is the dominant peak in the x-axis acceleration power spectral density (PSD), but the dominant frequency of the roll axis angular velocity is the fundamental frequency. Furthermore, the magnitude of the harmonics in the two cases is different; therefore calculating the indicators can differ significantly. In order to calculate the SNR we consider the area under the harmonic peaks as the signal power and the rest of the power as noise. Note also that there are several ways to calculate these values and therefore the SNR. It can depend on the number of harmonics taken into account for the signal part, and the area under the peaks is a function of the extent used in the calculations. Besides using the SNR, both of its components can be used as indicators, too. Figure 7 shows a certain type of SNR (details can be found in its caption) calculated for the roll axis gyroscope signal. The strong relationship with both the technical quality and the paddlers' age can be clearly seen. Since several paddling techniques can lead to high performance, the corresponding signal power is not necessarily a good indicator: the magnitude of the harmonics can be rather different. However, the noise power seems to reflect the technical skills much better. These facts are illustrated on Figure 8. The spectral indicators were obtained using the same signals and time windows as in the case of the temporal indicators. Namely, the spectral indicators were calculated over 30 second long windows, and were averaged for the first 10 minutes of a long-range paddling at training. Figure 9 depicts the coefficient of determination between the class and the SNR, signal and noise power for all six inertial signals. As we have pointed out above, the SNR appears to have the strongest relationship with technical skills for almost all inertial signals. It can be seen that the best correlation with the class is obtained for the roll and yaw axis angular velocities. For these two signals the dominant frequency is equal to the fundamental frequency, therefore one can conclude that the two hands spectral indicators characterize the performance better. We have compared two signal and noise definitions by using two different numbers of harmonics for the calculations. As depicted on both plots in figure 10, SNR values where the first 6 peaks of the spectrum were defined as signal (fundamental frequency and the first five harmonics) describe the technical skills much better than SNR using only the first two peaks (frequency of one hands and two hands stroke). Detecting the peak location and extent in the spectrum accurately can be a problem; on the other hand, it is desirable to find simple and universal methods that can be used for all kinds of signals and paddling techniques.
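A minimal sketch of one such simple SNR definition is given below: a Welch PSD of a synthetic roll-rate signal is computed over 30-second segments, narrow bands around the fundamental and its first five harmonics are summed as signal power (a fixed 0.2 Hz band per peak, one of the simple choices discussed in the text), and the remaining power is treated as noise. The synthetic signal and all parameter values are assumptions.

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0
t = np.arange(0, 600, 1 / fs)                        # 10 minutes of synthetic data
rng = np.random.default_rng(4)
roll_rate = (np.sin(2 * np.pi * 0.75 * t)            # fundamental: two-hands stroke rate
             + 0.3 * np.sin(2 * np.pi * 1.5 * t)     # first harmonic: one-hand strokes
             + 0.2 * rng.normal(size=t.size))        # "noise" of the motion

f, psd = welch(roll_rate, fs=fs, nperseg=int(30 * fs))   # 30 s segments

band = (f > 0.2) & (f < 3.0)
f0 = f[band][np.argmax(psd[band])]                   # dominant (fundamental) frequency

df_bin = f[1] - f[0]
signal_mask = np.zeros_like(f, dtype=bool)
for k in range(1, 7):                                # fundamental + first five harmonics
    signal_mask |= np.abs(f - k * f0) <= 0.1         # 0.2 Hz wide band per peak

signal_power = psd[signal_mask].sum() * df_bin
noise_power = psd.sum() * df_bin - signal_power
print(f"f0 = {f0:.2f} Hz, SNR = {signal_power / noise_power:.1f}")
```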
We have designed and tested numerical methods for finding the signal power in the spectra based on a fixed peak width of 0.2 Hz and on an estimated half-width. We have used these in the above-mentioned different signal and noise definitions based on six and two harmonics. We have compared all of these results for two different spectral window types (rectangular and Hanning). We have found no significant difference between integrating the peaks over a predefined peak width or over a frequency dependent half-width based extent. The results are also insensitive to the choice of the window functions we have tested. The obtained coefficients of determination are plotted on figure 10. On the x-axis, different SNR definitions and PSD peak detection methods are compared, too. The data and the steps of calculating the PSDs are described in section 2. Note that, as in the case of the temporal indicators, using different window lengths can have an impact on the values of the indicators and their relationship with performance.

Conclusion and open problems

We have shown that noise analysis can be a promising diagnostic tool for the estimation of the performance of kayak paddlers. In addition to our previous results we have introduced and evaluated new time and frequency domain indicators. We have found that the most useful indicator of the quality is the SNR of the roll angular velocity and, since it is calculated in the frequency domain, no complex and rather uncertain peak and level crossing detection is needed. We have investigated several different calculations of the SNR, signal and noise power and developed simple algorithms to calculate these for six inertial sensor signals. We have found that both the spectral and time domain indicators worked well only for the signals whose dominant frequency is the fundamental frequency, when the period was the sum of a left and a right hand stroke. Although it sounds likely that the steadiness of the motion has a primary role in the paddling quality, many open questions may arise. The following interesting questions and problems can be identified:

- There can be other temporal parameters or spectral methods, indicators that can indicate the performance even better, so more detailed analysis could be useful.
- The indicators discussed above were tested using classification of the technical skills. However the actual performance of the athlete depends on many factors. This is exceptionally important in order to determine how reliably the indicators can be used for certain cases, to determine what kind of data processing is needed. There can be several problems, subjective elements about this.
- It is one of the most exciting questions what the sources of the noise found in the paddling periodicity and strength are. It can depend on mechanical effects (movement of the kayak and of the human body, dissipation), learned technical skills and even mental condition.
- It is well known that mental condition can be a significant factor of excellent performance. However it is not clear how it can affect the noise level, SNR or other indicators.
- The above mentioned open problem can be a starting point of neurology related experiments and analysis.
- We have tested a few methods of separating signal and noise; however it is not straightforward. It is not easy to define what should be considered as noise, what the sources of noise are and whether they can be separated from each other. Slow (possibly randomly changing) drifts, lower and higher frequency components can appear easily.
- Although noise analysis is used in several diagnostic applications, it is not yet clear how this fluctuation analysis can be related to other periodic motions and especially to other sport fields including running, swimming, cycling.
- Simulations can be very useful to know more about the processes, to find out the sources of noise, to test possible indicators and evaluation algorithms. It can also be important to support development of theoretical models and to verify their compliance.
- Since smart phones, smart watches have more and more integrated sensors including inertial sensors, it seems to be possible to implement noise analysis algorithms also to evaluate health indicators, and use these during commercial sport exercises.

Future research may focus on answering these questions and the methods can have potential applications in many other fields as well.
241639650
s2orc/train
v2
2021-10-15T16:09:11.577Z
2021-01-01T00:00:00.000Z
Flotation solution influence on the quantification of Ascaris suum eggs in pig feces using McMaster technique

Introduction

Parasitic diseases can cause economic losses for the swine production industry. Parasites can damage blood vessels and internal organs (including the intestinal epithelium), resulting in malabsorption and decreased assimilation of nutrients, reduction in average daily gain (ADG), reproductive failure, failures in vaccine response, increases in the fattening period, decreases in meat quality and economic losses due to condemnation of affected organs (Fausto et al., 2015; Knecht et al., 2011). Monitoring the parasite load is an essential method of maintaining livestock health. Stool ova and parasite analyses to detect the presence of eggs, larvae, cysts and oocysts are widely used in veterinary parasitology. Among these analyses, flotation techniques such as the McMaster technique, used to count the eggs present in feces, are the most routine method for detecting and quantifying the parasitic load in different animal species (Foreyt, 2005). The McMaster technique has the advantage of being practical and delivering results quickly. However, many variations have been described in the literature, and therefore standardization is needed (Cringoli et al., 2004). This standardization is important to avoid false negative results, which could lead to the neglect of parasitic problems in animal production. The aim of this study was to evaluate different saturated solutions prepared from four different salts (NaCl, MgSO4, NaNO3, ZnSO4), a sugar solution and the use of Polysorbate Tween 80 for parasite detection using the McMaster technique, in feces of pigs from commercial farms.

Materials and methods

Solutions of sugar (C12H22O11), sodium chloride (NaCl), sodium nitrate (NaNO3), magnesium sulfate (MgSO4) and zinc sulfate (ZnSO4) were prepared following the protocol described in Foreyt (2005). The viscosity of the solutions was measured at 25 °C, based on a 100 s-1 shear rate, using a concentric cylinder Searle-type rheometer (Brookfield model R/S Plus). Polysorbate Tween 80 at 0.2% was added to the solution that had the greatest recovery of eggs by fecal flotation, according to the protocol described by Santarém et al. (2009). The sugar solution was composed of C12H22O11, 454 g + H2O, 355 mL, with a specific gravity of 1.27 and a viscosity of 3.42 mPa·s. The sodium chloride solution was composed of NaCl, 400 g + H2O, 1000 mL, with a specific gravity of 1.2 and a viscosity of 2.14 mPa·s. The sodium nitrate solution was composed of NaNO3, 400 g + H2O, 1000 mL, with a specific gravity of 1.2 and a viscosity of 1.37 mPa·s. The zinc sulfate solution was composed of ZnSO4, 371 g + H2O, 1000 mL, with a specific gravity of 1.18 and a viscosity of 1.92 mPa·s. The magnesium sulfate solution was composed of MgSO4, 400 g + H2O, 1000 mL, with a specific gravity of 1.2 and a viscosity of 3.65 mPa·s. Feces were directly collected from the rectal ampoule of pigs.
Samples were collected from both parasite-free pigs and naturally infected pigs, which were previously checked by the eggs per gram of feces (EPG) technique, conducted in a McMaster chamber according to the following methodology: dilution of 2 g of feces in 29 mL of saturated solution; homogenization of the material, followed by filtration through a sieve. Then, a mixture containing 14.5 mL of saturated solution and 14.5 mL of tap water was passed through the sieve. The material was homogenized with a pipette and aliquots were collected to fill the McMaster chamber. Ascaris suum eggs were obtained by dissection of adult female worms, which were collected from naturally infected pigs. The fresh fecal samples were inoculated with an aqueous solution containing A. suum eggs, in the proportion of 2000 fertile unembryonated eggs per gram of feces. The material was then homogenized in Griffin beakers and divided into 2 g aliquots for the assay with the different solutions. Feces of naturally infected animals were also divided into 2 g aliquots. The EPG method was performed using the previously mentioned solutions. For artificially infected feces, 15 replications were performed for each of the evaluated solutions. For naturally infected feces, 20 replications were performed for each of the evaluated solutions. Polysorbate Tween 80 at 0.2% was added to the solution that recovered the most eggs by flotation. All analyses were performed at the standard laboratory ambient temperature of 25 °C. The methods used in this study were approved by the Ethics Committee for the Use of Animals of the School of Biological and Health Sciences - CEPEUA/FACISA 025/2015-I. The data were tabulated and analyzed by ANOVA followed by Tukey's test, with a 5% significance level, using the Sigma Plot software (version 11.0).

Results

All solutions evaluated were able to float A. suum eggs. However, the type of solution significantly influenced the number of eggs counted in the McMaster chamber. In artificially infected feces, the EPG values obtained with the sodium nitrate solution were higher (p < 0.05) than those obtained with the other solutions (1533.33 ± 409.99). Magnesium sulfate and sodium chloride solutions provided intermediate EPG values (926.66 ± 321.75 and 720 ± 174.02, respectively). Lower values were obtained with zinc sulfate and sugar solutions (526.66 ± 284.01 and 480 ± 227.40, respectively). It was observed that in artificially infected feces, the results obtained with sodium nitrate and sodium chloride solutions were statistically similar (p > 0.05). The sugar, zinc sulfate, magnesium sulfate and sodium chloride solutions also did not differ significantly (Table 1). The results for naturally infected feces were similar. The EPG values found using the sodium nitrate solution were higher (p < 0.05) than those of the other solutions (3930 ± 1237.6). Sodium chloride and magnesium sulfate solutions showed intermediate EPG values (2755 ± 856.8 and 2750 ± 1100.5, respectively). Lower values were obtained with the zinc sulfate and sugar solutions (1600 ± 857.8 and 1735 ± 752.7, respectively). For naturally infected feces, the results demonstrated that the sodium chloride and magnesium sulfate solutions were not different (p > 0.05), and the zinc sulfate and sugar solutions were also equivalent (p > 0.05). However, all four solutions yielded lower results compared to the sodium nitrate solution (Table 2).
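As an aside, the eggs-per-gram conversion and the ANOVA/Tukey comparison described above can be sketched in a few lines of Python. This is only an illustration: the multiplication factor and the replicate values below are hypothetical placeholders (the factor depends on the exact dilution and chamber volume used), and the original analysis was performed in Sigma Plot.

# Illustrative sketch, not the authors' analysis: converting raw McMaster chamber
# counts to EPG and comparing flotation solutions with one-way ANOVA + Tukey's test.
# MULTIPLICATION_FACTOR and the replicate values are hypothetical placeholders.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

MULTIPLICATION_FACTOR = 50  # assumed example value; depends on dilution and chamber volume

def epg(eggs_counted: int, factor: int = MULTIPLICATION_FACTOR) -> int:
    """Eggs per gram of feces from the raw McMaster chamber count."""
    return eggs_counted * factor

# Hypothetical replicate EPG values (eggs per gram) for three of the solutions
replicates = {
    "NaNO3": [1500, 1900, 1200, 1650, 1400],
    "NaCl":  [750, 650, 800, 700, 720],
    "sugar": [450, 500, 400, 520, 480],
}

f_stat, p_value = f_oneway(*replicates.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate(list(replicates.values()))
groups = np.repeat(list(replicates.keys()), [len(v) for v in replicates.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))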
Discussion

Differences between the results obtained with the artificially and naturally infected stool may be attributed to variations in the outer egg layer, called the uterine layer, since this layer is responsible for the adherence of eggs to the fecal debris (Methanitikorn et al., 2003). These variations may have occurred because the methodology used to artificially infect the feces was based on the dissection of adult females to collect eggs that were later added to the stool. Thus, these eggs might not have received their outer layer, which is secreted by the female's uterus before oviposition (Souza et al., 2011). Comparisons between the results obtained with different flotation solutions in naturally infected swine feces are important to the practice of routine field detection. Ballweber et al. (2014) mention that there are several factors that influence stool examination techniques. Flotation solutions with high specific gravity can favor the detection of parasite eggs, but they can also be difficult to read due to the amount of debris in the preparation. Dryden et al. (2005) demonstrated that the process of centrifugation was able to increase the recovery and detection of nematode eggs in dog feces. However, Ballweber et al. (2014) mention that the centrifugation process can improve the ability to detect some parasites, but not all. Pereckiene et al. (2007), analyzing 7 modifications of the McMaster technique found in the literature, described the technique proposed by Henriksen & Aagaard (1976) as the one with the highest sensitivity for detecting A. suum eggs in swine feces. The technique evaluated by these authors uses 4 g of feces and 56 mL of flotation solution, composed of NaCl + sugar with a specific gravity of 1.27, and centrifugation of the material for 7 minutes at 1200 rpm. Pouillevet et al. (2017) evaluated 3 different solutions (sugar, sodium chloride and zinc sulfate) for the McMaster technique, analyzing mandrill stools and reporting the best results with the use of zinc sulfate. These variations reinforce the fact that many variables are involved in choosing the best stool ova and parasite diagnostic technique. The analysis of naturally infected swine feces showed that the sodium nitrate solution had the highest efficiency in floating eggs (p < 0.05). Corroborating this result, Menezes et al. (1999) demonstrated that the sodium nitrate solution is the most efficient and the most appropriate for the recovery of avian nematode and tapeworm eggs and coccidian oocysts by the McMaster technique. Guimarães et al. (2005) used the centrifugation-flotation technique with saturated sugar, sodium dichromate or sodium nitrate solutions to assess contamination with Ancylostoma sp. eggs in soil samples collected from public parks and children's play areas, and found that the three solutions presented the same efficiency in the recovery of Ancylostoma sp. eggs. Xavier et al. (2010) evaluated the influence of different saturated solutions on centrifugal flotation techniques applied to soil samples artificially contaminated with Toxocara canis eggs, and found no significant difference between the zinc sulfate and sodium nitrate solutions. However, the zinc sulfate solution showed greater sensitivity for detecting positive samples containing ≥ 10 eggs, while the sodium nitrate solution only effectively detected positive samples containing a minimum of 25 eggs. Traditionally, sodium chloride is used for egg counts in the McMaster chamber (Elsheikha & Khan, 2011).
Comparing the effect of sodium chloride with that of sodium nitrate in naturally infected feces, a difference (p < 0.05) was found in this study. In absolute values, the recovery with the sodium chloride solution was 1.42 times lower. This result can probably be attributed to viscosity, since both solutions have the same specific gravity. Schramm (2006) described viscosity as the resistance of a fluid to any irreversible change of its elements, and according to Bretas & D'Ávila (2000), the greater the viscosity of the fluid, the greater its resistance to such changes. The higher viscosity of the sodium chloride solution may therefore be responsible for an increased resistance to the flotation of eggs in this solution. Although in the present study this solution was the second most effective in the recovery of eggs, it is not recommended for flotation of A. suum eggs because it showed a low detection sensitivity, which can easily lead to false-negative results. Viscosity may also be one of the factors responsible for the low recovery of eggs observed with the magnesium sulfate solution. The solution made with this salt showed the highest viscosity and, therefore, the greatest resistance to the flotation of eggs. There was a significant difference (p < 0.05) between this solution and the sodium nitrate solution. In absolute values, the recovery efficiency of the magnesium sulfate solution in feces naturally infected with A. suum eggs was 1.42 times lower compared to the sodium nitrate solution. The viscosity of the zinc sulfate solution was higher than that of the sodium nitrate solution. Additionally, the two solutions had different specific gravities, with sodium nitrate having the higher value. According to Ruoti et al. (2000), specific gravity refers to the relationship between the density of a substance and the density of a reference material, which is usually water. When a substance of lower specific gravity is added to a solution with greater specific gravity, the substance tends to float. Therefore, the greater the specific gravity of the flotation solution used, the greater the flotation of the eggs. The zinc sulfate solution had lower specific gravity and higher viscosity, so there was a significant difference (p < 0.05) between this solution and sodium nitrate, and in absolute values its recovery rate was 2.45 times lower. Cringoli et al. (2004) used a zinc sulfate solution with a specific gravity of 1.35 for floating strongyle eggs in sheep, which resulted in low egg counts when compared to other solutions. In contrast, for floating eggs of the trematode Dicrocoelium dendriticum, increasing the specific gravity from 1.2 to 1.35 resulted in better performance in the fecal tests (Rinaldi et al., 2011). Thus, adjustments to the solutions, such as increased concentration and thus increased specific gravity, may influence the flotation rate of A. suum eggs. There was a significant difference (p < 0.05) between the sugar solution and the sodium nitrate solution; in absolute values, the sugar solution recovered 2.26 times fewer eggs than the sodium nitrate solution. According to Great Britain (1986), ascarid eggs, as well as some trematode eggs, are heavier and larger than strongyle eggs, and require solutions with higher specific gravity to float. However, other properties of the solution aside from specific gravity can interfere with flotation, such as temperature, viscosity and especially the ability to exert effects on the surface of the eggs (Quinn et al., 1980).
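The combined effect of specific gravity and viscosity discussed here can be visualized with a simple Stokes-law estimate of the rise velocity of an egg in each solution. This is only a rough sketch under strong assumptions: the egg radius and density below are illustrative values not reported in this study, and real eggs in fecal suspensions are not ideal smooth spheres.

# Rough Stokes-law estimate of the rise velocity of an egg in each flotation
# solution: v = 2 r^2 g (rho_solution - rho_egg) / (9 mu).
# Egg radius and density are ASSUMED values used only for illustration.
EGG_RADIUS_M = 30e-6    # assumed effective radius (m)
EGG_DENSITY = 1130.0    # assumed egg density (kg/m^3)
G = 9.81                # gravitational acceleration (m/s^2)

# (specific gravity, viscosity in mPa·s) as reported for the solutions in this study
solutions = {
    "sodium nitrate":    (1.20, 1.37),
    "sodium chloride":   (1.20, 2.14),
    "zinc sulfate":      (1.18, 1.92),
    "magnesium sulfate": (1.20, 3.65),
    "sugar":             (1.27, 3.42),
}

for name, (sg, visc_mpas) in solutions.items():
    rho_solution = sg * 1000.0                 # kg/m^3
    mu = visc_mpas * 1e-3                      # Pa·s
    v = 2 * EGG_RADIUS_M**2 * G * (rho_solution - EGG_DENSITY) / (9 * mu)
    print(f"{name:18s} estimated rise velocity ≈ {v * 1e6:6.1f} µm/s")

Under these assumptions the sodium nitrate solution gives the fastest rise among the salt solutions of equal specific gravity, in line with its superior recovery, while such a simplified model ignores egg-debris adhesion and surface effects, which are discussed below as decisive for the sugar solution.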
For these reasons, solutions of lower specific gravity can sometimes, in certain situations, have greater flotation capacity than solutions with higher specific gravities (Menezes et al., 1999). Pereckiene et al. (2007) reported better performance of sugar solutions with a specific gravity of 1.27 when compared to saline solutions with a specific gravity of 1.20 for floating Ascaris eggs. However, these data do not corroborate the results obtained in the present study because, despite having the higher specific gravity, the sugar solution yielded the lowest egg counts per gram of feces. The addition of Polysorbate Tween 80 at 0.2% to the sodium nitrate solution caused an increase in the viscosity of the solution, from 1.30 to 1.824 mPa·s. Besides increasing the viscosity, the sodium nitrate solution with Tween 80 at 0.2% was more efficient (p < 0.05) in floating eggs (1330 ± 447.33) than the nitrate solution without the addition of Polysorbate Tween 80 (1000 ± 316.28) (Figure 1). Ascaris eggs have a great capacity for adhesion (Massara et al., 2003) because the outermost layer consists of mucopolysaccharides and proteins (Souza et al., 2011). Capizzi & Schwartzbrod (2001) suggested that the surface of Ascaris eggs has hydrophobic characteristics. These factors may favor the binding between eggs and fecal material, hindering their recovery in the flotation solutions. Polysorbate Tween 80 is a nonionic surfactant with both a hydrophobic and a hydrophilic region (Maniasso, 2001). Due to this characteristic, it has the ability to lower the interfacial tension between two immiscible phases and to solubilize species of low solubility. Surfactants help move and disperse nonpolar particles in water (Gomes, 2010; Moura, 2009). In this case, the hydrophobic regions of the Polysorbate Tween 80 molecules can bind to the nonpolar surface of the uterine layer of the eggshell, keeping the hydrophilic regions in contact with water. Thus, by using Polysorbate Tween 80 in the flotation solution, the eggs can move with the water flow and separate more easily from the fecal debris, favoring their flotation (Methanitikorn et al., 2003). However, eggs of different helminth species have different sizes, shapes, weights and shell constitutions, and cannot simply be considered as inert floating elements in solution (Cringoli et al., 2004). Because of these differences it is not feasible to use the same fecal examination to diagnose parasites of various species. Therefore, the results of this study should not be extrapolated to eggs of other helminth species. It is important to standardize the flotation solutions used in the diagnosis of other parasites of different animal species. Only then will it be possible to optimize the techniques and improve results, combining optimal specificity and sensitivity with affordable costs.

Conclusion

Considering the convenience and accessibility of the McMaster technique, and according to the results obtained in the present study, the use of a sodium nitrate solution with Polysorbate Tween 80 added at 0.2% is indicated for the diagnosis of infection with A. suum in pigs.

Ethics statement

The methods used in this study were approved by the Ethics Committee for the Use of Animals of the School of Biological and Health Sciences - CEPEUA/FACISA 025/2015-I.
126352870
s2orc/train
v2
2019-04-22T13:11:09.636Z
2018-03-01T00:00:00.000Z
NDT of Rating Impact of Laser Padding on the Surface Layer

Corresponding author e-mail: wojciech.napadlek@wat.edu.pl

The article presents the problem of quality control of the paramagnetic material of weld overlays laser-deposited on ferro- and paramagnetic materials (steels, cast iron). To assess the quality of the paramagnetic overlay material, and also the impact of overheating, measurements of the existing magnetization and of the magnetic field distribution were used in addition to laboratory examinations on a Keyence optical microscope. Examples of test results obtained with the metal magnetic memory method are presented. It was found that magnetic methods are suitable for assessing the quality of the microstructure of the laser pad-welded surface layer (influence of heat generation, microstructure changes, chemical composition and residual stresses).

Introduction

The surface layer of machine elements is usually shaped using mechanical (geometrical and surface) machining. Modern surface engineering technologies are also used, such as ablative laser micromachining (Fig. 1) and 3D and incremental technologies such as laser welding [1][2][3]. The use of laser technology poses a challenge to existing non-destructive testing methods, for example: how to quickly monitor the quality of laser processing on ferromagnetic components?

Fig. 1. Impact of surface laser texturing, visible micro craters (Φ 100 µm, d 50 µm): a) surface discolouration caused by heat and b) changes in microstructure [3].

Selection of non-destructive testing methods

In order to control the surface layer of ferromagnetic materials and the quality of the laser treatment, the implicit relationship that exists between the chemical composition of the material, its structure, and its mechanical and physical parameters is exploited. On the basis of the literature [4][5][6][7][8][9][10][11][12] it was found that the magnetic and electrical parameters of the material are very sensitive diagnostic indicators, which is used in non-destructive magnetic testing. For initial research and atypical exploration of research problems (assessment of microstructure and stress in industrial conditions), three research methods were used:

• measurement of magnetic anomalies using the MagEye [13] portable magnetic microscope, which is based on the Faraday magneto-optical effect [14,15]. The MagEye, with an optical resolution of about 10 µm, a field range of 2 kA/m and a magnetic resolution of 50 A/m (for magneto-optic sensor type A), was developed for mobile quality inspection and management and for stray-field visualization of magnetic stripe cards (Fig. 2), magnetic encoders, welding seams, magnetic audio tapes, manipulated serial numbers, as well as dipole and multipole magnets;
• measurement of the residual magnetization of samples, using a ruler of 16 triaxial magneto-impedance magnetometers (digital compasses with a field range of ±600 µT, a resolution of 0.6 µT and a 5 mm gap between magnetometers) [5,16];
• precise measurement of the probe impedance Z_m, which combines the impedance of the probe without the influence of the test object, Z_0, and the contribution of the probe inductively coupled to the material under test, ΔZ [5,[17][18][19][20]:

Z_m = Z_0 + ΔZ = R + iX. (1)

Here the real part of the impedance is the resistance R, and the imaginary part is the reactance X.

Measurements of magnetic anomalies on a micro scale using a MagEye microscope

Results of measurements of the non-uniform magnetic field on the surface layer are illustrated in Fig. 3 and Fig. 4.
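For orientation only, the magneto-optic readout exploited by the MagEye can be summarized by the textbook Faraday rotation relation, where V is the Verdet constant of the sensor layer, d its thickness and B_∥ the magnetic field component along the light path (sensor-specific values are not quoted here):

θ_F = V · B_∥ · d.

The recorded change of polarization angle is therefore a direct measure of the local stray field.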
The change of the polarization angle of the light in the magneto-optic layer (Faraday effect) under the influence of the external magnetic field was mapped in grayscale.

3.2. Magnetometer measurements of the existing magnetic field distribution

Within the laboratory experiments, magnetometer measurements of the existing magnetic field distribution were made near ferromagnetic samples (Fe-C alloys) laser-padded (multi-run and multilayer) with Stellite Co-6 alloy powder (a paramagnetic material). A view of the sample and the measurement of the existing magnetization are shown in Fig. 5. The results of the measurements are dominated by the following influences:
• average magnetic properties of the material and the sample shape (demagnetization tensor, edge effect);
• mechanical stresses introduced into the material during the cutting of samples;
• thermal stresses in zones of overheating of the structure;
• apertures and spatial characteristics of the magnetometers (a problem omitted in the description of the metal magnetic memory method).

Measurement of surface impedance

Sample results of the surface impedance measurement of a specimen made of ferromagnetic structural steel, padded with Stellite Co-6 alloy powder, are shown in Fig. 6. The sample surface around the padding weld is covered by a layer of iron oxide products - products of surface corrosion and laser ablation. The average resonance frequency of the LC circuit was 3.4 MHz. The measurement result visualized in the time domain maps the combined influence of:
• the substrate material,
• the paramagnetic parameters of the Stellite powder and the areas of laser surface cleaning (clear zone LO),
• edge effects,
• the scan speed.
Based on the electromagnetic research, the possibility of reliable control of the surface layer parameters and of the influence of the laser processing parameters was confirmed. Measurements should be made with a scanning head, due to the very strong influence on the measurement result of the coil distance from the surface to be tested. A change of this distance by even single micrometers already affects the measurement result.

Comments

Preliminary research has shown that the magnetic field measured near the test piece may be a reliable carrier of information about the quality of the surface layer of the material and the local thermal stresses introduced into the material by laser treatment. It is necessary to perform a comparative study of the microstructure to develop diagnostic criteria. Based on laboratory tests, it was found that:
• The idea of digital recording and visualization of the magnetic field distribution using magneto-optic layers and CMOS optical converters, with high resolution, is particularly promising for NDE applications used to control the quality of the surface layer and the laser machining process (including spot assessment of the impact of heat and micro craters). The sensitivity of magneto-optic layer A in the tested MagEye microscope was insufficient (SNR 12 dB) to identify microstructure changes without material demagnetization. The authors recommend, however, magnetization of the test surface and differential analysis, to minimize the negative impact of simplifications in the optical path of the MagEye microscope, as well as to improve the reliability of the test results.
• Three-axis magnetometers provide quantitative and qualitative information about the distribution of the magnetic field near the examined element. Cheap digital compasses provide reliable test results and can be used for quality control of the surface layer, as well as for non-destructive testing.
• Impedance measurements provide reliable information about the magnetic properties (magnetic permeability) as well as the electrical properties (conductivity) of the surface layer of the material. For measurements with proper spatial resolution, it is necessary to optimize the geometry of the coil, its distance from the surface to be measured, and the scanning speed (a schematic numerical illustration of the impedance decomposition of Eq. (1) is given after the summary).

Summary

Modern metrological possibilities and the dynamic development of microelectronics open new opportunities for magnetic and electromagnetic surveys in the area of quality control, non-destructive testing and monitoring of the technical condition of critical structural elements. The next stage of research will be the integration and optimization of NDE methods.
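As a small numerical illustration of the impedance decomposition in Eq. (1), the sketch below separates a measured probe impedance into resistance and reactance and extracts the contribution coupled in from the material under test. The values are hypothetical and are not measurement data from this work.

# Illustrative sketch (hypothetical numbers): decomposing a measured probe
# impedance into resistance R and reactance X, and extracting the contribution
# dZ inductively coupled in from the material under test, cf. Eq. (1).
import cmath

Z0 = complex(12.0, 85.0)   # assumed probe impedance in air (ohms): R0 + jX0
Zm = complex(14.5, 79.0)   # assumed impedance with the probe over the padded layer

dZ = Zm - Z0               # contribution of the inductively coupled material
R, X = Zm.real, Zm.imag    # resistance and reactance of the measured impedance

print(f"R = {R:.2f} ohm, X = {X:.2f} ohm, |Z| = {abs(Zm):.2f} ohm")
print(f"dZ = {dZ.real:+.2f} {dZ.imag:+.2f}j ohm, phase shift = "
      f"{cmath.phase(Zm) - cmath.phase(Z0):+.4f} rad")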
216431060
s2orc/train
v2
2020-04-02T10:37:10.762Z
2020-03-20T00:00:00.000Z
Enzyme Activities at Varied Soil Organic Carbon Gradients under Different Land Use Systems of Hassan District in Karnataka, India

1 Department of Soil Science and Agricultural Chemistry, Agriculture College, Bheemarayanagudi-585287, Karnataka, India
2 Department of Soil Science and Agricultural Chemistry, UHS, Bagalkot-587102, Karnataka, India
3 Department of Agricultural Microbiology, UAS, GKVK, Bengaluru-560065, Karnataka, India
4 Department of Soil Science and Agricultural Chemistry, College of Agriculture, Hassan-573225, Karnataka, India

Most of the biological processes in any soil proceed through enzyme-regulated processes. Activities of soil enzymes indicate soil biological health and have a significant impact on soil fertility improvement. Soil samples from major land use systems, viz. forests (both natural and manmade), coffee, mulberry, coconut, vegetable, potato and paddy, were analyzed for soil organic carbon (SOC) and categorized as low (< 0.5%), medium (0.5-0.75%) and high (> 0.75%) SOC soils. The same soil samples were analyzed for biological properties, i.e., soil enzyme activities, for each category. Dehydrogenase and urease activities were observed to be higher in soils with higher organic matter status, with the trend low SOC < medium SOC < high SOC. A similar trend was observed for acid and alkaline phosphatase activities.

Introduction

Soil organic matter is a source of essential plant nutrients and acts as a source of food for soil organisms (Woomer et al., 1994; Tan, 2010). The energy requirements of micro- and macro-organisms, other than autotrophs and chemotrophs, present in the soil are largely met by the organic matter added and the native soil organic matter (Monisa and Tahir, 2018). Soil enzyme activity estimates are often used as indices of microbial activity and soil fertility (Vaughan and Malcolm, 1985; Gianfreda and Bollag, 1996; Ranjith et al., 2015). Given this importance of soil organic matter in the maintenance of soil biological status, samples from different soil organic carbon (SOC) categories were analyzed for soil enzyme activities to determine the influence of SOC.

Materials and Methods

Fifteen surface soil samples (0-15 cm depth) from different land use systems, viz. forests (both natural and manmade), coffee, mulberry, coconut, vegetable, potato and paddy, in Hassan district (Karnataka) were analyzed for soil organic carbon and categorized as low (< 0.5%), medium (0.5-0.75%) and high (> 0.75%) SOC soils. Then three samples each from the low, medium and high SOC categories were analyzed for soil biological properties, i.e., enzyme activities, following standard procedures, viz. dehydrogenase (Casida et al., 1964), urease (Watts and Crisp, 1954) and phosphatase (acid and alkaline) activities (Tabatabai and Bremner, 1969).

Results and Discussion

All biochemical activities in soil proceed through enzyme-regulated processes and thus soil enzyme activities can also be used as an index of soil quality. The data pertaining to enzyme activities are presented in Tables 1 and 2.

Dehydrogenase and urease activity

The dehydrogenase activity, an index of biological activity, was measured by quantifying the red-colored TPF formed from TTC reduction. The quantity of TPF formed ranged from 22.4 µg g-1 soil 24 h-1 in potato soils with low SOC to 36.5 µg g-1 soil 24 h-1 in coffee plantations with high SOC, indicating the lowest dehydrogenase activity in potato soils and the highest in coffee soils.
Agricultural land use systems with high organic matter inputs such as coffee and mulberry recorded higher dehydrogenase activities (Fig. 1). Urease activity, as expressed by the quantity of urea hydrolyzed, ranged from 56.6 to 76.8 µg g -1 h -1 . Highest urease activity was recorded in coffee plantations (76.8 µg urea hydrolyzed g -1 h -1 ) while, potato plots were seen with least urease activity (56.6 µg urea hydrolyzed g -1 h -1 ). Urease enzyme was more active in high biomass turnover systems such as coffee and mulberry plantations. Acid and alkaline phosphatase activity Acid and alkaline phosphatase enzyme activities were measured by quantifying yellow colored p-Nitrophenol (PNP) compound formed from p-Nitrophenol phosphate (PNP-P). The acid phosphatase activity was found higher (37.5 to 43.4 µg PNP g -1 h -1 ) in natural and manmade forests and coffee plantations. However, agricultural systems recorded lower acid phosphatase values (31.3 to 39.1 µg PNP g -1 h -1 ). The alkaline phosphatase activity was also similar to that of acid phosphatase. However, mulberry soils recorded higher alkaline phosphatase activities among agricultural systems. Dehydrogenase and urease activity The dehydrogenase enzyme activity was higher in tree based land use systems while, its activity was found less in agricultural systems. Highest activity was observed in coffee soils with high SOC, while it was least in soils of coconut plantations and vegetable fields. Similar observations on higher dehydrogenase activity in grasslands and forests are reported by Tiwari and others (1988). Among agricultural systems, mulberry soils recorded higher dehydrogenase activity. Many authors have reported similar observations of higher dehydrogenase activity in forest soils (Ajwa et al., 1998;Nagaraja et al., 2018) and lesser activity in agricultural soils (Nagaraja et al., 1997;Vidya et al., 2001;Rajeev et. al., 2015). In similar situations, variations in dehydrogenase activities were observed and they were found related to soil organic-C and soil microbial biomass (Martens et al., 1992;Ranjith et al., 2015;Nagaraja et al., 2018). Thus, the addition of organic matter is important in maintaining higher dehydrogenase activities. Urease activity in different land use systems indicated that the application of organic matter is important in maintaining its activity. Variations in urease activity may be related to vegetation types, quantity of organic residues added and the soil organic matter content (Pacholy and Rice, 1973). Production of higher amounts of urea based compounds in forest soils and addition of nitrogen fertilizers in mulberry and coffee soils might have enhanced urease activity. In other words, the agricultural systems receiving nitrogenous fertilizers with sufficient amounts of organic manures are likely to maintain higher urease activities (Singaram and Kamalakumari, 1995). Similar results of higher urease activity are reported in forests (Vinutha, 2005) and agricultural soils (Siddaramappa and Rao, 1971). Both urease and dehydrogenase were found higher in soils with high SOC contents and they declined with decrease in soil organic matter content. Thus, the maintenance of soil organic matter appears to be very important for both the enzymes. 
Fig. 1. Dehydrogenase, urease, acid and alkaline phosphatase activities in low, medium and high SOC soils of different land use systems.

Acid and alkaline phosphatase activity

The phosphatase activities (both acid and alkaline) were found to be higher in forests (natural and manmade) and coffee plantations. This can be attributed to the higher soil organic matter content and the associated microbial biomass activity (Rao et al., 1995). The agricultural systems generally recorded lower acid phosphatase activity (Nagaraja et al., 1997). The alkaline phosphatase activities were also higher in forests and coffee plantations, while they were lower in agricultural systems. However, mulberry recorded higher activities of both acid and alkaline phosphatase enzymes. The alkaline condition of mulberry soils might have induced alkaline phosphatase activity. The higher enzyme activity in soils with high SOC (in all land use systems) and in soils with high biomass turnover suggests that soil organic matter management is important (Vinutha, 2005). It was observed that soils with high organic carbon recorded higher activities of all four enzymes compared to soils with low organic carbon; thus, the enzyme activities among soils with different levels of SOC were in the order high SOC > medium SOC > low SOC. Tree-based land use systems with high biomass turnover recorded higher enzyme (dehydrogenase, urease and phosphatase) activities. In contrast, the agricultural systems supplemented with decomposed forms of organic manures recorded lower enzyme activities. This shows the importance of soil organic matter serving as a source of energy for soil biological activities.
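As a compact illustration of the SOC categorization and the per-category averaging used in this study, the following sketch groups samples into the low/medium/high classes defined above; the numbers are hypothetical examples, not the study data.

# Illustrative sketch (hypothetical values): grouping samples into the SOC
# categories used in this study and summarizing enzyme activity per category.
from statistics import mean

def soc_category(soc_percent: float) -> str:
    """Categorize soil organic carbon as low (<0.5%), medium (0.5-0.75%) or high (>0.75%)."""
    if soc_percent < 0.5:
        return "low"
    elif soc_percent <= 0.75:
        return "medium"
    return "high"

# (land use, SOC %, dehydrogenase activity in µg TPF g-1 soil 24 h-1) - made-up examples
samples = [
    ("potato", 0.42, 22.4), ("paddy", 0.55, 27.1), ("mulberry", 0.70, 30.3),
    ("coffee", 0.95, 36.5), ("forest", 1.10, 35.2), ("coconut", 0.48, 24.0),
]

by_category = {}
for land_use, soc, dha in samples:
    by_category.setdefault(soc_category(soc), []).append(dha)

for cat in ("low", "medium", "high"):
    values = by_category.get(cat, [])
    if values:
        print(f"{cat:6s} SOC: mean dehydrogenase activity = {mean(values):.1f}")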
7741670
s2orc/train
v2
2009-02-27T13:36:45.000Z
2009-02-27T00:00:00.000Z
Transport coefficients and resonances for a meson gas in Chiral Perturbation Theory

a Electronic address: danfer@fis.ucm.es
b Electronic address: gomez@fis.ucm.es

We present recent results on a systematic method to calculate transport coefficients for a meson gas (in particular, we analyze a pion gas) at low temperatures in the context of Chiral Perturbation Theory (ChPT). Our method is based on the study of Feynman diagrams taking into account collisions in the plasma by means of the non-zero particle width. This implies a modification of the standard ChPT power counting scheme. We discuss the importance of unitarity, which allows for an accurate high energy description of scattering amplitudes, generating dynamically the $\rho (770)$ and $f_0(600)$ mesons. Our results are compatible with analyses of kinetic theory, both in the non-relativistic very low-$T$ regime and near the transition. We show the behavior with temperature of the electrical and thermal conductivities as well as of the shear and bulk viscosities. We obtain that bulk viscosity is negligible against shear viscosity, except near the chiral phase transition where the conformal anomaly might induce larger bulk effects. Different asymptotic limits for transport coefficients, large-$N_c$ scaling and some applications to heavy-ion collisions are studied.

Introduction

The analysis of transport properties within the Heavy-Ion Collision program has become a very interesting topic, with many phenomenological and theoretical implications. Transport coefficients provide the response of the system to thermodynamic forces that take it out of equilibrium. In the linear approximation, energy and momentum transport is encoded in the viscosity coefficients (shear and bulk), whereas charge and heat conduction produce the electrical and thermal conductivities, respectively. A prominent example of physical applications to collisions of heavy ions is found in the viscosities. Although the matter produced after thermalization behaves as a nearly perfect fluid [1], there are measurable deviations, which are seen mainly in elliptic flow and can be reasonably explained with a small shear viscosity over entropy density ratio [2]. In these analyses bulk viscosity is customarily neglected, based on several theoretical studies. However, it has been recently noted [3,4] that the bulk viscosity might be larger than expected near the QCD phase transition, by the effect of the conformal anomaly. On the other hand, the shear viscosity over entropy density is believed to have a minimum in that region. In that case, i.e., if the two viscosity coefficients are comparable at the temperatures of interest, there are several physical consequences such as radial flow suppression, modifications of the hadronization mechanism [4], or clustering at freeze-out [5]. Lattice analyses of transport coefficients are cumbersome, since they involve the zero-momentum and energy limit of spectral functions [6,7,8,9], and these calculations are still not conclusive. It is therefore very interesting and useful to consider regimes accessible to theoretical analysis in order to provide complementary information about transport coefficients. The theoretical approach to transport coefficients has been traditionally carried out within two frameworks: kinetic theory and the diagrammatic approach (Linear Response Theory).
The kinetic theory approach involves linearized Boltzmann-like equations and has been successfully applied in high temperature QCD [10] and in the meson sector [11,12], while the diagrammatic method has been developed for high-T scalar and gauge theories [13,14]. In both formalisms, it is crucial to include accurately the collisional width, identifying the dominant scattering processes in the plasma. In the diagrammatic framework, we have recently studied transport coefficients within Chiral Perturbation Theory [15,16,17,18], pertinent for describing the meson sector at low energies and temperatures below the chiral phase transition [19]. Our analysis shows that in order to include properly the effects of the thermal width, the standard rules of Chi-ral Perturbation Theory have to be modified for this type of calculation. In this work, we will present a detailed update of our formalism and main results, paying a more detailed attention to several aspects of formal and phenomenological interest. In particular, we provide a thorough derivation of the relevant formulae, emphasizing the link between Kubo's formulae and the definition of transport coefficients through thermodynamic forces and fluxes. Our ChPT diagrammatic formalism is reviewed, describing the different contributions in the modified power counting and discussing specially the role played by the light resonances, which we generate dynamically by unitarizing with the Inverse Amplitude Method. Detailed results for all transport coefficients are given, underlying the connection with existing phenomenological and theoretical calculations, including the predictions of non-relativistic kinetic theory, which we meet in the very low-T regime. We will also discuss the large-N c behavior, which will provide an interesting check of some of the results obtained. Chiral Perturbation Theory In order to describe the dynamics of the light mesons (pions, kaons and the eta) we will use Chiral Perturbation Theory, which is an effective field theory of QCD for the low-energy regime [20]. It is based on the spontaneous symmetry breaking of the chiral symmetry of the QCD lagrangian (for massless quarks): where the axial generators of the chiral algebra are broken leaving only the vector ones. As the result of this symmetry breaking there must appear a number of massless Goldstone bosons equal to the number of broken generators, and they are physically identified with the pions, kaons and eta. In order to construct an effective lagrangian it is necessary to obtain the transformation rules of the Goldstone bosons, φ a , under the original chiral group. It can be shown [20] that they transform non-linearly, so if we use the exponential parameterization for the Goldstone bosons we have: where R ∈ SU (N f ), L ∈ SU (N f ), λ a are proportional to the broken generators, and F 0 coincides with the Goldstone boson decay constant to lowest order in the chiral expansion (see below and [20]). This non-linear transformation on U (x) implies the following transformations for the Goldstone bosons separately under the vector and axial charges: where g ab (φ) is some non-linear function. So we see that under the unbroken group the Goldstone bosons transform linearly but they do it non-linearly under the axial charges corresponding to the broken generators. 
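For reference, the exponential parameterization and the non-linear transformation rule referred to above take the standard form (the ordering of L and R in the transformation depends on the convention adopted):

\[
U(x) = \exp\!\left(\frac{i\,\lambda_a\,\phi_a(x)}{F_0}\right), \qquad
U(x) \;\longrightarrow\; L\,U(x)\,R^\dagger, \qquad L \in SU(N_f)_L,\; R \in SU(N_f)_R .
\]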
Once one knows the transformation rules for the Goldstone bosons it is possible to construct a lagrangian which describes their dynamics as the most general expansion in terms of derivatives of the U (x) field that respects all the symmetries of QCD: where the subindex indicates the number of derivatives of the field U (x). In practice, we will deal only with L 2 and L 4 , given explicitly by the expressions [19] (for the N f = 2 case): and where M 2 0 ≡ 2B 0 m (m ≡ m u m d is the quark mass) coincides with the mass of the pion squared to lowest order. The coupling constants F 0 , B 0 , l i and h i are called the low-energy constants, and are energy-and temperatureindependent by construction. In order to deal with this infinite lagrangian we need a way of estimating the contribution from each Feynman diagram of interest, because we do not have an explicit coupling constant. Given a particular scattering amplitude M(m q , p i ), where m q is the mass of the quarks (we will consider m ≡ m u = m d ) and p i the meson external momenta, the dimension D of the diagram is defined by rescaling these parameters in the following way: Then, the dimension of a particular diagram can be easily computed (Weinberg's Theorem): where L is the number of loops in the diagram, and N n the number of vertices coming from the lagrangian L n . This dimension so defined actually tells us that the contribution from a given diagram is O((p/Λ) D ), where p represents an energy, momentum, meson mass or temperature. The scale Λ will be of order Λ E ∼ 4πF π 1.2 GeV for energies, momenta or meson masses 1 , and of order Λ T ∼ 300 MeV for temperatures 2 . Therefore, the chiral expansion will be more reliable as we go down in energies and temperatures. 3 Transport coefficients For a system out of equilibrium there exist thermodynamic forces (gradients of the temperature, the hydrodynamical velocity or the particle density) and fluxes, where the latter try to smooth-out the uniformities produced by the former in order to restore the equilibrium state of the system. Transport coefficients are defined as the coefficients for a series expansion of the fluxes in terms of the thermodynamic forces. Here we will deal with the transport coefficients corresponding to a linear expansion, specifically the shear and bulk viscosities, as well as the thermal and DC conductivities. According to relativistic fluid mechanics [22], the energy-momentum tensor (flux of four-momentum) of a fluid can be decomposed into a reversible and an irreversible part: where with the energy density, P the hydrostatic pressure, U µ the hydrodynamic velocity defined as the time-like fourvector that verifies is a projector, I µ q is the heat flow (difference between the energy flow and the flow of enthalpy carried by the particles), h is the enthalpy or heat function per particle, given by h = ( + P )/n, N µ is the conserved current (in case there is some in the system), and Π µν is called the viscous pressure tensor defined as the irreversible part of the pressure tensor: The flows I µ q and Π µν can be written as: where we have split Π µν into a traceless part,Π µν , and a remainder (Π = −Π µ µ /3). If we now define the thermodynamical forces as: with T being the temperature and ∇ µ ≡ ∆ µν ∂ ν , then the shear viscosity, η, bulk viscosity, ζ, and thermal conductivity, κ, are defined through the relations: At this point it is convenient to pass to the local rest frame (LRF) and recover the more familiar expressions for transport coefficients. 
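As a schematic reminder, and up to sign and index conventions that may differ from those adopted here (the heat flow is shown in its non-relativistic Fourier-law limit), the linear constitutive relations defining these coefficients take the familiar Navier–Stokes/Fourier form in the local rest frame:

\[
\tilde{\Pi}_{ij} \;\simeq\; -\eta\left(\partial_i U_j + \partial_j U_i - \tfrac{2}{3}\,\delta_{ij}\,\partial_k U_k\right) - \zeta\,\delta_{ij}\,\partial_k U_k,
\qquad
\tilde{q}_i \;\simeq\; -\kappa\,\partial_i T .
\]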
In the LRF (we will denote by a tilde the quantities evaluated in this particular frame of reference),Ũ µ = (1, 0),Π 00 = 0 (from the definition), and then we have: And for the case of the thermal conductivity: Now, using the relativistic Gibbs-Duhem relation we can rewrite (21) as Therefore we see that for a system without any conserved current (besides the energy-momentum tensor) the thermal conductivity is zero [23]. It is also convenient to explicitly write the expression of the thermal conductivity for both the Landau's and Eckart's choices of the hydrodynamical velocity [22]: In the non-relativistic limit, we can neglect the second term in the right hand side of (21) and we recover, for the Eckart's choice, the Fourier's law T 0i (E) = −κ∂ i T . It is important to remark here that these Landau's and Eckart's conventions apply to macroscopic averages of the currents T µν and N µ over a fluid element, and not to the microscopic currents themselves. The microscopic quantities will be relevant in the next section, where we obtain the expressions for transport coefficients in Linear Response Theory. Finally, for the DC conductivity, an electric current is induced in the gas by an external electric field which is constant in space and time, J i = σ i j E j ext . In general, the DC conductivity will be a tensor, but we will consider here the isotropic case, so σ ij = σg ij . Kubo's formulae for transport coefficients Let be a system described by a hamiltonianĤ 0 (independent of time) to which we add a perturbationV (t), such thatV (t) = 0 for t ≤ 0. Linear Response Theory (LRT) consists in taking into account only the linear effects produced by the perturbation on the magnitudes of the system. For this to be a good approximation is necessary thatV (t) be small in the sense that the eigenvalues E α of H(t) ≡Ĥ 0 +V (t) and the eigenvalues E Then, it can be shown [24] that if O(t) is a certain operator in the Schrödinger's picture, the variation of the mean value of the operator produced when the perturbation is introduced is given (to linear order in the perturbation) by: where |Ψ (0) (0) represents the state of the system at t = 0, (Heisenberg's picture). The result (28) is also valid if instead of mean values we deal with thermal averages · T and then we want to study small deviations from thermal equilibrium. Note that in this case, according to (28), we calculate these deviations by evaluating the expectation value of the commutator at equilibrium. Now, by applying some small perturbation to our system in order to take it slightly out of equilibrium and using LRT, we can obtain the expressions for transport coefficients in terms of correlators. We start considering the DC conductivity. We perturb the system by coupling it to an external classical electromagnetic field: Therefore, the induced current is where And therefore, in momentum space, Since the DC conductivity corresponds to the action of a constant electric field, and assuming spatial isotropy, we finally have: where ρ σ = 2 Im i(G R ) i i is the spectral function of the current-current correlator. The order in which the limit is taken is important, since the opposite would correspond to a field constant in time and slightly not constant in space, what would produce a rearrangement of the static charges giving a vanishing electric current. 
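Schematically, and leaving aside the overall normalization fixed by the precise definition of ρ_σ, the Kubo formula just referred to has the structure

\[
\sigma \;\propto\; \lim_{\omega\to 0^+}\;\lim_{|\mathbf q|\to 0}\;\frac{\rho_\sigma(\omega,\mathbf q)}{\omega}\,,
\]

and the viscosities and the thermal conductivity discussed below are obtained from the analogous zero-frequency limits of their corresponding spectral functions.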
This is called a Kubo formula for the DC conductivity, and we can express it in another useful form in terms of the Wightman function G > by using the KMS relation G < (p) = e −βp 0 G > (p) and ρ = G > − G < [13,24]: where β ≡ 1/T , and we implicitly assume thermal averages. As we show below, in the perturbative evaluation of transport coefficients the spatial momentum can be taken equal to zero from the beginning. Turning to viscosities, as we have seen, they are related to gradients of the hydrodynamical velocity in the fluid. Since we will evaluate the correlators at thermal equilibrium, we can choose a global reference frame that is at rest with the fluid. We will give here a simple derivation of the Kubo formulae for the viscosities and the thermal conductivity (for a more rigorous discussion see [25]). By performing a boost that depends on the point, we can simulate gradients of the velocity, so the fluid velocity around some point x 0 becomes U i (x) x j ∂ j U i (x 0 ). Then, this boost implies the change in the energy density δH = −U · p where p is the density of momentum in the fluid. Therefore, this corresponds to the perturbation in the hamiltonian density: Under this perturbation, the variation in the expectation value of the energy-momentum tensor is where we have integrated by parts and used ∂ µ T µν = 0. If we now particularize for the case ∂ k U k = 0, and compare with the expression (20), we then obtain: In momentum space, we can write it as with and π ij ≡ T ij − g ij T k k /3. In order to obtain the bulk viscosity we consider instead ∂ i U j = (1/3)δ ij ∂ k U k , and we have: with P ≡ −T k k /3. And comparing with (20), we get: In momentum space we can express it as with We now derive the Kubo expression for the thermal conductivity. As we have seen, it is necessary to have some conserved current in the system (besides T µν ) for the thermal conductivity to be non-zero. According to Eq. (23), heat conduction can be produced by a gradient in the chemical potential. In order to create such a gradient, we couple an external gauge field A µ ext to the conserved current N µ (A 0 ext plays the role of an effective chemical potential), so the perturbation in the hamiltonian density is: By choosing A i ext = 0, integrating by parts, and using ∂ µ N µ = 0 we obtain: with T i ≡ T 0i − hN i . Thus, by comparing with eq. (23), taking ∂ j A 0 ext constant, and assuming spatial isotropy, we have that the Kubo formula for κ is: with We here have used the following property of Wightman functions involving a conserved current N µ (so ∂ µ N µ = 0): for any operatorÔ(x), and frequency ω = 0, it is verified From the expression for ρ κ we explicitly see that if there is no conserved current in the system (besides T µν ), then the thermal conductivity is zero. In other more rigorous derivations of the Kubo formulas [25,26,27], where they do not use energy-momentum conservation in the correlators, the expression of the bulk viscosity involves the operator where c s is the speed of sound in the plasma and µ the chemical potential. Because of the property (49), this operator will give the same result for the bulk viscosity if it is calculated exactly (non-perturbatively). 
However, in our calculations we will use (50) and T i ≡ T 0i − hN i (instead of only −hN i ) to obtain the bulk viscosity and the thermal conductivity, because we are interested in a perturbative calculation using propagators with a non-zero width, so the conservation of the energy-momentum tensor in correlators will not be exact within the level of approximation we will use. As we will see, the extra c 2 s term in (50) will be relevant in our approach. Transport coefficients in high-temperature quantum field theory At this point it is convenient to review what happens when one calculates transport coefficients in high-temperature field theories [10,13]. In these theories it turns out that in order to obtain the leading-order result for transport coefficients a resummation of diagrams is necessary. As we have seen, a transport coefficient is given in LRT by taking the limit when the external momentum goes to zero of the imaginary part (spectral density) of some correlator. This process of taking the limit of zero external momentum implies the appearance of the so called pinching poles, which are products of retarded and advanced propagators sharing the same four-momentum: where Γ p is the particle width (inverse of the collision time in the plasma) and E p the particle energy. A pinching pole would correspond to the contribution from two lines in a diagram which share the same four-momentum when the external frequency is zero. For a λφ 4 theory at high temperature, Γ ∼ O(λ 2 T ), so ladder diagrams as the one depicted in Figure 1 all count the same, O(1/λ 2 ), in the coupling constant and have to be resummed. Another kind of diagrams, bubble diagrams ( Fig. 2), in principle would give the dominant contribution, increasing with the number of bubbles according to the counting scheme given above, so they would be naively of order O((1/λ 2 ) n λ n−1 ) = O(1/λ n+1 ). But after some analysis [13] it can be shown that they can all be resummed giving a subdominant contribution (except for the bulk viscosity) with respect to the one-bubble diagram of Figure 9, i.e., they correspond to the graph of Figure 3 with Λ 1 = Λ (0) + O(λ) and Λ 2 = Λ (0) , with Λ (0) being the lowest-order vertex. Therefore it is interesting from the theoretical point of view to analyze what happens in ChPT, where we do not have and explicit coupling constant, in order to see whether a resummation is needed. Fig. 3. Generic representation of a ladder and/or bubble diagram in terms of two effective vertices, Λ1 and Λ2. Particle width As we have mentioned in the previous section, it is crucial to have lines dressed in the generic diagram of Fig. 3 (double lines) in order to take into account the collisions between the particles of the fluid, as dictated also by kinetic theory. If the particle width was zero it would mean that particles would propagate without interaction, implying that the corresponding transport coefficient would be infinite. We can approximate the interaction between the particles in the bath by considering the following spectral density with a non-zero width Γ p : This approximation by a Lorentzian will be valid for a small enough width. The width is generically calculated for two-body collisions by [28]: If the gas is dilute, i.e. βE 1 (dilute gas approximation, DGA), the previous expression reduces to: where σ tot is the total pion-pion scattering cross section, v i the velocity of each the two colliding pions, and v rel their relative speed. 
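Up to normalization, degeneracy and symmetry factors, which are not reproduced here, the Lorentzian ansatz and the dilute-gas width described above can be summarized schematically as

\[
\rho(\omega,\mathbf p)\;\propto\;\frac{\Gamma_p}{(\omega-E_p)^2+\Gamma_p^2}-\frac{\Gamma_p}{(\omega+E_p)^2+\Gamma_p^2},
\qquad
\Gamma_p\;\sim\;\int\!\frac{d^3k}{(2\pi)^3}\,n_B(E_k)\,v_{\rm rel}\,\sigma_{\rm tot}(s)\,,
\]

with n_B the Bose–Einstein distribution; the precise coefficients follow from cutting the diagram in Figure 4.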
Up to energies of 1 GeV it can be shown [29] that for ππ scattering only the channels IJ = 00, 11, 20 of isospin-angular momentum are relevant and then we can make the approximation: where t IJ (s) are the partial waves, so the total scattering amplitude for ππ scattering is decomposed in terms of the isospin-projected scattering amplitude, T I , and the partial waves as: and P J being Legendre polynomials. Furthermore, in the 00 and 11 channels there appear the f 0 (600) and ρ(770) resonances respectively. In order to deal properly with these resonances within ChPT we will have to unitarize our scattering amplitudes (see next section). The leading order contribution to the pion width is represented by the diagram in Figure 4. Cutting this diagram in order to extract its imaginary part (the width) leads to the formula (53) with T (s, t) being the pion-pion scattering amplitude. All the diagramatic calculation will be carried out in the Imaginary Time Formalism (ITF) [24] which has the advantage of dealing with the same fields, vertices and diagrams as the corresponding vacuum field theory but with properly modified Feynman rules. Resonances It is difficult for ChPT to deal with the resonances that appear in some of the scattering channels because the unitarity condition is not respected for high-enough energy √ s. This is because the partial waves calculated in ChPT are essentially polynomials in p (and logarithms): . In order to extend the range of applicability of the nextto-leading order results for the partial waves calculated in ChPT we will unitarize them by means of the Inverse Amplitude Method (IAM). The idea behind this method is essentially to construct an expression for the scattering amplitude which respects unitarity exactly and when expanded perturbatively matches to a given order the standard ChPT expansion. The construction of this amplitude can be justified more formally by using dispersion relations [30,31]. According to the IAM, the unitarized partial waves to order O(p 4 ) are given by: (59) Using this unitarization method, the f 0 (600) and ρ(770) resonances that appear in the pion-pion scattering channels IJ = 00, 11 respectively are correctly reproduced for some set of values of the low-energy constantsl i (an overline denotes the renormalized low-energy constant, see (5)). In addition to appearing as peaks in the scattering cross section, resonances can also be identified as poles in the scattering amplitude after continued to the second Riemann sheet (SRS). If t (I) denotes the analytical continuation of the scattering amplitude off the real axis, then the scattering amplitude on the SRS, t (II) , is defined by Im t (II) (s − i0 + ) = Im t (I) (s + i0 + ), for s > 4M 2 π . Therefore one has A resonance corresponds to a pole of t (II) in the lower half complex plane, being the position of the pole related to the mass and width of the resonance by s pole = (M R − iΓ R /2) 2 , assuming that the resonance is a narrow Breit-Wigner one, which in the case of the f 0 (600) is not a so good approximation. Since we work in the center of mass reference frame, the mass and width obtained correspond to a resonance at rest. In what follows, we will fix the low-energy constants to the valuesl 1 = −0.3,l 2 = 5. If the pion gas is dilute enough so only intermediate two-pion states are relevant in the thermal bath, then we can define thermal scattering amplitudes as those calculated like in the T = 0 case but using thermal propagators instead [32,33]. 
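For reference, elastic unitarity for the partial waves and the O(p^4) Inverse Amplitude Method amplitude take the standard form (at finite temperature the two-pion phase space σ(s) below is replaced by its thermal counterpart, as indicated next):

\[
\operatorname{Im} t_{IJ}(s) = \sigma(s)\,\big|t_{IJ}(s)\big|^2,\qquad
\sigma(s)=\sqrt{1-\frac{4M_\pi^2}{s}},\qquad
t_{IJ}^{\rm IAM}(s)=\frac{\big[t_{IJ}^{(2)}(s)\big]^2}{t_{IJ}^{(2)}(s)-t_{IJ}^{(4)}(s)}\,,
\]

where t^{(2)} and t^{(4)} denote the O(p^2) and O(p^4) ChPT partial waves, so that the chiral expansion of t^{IAM} reproduces them order by order.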
The thermal scattering amplitudes t(s, T ) (the temperature dependence starts to show up at O(s 2 )) also must verify the unitarity condition (58) but making the replacement σ(s) (thermal phase space) 3 there, and also in the expressions for the unitarized amplitudes. By considering these scattering amplitudes, we can study the evolution with temperature of the poles corresponding to the f 0 (600) and ρ(770) resonances. In Fig. 5 we plot the evolution of the σ pole with temperature, and we see that it remains broad at temperatures near the phase transition, while its mass is driven to the threshold as a possible indication of chiral symmetry restoration. Analogously, in Fig. 6 we plot the evolution of the ρ pole with temperature. We see that the mass shift is small remaining far from the threshold near the temperature of the chiral phase transition, and the width increases instead. This behavior is compatible with the dilepton data from heavy ion collisions (see details in [33,34]). We can also study the qualitative evolution of the resonance poles in presence of nuclear density, introduced effectively by varying the pion decay constant F π , because where ρ is the nuclear density, σ πN 45 MeV is the pionnucleon sigma term, and ρ 0 0.17 fm −3 is the saturation density of nuclear matter. This approach only takes into account a limited class of contributions, but we reproduce several aspects of the expected chiral symmetry restoration behavior at finite density. We have recently [34] improved the implementation of nuclear density effects for the f 0 (600) resonance by considering a microscopic calculation of many-body pion dynamics and unitarizing by solving the Bethe-Salpeter equation, obtaining in this way results qualitatively compatible with this simpler method we analyze here. In Fig. 7 we now plot the behavior of the σ pole at T = 0 for several nuclear densities (the corresponding values of F π are indicated besides each pole). We see that density effects drive the σ pole faster towards the real axis, becoming a zero-width state, but the required nuclear density is very high (for instance, F π = 55 MeV is equivalent to ρ 1.9ρ 0 ). At high enough density, when the pole crosses the threshold, there appear two separated poles on the real axis of the SRS below the threshold (virtual states), and for higher densities one of the two poles becomes a bound state (pole on the real axis in the first Riemann sheet). The virtual state which remains near the threshold and eventually becomes a bound state would correspond to a ππ molecule [34], while the other virtual state behaves like the chiral partner of the pion in the sense that it tends to become degenerate in mass with it. Analogously for the ρ(770), we see in Fig. 8 that density effects also drive it faster toward the real axis and to the threshold. In this case however, after crossing the threshold, the pole becomes a pair virtual state-bound state located at almost the same value of mass and width, indicating a clearqq nature for this resonance [34]. In the following sections we will study the influence of unitarized scattering amplitudes on transport coefficients, as well as the influence of the in-medium evolution of resonances on them. General analysis of diagrams for transport coefficients in ChPT In the analysis of transport coefficients within ChPT, analogously to what happens in high-temperature quantum field theory, we also find non-perturbative contributions, ∝ 1/Γ (and Γ = O(p 6 )), due to the presence of pinching poles. 
This would indicate that the standard ChPT power counting, dictated by Weinberg's formula (7), has to be modified in some way because otherwise, naively, diagrams with a larger number of pinching poles would become more important as the temperature is lowered. We will show that for low temperatures, ladder diagrams are the most relevant, but still perturbatively small in comparison to the leading order given by the simple diagram of Fig. 9. Again, the same topology arguments used in hightemperature theories are a priori applicable for the ChPT case, so we expect that the dominant contribution to transport coefficients come from ladder and bubble diagrams. We start by analyzing the spectral density corresponding to ladder diagrams. The spectral density of a generic diagram of the form shown in Fig. 3 can be easily calculated in ITF [14] obtaining: where C is some combinatoric factor which depends on the kind of external field we couple to the pion loop, and it can be shown [15] that when considering a non-zero width we can take the zero spatial-momentum limit from the beginning. In the case of the simple diagram without rungs of Fig. 9, and for some constant external insertions, at T M π we obtain that the spectral density behaves like lim ω→0 + ρ(ω)/ω ∼ M π /T , indicating that there could be important non-perturbative contributions from higherorder diagrams (ladder diagrams with an arbitrary number of rungs) in the low-temperature regime. In order to give a first and naive estimation of the contribution at low temperatures (T M π ) from every diagram we assign a factor Y , that we expect to be of order Y 1 ≡ M π /T for very low temperatures, to each pair of lines sharing the same four-momentum, and a factor X that we expect to be of order X 1 ≡ [M π /(4πF π )] 2 for very low temperatures, to any other "ordinary" loop (X 1 is the typical contribution from a chiral loop). Therefore, according to this new counting, the contribution from a ladder diagram with n rungs would be of order O(X n Y n+1 ), so ladder diagrams could in principle become more important as we go down in temperatures (where ChPT is expected to work better). Evidently, the contribution from the simple diagram in Fig. 9 would be of order O(Y ) instead of the O(X) estimation given by Weinberg's power counting. In order to verify this naive counting we have explicitly performed [15] the resummation of all the ladder diagrams for T M π and have found that it corresponds to multiply the lowest order result from Fig. 9 by some constant factor. This is because the contribution X from ordinary loops at very low temperatures is much smaller than naively expected, so X ∼ X 2 ≡ T /M π for T M π . Therefore the actual contribution from ladder diagrams at very low temperatures is O(X n 2 Y 1 ), so they are perturbatively suppressed when the number of rungs increases. However, although ladder diagrams give a contribution much smaller than naively expected, their actual contribution, O(p 2n ), is still much larger than the estimated by Weinberg's counting, i.e., O(p 4n ). Since in our ChPT lagrangian we also have derivative vertices, we expect that as the temperature increases, derivative vertices begin to dominate and the loop factor X increases over X 1 . Also, the pinching pole factor would eventually become of order O(1) and at temperatures near the phase transition, in principle, we would have to resum all the ladder diagrams. 
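To make the modified counting above concrete, the following sketch (the names X1, Y1, X2 are mine, matching the factors defined in the text) evaluates the naive pinching-pole factor Y1 = M_π/T, the naive chiral-loop factor X1 = [M_π/(4πF_π)]^2, and the actual low-temperature loop factor X2 = T/M_π, and shows that an n-rung ladder is suppressed relative to the leading diagram by roughly X2^n at low T:

```python
import math

M_PI, F_PI = 139.57, 93.0   # MeV (assumed standard values)

def counting_factors(T):
    """Order-of-magnitude factors entering the modified power counting."""
    Y1 = M_PI / T                              # naive pinching-pole enhancement
    X1 = (M_PI / (4 * math.pi * F_PI)) ** 2    # naive chiral-loop factor
    X2 = T / M_PI                              # actual loop factor for T << M_pi
    return Y1, X1, X2

def ladder_over_leading(T, n_rungs):
    """Relative size of an n-rung ladder versus the simple (no-rung) diagram,
    using the low-T result that the ladder contributes O(X2^n * Y1) while the
    leading diagram contributes O(Y1)."""
    _, _, X2 = counting_factors(T)
    return X2 ** n_rungs

if __name__ == "__main__":
    for T in (20.0, 50.0, 100.0):
        Y1, X1, X2 = counting_factors(T)
        print(f"T = {T:5.1f} MeV:  Y1 = {Y1:6.2f}  X1 = {X1:.4f}  X2 = {X2:.3f}")
        for n in (1, 2, 3):
            print(f"   {n}-rung ladder / leading diagram ~ {ladder_over_leading(T, n):.2e}")
```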
Regarding bubble diagrams, it can be shown [15] that they all can be resummed giving a vanishing contribution in the limit ω → 0 + , for the leading order in 1/Γ at very low temperatures. Other diagrams, like ladder or bubble diagrams with vertices coming from the lagrangians L n with n > 2, or diagrams with loops made with more than four-pions vertices would also be suppressed by the same arguments. But as we have commented before, as temperature increases Y becomes of order O(1) and the particle width is not small anymore, so a resummation of all the possible diagrams would be in principle necessary. In this paper however, we will show the results corresponding to extrapolations of the leading order at low temperatures, and we will see that these extrapolations, when unitarized, give the correct order of magnitude for some observables near the phase transition, which indicates that the high temperature improvement due to unitarization is a key feature in this approach. We also remark that in the limit T M π , exponentials like that in (54) select small three-momenta of O( √ M π T ). Therefore, in this limit it is enough to consider only the O(p 2 ) amplitudes in the cross section, which allows to perform a systematic T /M π expansion [15] for Γ p and trans-port coefficients, whose leading order results we give below. In the opposite limit we have massless pions (chiral limit) which is expected to be reached asymptotically at high temperatures. For M π = 0 and using only the O(p 2 ) amplitudes, the thermal width in (54) reduces to Γ p = 5T 4 |p|/(12π 3 F 4 π ) and closed analytic expressions for transport coefficients can be given. However, these expressions should not be trusted at temperatures close to the phase transition, where three-momenta in the integrals are now of O(T ) and therefore the low-p expansion is not justified and including unitarity corrections in the amplitude is crucial, as our results below will show. Electrical conductivity As we have seen, the DC conductivity is given in LRT by: with Using the external sources method [20], we couple an external electromagnetic field to our ChPT lagrangian and, counting the electric charge e = O(p/Λ), we calculate the lowest-order contribution to the DC conductivity, σ (0) , given by the diagram of Fig. 9. Then, using the formula (62) we obtain [15]: where e is the charge of the electron and E p ≡ M 2 π + |p| 2 . For very low temperatures, T M π , this expression adopts the simple form: It is interesting to compare our result with the expected kinetic theory (KT) behavior. According to KT [39], σ ∼ e 2 n ch τ /M π (n ch is the density of charged particles, τ is the collision mean time, and e is the particle charge), and τ ∼ 1/Γ , Γ ∼ nvσ ππ (v is the mean speed of the particles). In the non-relativistic limit, n ∼ ( √ M π T ) 3 e −Mπ/T , v ∼ T /M π , and σ ππ is a constant, therefore σ ∼ 1/ √ T . Thus, our result in ChPT is consistent with KT for T M π . In Fig. 10 we plot the lowest order contribution to the DC conductivity as a function of temperature for different choices of the scattering amplitudes that enter into the pion width. We see that unitarization (resonances) makes the conductivity grow from certain temperature. An increasing behavior for the DC conductivity is also obtained in lattice calculations [9]. The dots in the plot correspond to unitarizing the scattering amplitudes at finite temperature, as we explained in Section 6. We see that the thermal evolution of the resonances does not affect much the conductivity. 
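As a cross-check of the 1/√T behavior quoted above, the following parametric kinetic-theory estimate follows the scalings stated in the text (n ∼ (√(M_πT))^3 e^{−M_π/T}, v ∼ √(T/M_π), Γ ∼ n v σ_ππ, σ ∼ e^2 n_ch τ/M_π with τ ∼ 1/Γ); the constant cross-section and overall normalization are placeholders, so only the temperature dependence is meaningful:

```python
import math

M_PI = 139.57  # MeV

def kt_conductivity_scaling(T, sigma_pipi=1.0, e2=1.0):
    """Parametric kinetic-theory estimate (arbitrary normalization):
    sigma_el ~ e^2 n_ch tau / M_pi with tau ~ 1/(n v sigma_pipi).
    The Boltzmann factors in n and n_ch cancel, leaving sigma_el ~ 1/sqrt(T)."""
    n = math.sqrt(M_PI * T) ** 3 * math.exp(-M_PI / T)   # density, up to constants
    v = math.sqrt(T / M_PI)                              # mean speed
    gamma = n * v * sigma_pipi                           # collision rate
    n_ch = (2.0 / 3.0) * n                               # charged pions (2 of 3 isospin states)
    return e2 * n_ch / (gamma * M_PI)

if __name__ == "__main__":
    ref = kt_conductivity_scaling(10.0)
    for T in (10.0, 20.0, 40.0, 80.0):
        ratio = kt_conductivity_scaling(T) / ref
        print(f"T = {T:5.1f} MeV: sigma/sigma(10 MeV) = {ratio:.3f}  "
              f"(1/sqrt(T) scaling predicts {math.sqrt(10.0 / T):.3f})")
```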
A more significant effect on the conductivity is produced when finite nuclear density is considered effectively by reducing F π , but only at low temperatures. As a phenomenological application of this result we can relate the electrical conductivity to the soft-photon spectrum emitted by the gas of pions produced after a Heavy-Ion Collision (HIC) [15]. The rate of photons emerging from a thermal system is related to the EM current-current correlator by: Now, using the Ward identity p µ ρ µν = 0, we can relate this rate to the conductivity as: So the DC conductivity is directly related to the softphoton spectrum, i.e., photons emitted with almost zero momentum. In order to compare with experimental results we need to integrate this rate through the space-time evolution of the fireball produced after a HIC. For that purpose, we consider a simple hydrodynamical model of cylindrical symmetry in order to describe the expansion of the gas (Bjorken's model). Then, the measured rate would be given by: where we consider lead-lead collisions at SPS energies in order to compare with the WA98 experiment [35], so √ s = 158A GeV, the nuclei radius is R A 1.3 fm A 1/3 7.7 fm, the expansion rapidity is ∆η nucl = 2 acosh( √ s/(2A GeV)) 10.1, the initial proper time is τ i 3 fm/c, the final time τ f 13 fm/c, and we consider the cooling law of an ideal gas T (τ ) = T i (τ i /τ ) 1/3 with T i 170 MeV. With these parameters we obtain the estimate dN γ /d 3 p(p T = 0) 5.6 × 10 2 , that we indicate in the plot of the Fig. 11. We see that other theoretical calculations do not fit the two lowest-energy points, while our result is compatible with a linear extrapolation to the origin from these two points. For low energies, the hadronic component is expected to dominate the spectrum, and in that regime a finite width in the particle propagator may be relevant due to the Landau-Pomeranchuk-Migdal effect, which was not taken into account in those studies and would make the contribution to the spectrum finite at the origin [36]. Fig. 11. Photon spectrum obtained by the experiment WA98 [35]. We see that our estimate at the origin is compatible with a linear extrapolation from the two lowest-energy points. Thermal conductivity Although in the pion gas the only strictly conserved quantity is the energy-momentum tensor, in the energy and temperature regime we are dealing with, it is a good approximation to assume that 2 → 2 collisions are the only relevant scattering processes, which in practice means that the pion number is approximately conserved [37], yielding a nonzero thermal conductivity even when µ = 0 [38]. Therefore, in order to compare with KT we apply where with T i ≡ T i0 − hN i 4 . But now, under this assumption, thermal averages would only imply sums over states of well-defined number of particles, |N . Then, since in the diagram of Fig. 9 the energy-momentum enters to lowest order (i.e. without interaction), based on the KT theory expressions for the ideal-gas case [22]: with v i = p i /E p , we define the operatorN i through its Feynman rule for the vertex in momentum space heuristically as N i ≡ T i0 /E p (the limit of external momentum equal to zero is understood). According to this, the lowestorder contribution is then given by: As we have seen in the derivation of the Kubo formula for the thermal conductivity, here h represents the exact heat function per particle. 
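Returning to the fireball estimate above: the sketch below sets up the space-time integration over the Bjorken cylinder with the parameters quoted in the text (R_A ≈ 1.3 A^{1/3} fm, Δη ≈ 10.1, τ_i ≈ 3 fm/c, τ_f ≈ 13 fm/c, T(τ) = T_i(τ_i/τ)^{1/3}, T_i ≈ 170 MeV). The local emission rate is left as a placeholder callable, since the actual rate is fixed by the conductivity through the displayed equations, so this only illustrates the geometry of the integration:

```python
import math

# Fireball parameters quoted in the text (SPS Pb+Pb, WA98 comparison).
A = 208
R_A = 1.3 * A ** (1.0 / 3.0)       # fm, roughly 7.7 fm
DELTA_ETA = 10.1                   # expansion rapidity interval
TAU_I, TAU_F = 3.0, 13.0           # fm/c
T_I = 170.0                        # MeV

def temperature(tau):
    """Ideal-gas Bjorken cooling law T(tau) = T_i * (tau_i/tau)^(1/3)."""
    return T_I * (TAU_I / tau) ** (1.0 / 3.0)

def integrate_rate(local_rate, n_steps=2000):
    """Integrate a local emission rate, given as a function of T, over the
    cylindrical fireball: d^4x = pi R_A^2 * Delta_eta * tau dtau.
    'local_rate' is a placeholder for the conductivity-based rate in the text."""
    dtau = (TAU_F - TAU_I) / n_steps
    total = 0.0
    for i in range(n_steps):
        tau = TAU_I + (i + 0.5) * dtau
        total += local_rate(temperature(tau)) * tau * dtau
    return math.pi * R_A**2 * DELTA_ETA * total

if __name__ == "__main__":
    # Toy rate ~ T^2 in arbitrary units, used only to exercise the integration.
    print(integrate_rate(lambda T: T**2))
```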
However, we will approximate in our results h ≡ ( + P )/n = T s/n (s is the entropy density and n the density of particles) by the corresponding ideal gas expression, which we expect to be reasonable for the temperatures considered here. For very low temperatures, T M π , we have: By KT [39], κ ∼ T −1 (ē − h)lv (ē is the mean energy per particle and l ∼ 1/(σ ππ n) is the particle mean free path). In the non-relativistic limit,ē ∼ M π , h ∼ 5T /2 + M π , and then κ ∼ T 1/2 , so it is compatible with our result for low temperatures. In Fig. 12 we compare our results for κ with a KT analysis [11]. Again, unitarity changes the behavior of the transport coefficient with temperature. We now see that density effects modify significantly the thermal conductivity at high temperatures, as expected when introducing and additional conserved charge (the baryon number). Shear viscosity It is given in LRT by: Fig. 12. Lowest-order contribution to the thermal conductivity as a function of the temperature. We compare with the analysis of [11], which is based on kinetic theory. and π ij ≡ T ij − g ij T k k /3. Then, the lowest-order contribution is: For very low temperatures, T M π , we have: In Fig. 13 we compare our results for viscosities with those obtained by Prakash et al. using a KT analysis [11]. We also agree with a work by Dobado et al. [12] for the pion gas in KT. We see that nuclear density effects would only imply a significant change in the shear viscosity at low temperatures. By non-relativistic KT we expect the behavior η, ζ ∼ M π vnl, thus η, ζ ∼ √ T and both viscosities should then be of the same order at very low temperatures. Unitarity makes the quotient η/s (s is the entropy density) for the pion gas respect the bound 1/(4π) predicted by Kovtun et al. [40], as we can see in Fig. 14. Without unitarization the Uncertainty Principle would be also violated eventually, since η/s ∼ τ /n ∼ Eτ 1. Furthermore, near T c our value for η/s is not far from recent lattice and model estimates [41]. Although we do not represent it in the figure, we do obtain a behavior for η/s growing slowly with T for temperatures (unrealistic) > 550 MeV. A slowly increasing behavior is also obtained by calculations from the quark gluon plasma (QGP) phase [42], although a recent work predicts a more pronounced increase near the phase transition, in the so-called semi-QGP phase [43]. As another check, we can compute the sound attenuation length, which is given by (neglecting the contribution from the bulk viscosity) Γ s 4η/(3sT ), and is directly related to phenomenological effects such as elliptic flow or HBT radii. We get, at T = 180 MeV, the value Γ s 0.55 fm, in agreement with the estimate by Teaney [44]. In the chiral limit with only O(p 2 ), we get exactly η = 18πζ(3)F 4 π /(25T ) with the Riemann's zeta function ζ(3) 1.2. This 1/T decreasing behaviour, obtained for instance in [45], would imply that the AdS/CFT bound for η/s is violated at some point, departing also from the phenomenological estimates discussed above. This highlights the importance of reproducing correctly the high energy features of particle scattering, as we do within our unitarized approach. Bulk viscosity As we have seen, the expression for the bulk viscosity in LRT is: with where, if there is no conserved charge in the system, P ≡ −T k k /3 − c 2 s T 00 . 
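As an illustration of the closed chiral-limit expression quoted above, η = 18πζ(3)F_π^4/(25T), the following sketch compares it with the entropy density of a free gas of three massless pions (an ideal-gas stand-in of mine, not the paper's unitarized entropy) and shows the 1/T fall-off pushing η/s below 1/(4π) at some temperature, as stated in the text:

```python
import math

ZETA3 = 1.2020569   # Riemann zeta(3)
F_PI = 93.0         # MeV (assumed)

def eta_chiral_limit(T):
    """Chiral-limit O(p^2) shear viscosity quoted in the text: 18*pi*zeta(3)*F_pi^4/(25*T)."""
    return 18.0 * math.pi * ZETA3 * F_PI**4 / (25.0 * T)

def entropy_free_pions(T):
    """Entropy density of a free gas of 3 massless pions (ideal-gas approximation)."""
    g = 3
    return 2.0 * math.pi**2 * g * T**3 / 45.0

if __name__ == "__main__":
    bound = 1.0 / (4.0 * math.pi)
    for T in (120.0, 150.0, 180.0, 210.0, 240.0):
        ratio = eta_chiral_limit(T) / entropy_free_pions(T)
        flag = "above the bound" if ratio > bound else "below 1/(4*pi)"
        print(f"T = {T:5.1f} MeV: eta/s = {ratio:.3f}  ({flag})")
```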
Then, the lowest order contribution to the bulk viscosity, ζ (0) corresponding to the diagram of Figure 9 is: For very low temperatures, T M π , the leading-order contribution simplifies: In the chiral limit, M π = 0, we have the simple relation between the shear and bulk viscosities, ζ (0) = 15(1/3 − c 2 s ) 2 η (0) , in agreement (parametrically) with the result obtained in the high-temperature regime of QCD [46]. This result implies that the bulk viscosity is suppressed with respect to the shear viscosity at large temperatures, as a consequence of conformal invariance, since c 2 s = 1/3 for a free massless gas and, in fact, the bulk viscosity vanishes exactly for a conformally invariant theory. However, it also suggests that conformal breaking, as in the case of QCD through explicit and anomalous terms (see below) may induce sizable values for ζ/η, which would have interesting phenomenological consequences, since bulk viscosity is generically assumed to be negligible. This observation, supported by recent QCD analyses (see below) has led us to analyze recently in [18] the correlation between conformal invariance and the bulk viscosity in the pion gas within the unitarized ChPT context. We reproduce some of the main results of that paper here, with more detail and emphasizing some quantitative aspects of the analysis. In the lowest-order contribution (82), c s is the exact (non-perturbative) speed of sound of the pion gas. However, we can only calculate it within some approximation. In order to estimate the speed of sound we will analyze the relation between the trace anomaly and the bulk viscosity [3,4]. The scale invariance of the QCD lagrangian is broken explicitly by the quark mass and by the running of the strong coupling constant at the quantum level [47]: where s µ = T µν x ν is the dilation current, β(g) is the βfunction, γ(g) is the anomalous dimension of the quark mass, and we consider the case of two flavors with m ≡ m u = m d . At finite temperature, the average of the trace anomaly is given in terms of thermodynamical quantities, θ T ≡ T µ µ T = − 3P , and it has already been calculated on the lattice for the pure glue theory [48] as well as for QCD with almost physical quark masses [49]. In Refs. [3] and [4] it was found through a low-energy theorem of QCD, and assuming some reasonable ansatz for the spectral function, a relation between the trace anomaly and the bulk viscosity. For instance, in the chiral limit, this relation reads [3]: where (·) * ≡ (·) T − (·) 0 is what is measured on the lattice, and | v | is the energy density of the vacuum. The spectral density ρ θθ involves the correlator of the trace of the energy-momentum tensor, and it is related to the pressure-pressure correlator by [7,8]: Interaction measure calculated on the lattice (dots) and in the HRG approximation (green line, taking into account 1026 states in total, see text), from [50]. The big red dot corresponds to the result from the HRG approximation taking into account only pions, the f0(600) and the ρ(770) states. Now, since the interaction measure, defined as ∆ ≡ θ * /T 4 , has a peak near the critical temperature (see Fig. 15), it is reasonable to assume the following ansatz for the spectral density of the pressure-pressure correlator [3,4,8]: where ω 0 ∼ 1 GeV. So a maximum in the interaction measure would imply a maximum in the bulk viscosity. Motivated by this, in order to obtain a good estimation of the speed of sound, we first calculate the interaction measure in ChPT. 
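The quoted chiral-limit relation between the two viscosities, ζ^(0) = 15(1/3 − c_s^2)^2 η^(0), is easy to tabulate; the snippet below simply shows how the bulk viscosity switches off as the speed of sound approaches the conformal value c_s^2 = 1/3:

```python
def bulk_over_shear(cs2):
    """Chiral-limit relation quoted in the text: zeta/eta = 15*(1/3 - cs^2)^2."""
    return 15.0 * (1.0 / 3.0 - cs2) ** 2

if __name__ == "__main__":
    for cs2 in (0.15, 0.20, 0.25, 0.30, 1.0 / 3.0):
        print(f"cs^2 = {cs2:.3f}  ->  zeta/eta = {bulk_over_shear(cs2):.4f}")
```

In particular, a dip in c_s^2 near the transition (conformal breaking) translates directly into an enhancement of ζ relative to η, which is the correlation explored in the following paragraphs.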
The interaction measure can be calculated from the pressure through A calculation in ChPT of the pion gas pressure up to 3loop order by Gerber and Leutwyler [19] is available. The diagrams contributing to the partition function of the pion gas are represented in Fig. 16 up to order O(T 8 ) according to the chiral counting. Using the result for the pressure from [19], in Fig. 17 we plot the interaction measure for several orders in the pressure. The first peak corresponds to the explicit breaking of scale symmetry by the quark mass and it has also been obtained in other works [51]. Interestingly, at order O(T 8 ) there appears another peak near the phase transition, that would correspond to the gluon condensate contribution. We then see that, in the chiral limit, interaction kicks in at O(T 8 ) (the O(T 6 ) diagrams would only imply a renormalization of the mass in this case). For M π = 0, the trace anomaly has a simple expression at this order [19,52]: with Λ p ∼ 400 MeV for our choice ofl i given in Section 6. The result is very dependent on the choice ofl i , which in our case encode important non-perturbative information, as discussed in [18]. However, we see that the result of this perturbative calculation for the pion gas still is a factor 10 smaller than the lattice and full-HRG results, see Fig. 15. The Hadron Resonance Gas (HRG) approximation considers a free (non-interacting) gas which consists of all the baryonic and mesonic states up to 2 GeV, 1026 in total [50]. Therefore, in this approximation the interaction measure is given by: where g i denotes the degeneracy of the state, η = ±1 depending on whether it is a boson or a fermion respec- tively, and K 1 is the modified Bessel function of the second kind. In Fig. 18, from [49], the lattice results for the interaction measure are compared with the HRG results for temperatures below the phase transition. We see that the HRG approximation fits better the lattice results near the maximum of the peak, while as the temperature decreases they start to separate from each other. This might be due to the values of the quark masses taken in [49], since the HRG approximation and ChPT should coincide for very low temperatures. It is important to remark that although the HRG approximation gives a value for the interaction measure compatible with lattice results near the peak, it is a monotonously increasing curve, so it does not have the shape of a peak, which the ChPT calculation does have because it includes the interaction between the Goldstone bosons (in the HRG the non-zero interaction measure comes from the explicit breaking due to the non-zero mass of the states). We are also interested in the influence of the in-medium f 0 (600) and ρ(770) resonances on the trace anomaly and eventually on the bulk viscosity. In order to do it, we calculate the interaction measure in the Virial Gas Approximation (VGA) which allows us to introduce the unitarized scattering amplitudes in the dilute gas regime. According to the VGA, to the lowest order in the interaction, the pressure of the gas is given by [53]: with ξ i ≡ e β(µi−mi) , and δ ij IJS are the phase-shifts. In Fig. 19 we plot the interaction measure for several scattering amplitudes in the VGA. The interaction peak is obtained when considering O(p 4 ) amplitudes, and its height is almost equal to the one obtained with the perturbative calculation of Fig. 17. 
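The following is a hedged sketch of the HRG-type estimate mentioned above, keeping only the pion, f0(600) and ρ(770) states (the "big red dot" comparison). It uses the standard free-gas trace anomaly per species, (ε − 3P)_i = g_i m_i^3 T/(2π^2) Σ_k η_i^{k+1} K_1(k m_i/T)/k; the masses and degeneracies below are my assumptions, so only the qualitative size is meaningful:

```python
import math
from scipy.special import kn   # modified Bessel function K_n of integer order

# States kept in the comparison mentioned in the text; masses/degeneracies are my choices.
STATES = [
    # (name, mass [MeV], degeneracy g_i, eta = +1 boson / -1 fermion)
    ("pi",       138.0, 3, +1),
    ("f0(600)",  600.0, 1, +1),
    ("rho(770)", 770.0, 9, +1),   # 3 spin x 3 isospin
]

def interaction_measure(T, k_max=5):
    """Free-gas (HRG-type) trace anomaly over T^4, summing the statistics series to k_max."""
    total = 0.0
    for _, m, g, eta in STATES:
        series = sum(eta ** (k + 1) / k * kn(1, k * m / T) for k in range(1, k_max + 1))
        total += g * m**3 * T / (2.0 * math.pi**2) * series
    return total / T**4

if __name__ == "__main__":
    for T in (100.0, 120.0, 150.0, 180.0):
        print(f"T = {T:5.1f} MeV:  (e - 3P)/T^4 = {interaction_measure(T):.3f}")
```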
We then observe that the in-medium evolution of the f 0 (600) and ρ(770) resonances (see Section 6) does not change significantly the height of the interaction peak. According to this, we would not expect a big change in the bulk viscosity due to the in-medium evolution of resonances. In fact, this is consistent with the idea [18] that chiral restoration is not the main source for the conformal anomaly peak described above (unlike the vanishing temperature of the chiral condensate in ChPT, the position of this peak does not change in the chiral limit). Nevertheless, a recent work [54] based on the Linear Sigma Model, obtains a peak in the bulk viscosity at the chiral phase transition due to a minimum in the sigma mass, since in this model ζ ∝ Γ σ /m 2 σ . From the pressure we can also calculate other thermodynamical quantities like the entropy density, s = ∂P/∂T , the specific heat, c v = ∂ /∂T = T ∂s/∂T , and the speed of sound, c 2 s = ∂P/∂ = s/c v . In Fig. 20 we plot the specific heat and speed of sound squared for the pion gas in ChPT for several approximations. We see that the maximum at the interaction measure implies a minimum in the speed of sound, so looking at the expression (82) we then expect a maximum in the bulk viscosity at the corresponding temperature. In Fig. 21 we also show the speed of sound squared and the equation of state as a function of the energy density obtained in the lattice with almost physical quark masses [49]. We see that the minimum in the speed of sound squared for the pion gas is still a factor 2.5 less deep than the value from the lattice, but we have to bear in mind that we are dealing with a m = 0 two-flavor approximation, whose critical behavior should be that of a O(4)-crossover. Finally, in Figs. 22 and 23 we plot the results in [18] of the lowest-order contribution to the bulk viscosity and the bulk viscosity over the entropy density respectively. We explicitly show the importance of introducing unitarized scattering amplitudes (resonances) in order to reproduce the peak near the phase transition. Nuclear density effects do not change significantly the height of the anomalous peak, as expected from the previous analysis of the conformal anomaly. Comparing with Fig. 14, our result for the ratio ζ (0) /s is still smaller than η (0) /s near the transition, although the correlation with the conformal anomaly is clear, and that allows to predict larger ζ/s values if heavier states are included [18]. Neverthe- 12 Large-N c behavior of transport coefficients One of the main advantages of our formalism is that we can readily obtain the parametric dependence with the number of colors N c . This analysis is interesting, given the theoretical relevance of the large N c to describe qualitatively the QCD low-energy sector [55]. In addition, it will confirm some of our previous qualitative arguments. The large-N c counting of the low-energy constantsl i can be extracted from that of the SU (3) ones L i [55,56] in the N f = 2 limit [56], while F 2 π = O(N c ). This gives for the ππ scattering amplitudes |T | 2 ∼ O(1/N 2 c ), regardless of whether they are unitarized or not, and therefore, according to (53), we get Γ p ∼ O(1/N 2 c ). This result, together with the N c scaling of the thermodynamic quantities P ∼ ∼ c 2 s ∼ s ∼ O(1), which we extract from [19], implies that all transport coefficients scale as O(N 2 c ) for M π = 0. 
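The thermodynamic chain quoted above (s = ∂P/∂T, c_v = T ∂s/∂T, c_s^2 = s/c_v) can be evaluated numerically from any pressure function; the sketch below uses central differences and, in place of the full Gerber–Leutwyler pressure, a free massless-pion stand-in for which it returns the conformal value c_s^2 = 1/3:

```python
def speed_of_sound_squared(pressure, T, h=0.5):
    """c_s^2 = s/c_v with s = dP/dT and c_v = T ds/dT, via central differences.
    'pressure' is any callable P(T) in MeV^4; 'h' is the temperature step in MeV."""
    s = (pressure(T + h) - pressure(T - h)) / (2 * h)
    s_plus = (pressure(T + 2 * h) - pressure(T)) / (2 * h)     # s at T + h
    s_minus = (pressure(T) - pressure(T - 2 * h)) / (2 * h)    # s at T - h
    cv = T * (s_plus - s_minus) / (2 * h)
    return s / cv

if __name__ == "__main__":
    import math
    # Stand-in: free gas of 3 massless pions, P = pi^2 T^4 / 30, for which c_s^2 = 1/3.
    free_pion_pressure = lambda T: math.pi**2 * T**4 / 30.0
    print(speed_of_sound_squared(free_pion_pressure, 150.0))   # ~0.3333
```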
However, in the chiral limit, from the expression of the pressure [19], P = (π^2/30) T^4 [1 + O(T^4 ln(Λ_p/T)/F_π^4)], and taking into account that log Λ_p ∼ l̄_1 + 4l̄_2 ∼ O(N_c), we get in this limit c_s^2 − 1/3 ∼ O(ln Λ_p/F_π^4) ∼ O(1/N_c). Now, since ζ/η ∼ O[(c_s^2 − 1/3)^2] in the chiral limit and the previous counting of the width is valid also in this limit, this means that for M_π = 0 the scaling of the bulk viscosity is ζ ∼ O(1), unlike the other coefficients, which still scale as O(N_c^2). Summarizing: For M_π ≠ 0 : σ ∼ κ ∼ ζ ∼ η ∼ ζ/s ∼ η/s ∼ O(N_c^2) . (95) For M_π = 0 : σ ∼ κ ∼ η ∼ η/s ∼ O(N_c^2), ζ ∼ ζ/s ∼ O(1) . (96) These scaling relations are consistent with the results we obtained in the previous sections. The bulk viscosity is suppressed with respect to the shear viscosity in the chiral limit, as a consequence of scale invariance, although this is only a parametric dependence and it does not take into account the anomalous breaking near the transition [18]. For M_π ≠ 0 the explicit breaking of conformal invariance makes the two coefficients comparable, as we find at very low temperatures, where the mass terms dominate. For higher T, the chiral-limit result is again reached asymptotically. Note also that the N_c scaling for M_π ≠ 0 is compatible with our leading expressions (66), (75), (79) and (83). We disagree with the chiral-limit N_c counting for ζ given in [57], where we believe that the scaling of ln Λ_p discussed above is not properly accounted for. The above N_c behavior is also useful for understanding the origin of the different conformal-breaking terms near the transition [18]. Finally, comparing with results from high-T QCD is also revealing. From the parametric expressions given in [10,46], with the scaling α_s = O(1/N_c), one gets η/s ∼ O(1). This is qualitatively compatible with the idea of η/s approaching a minimum when coming from the low-T phase to the critical region, as we also obtain in our approach. In the high-T regime, ζ/η is also suppressed by an additional (c_s^2 − 1/3)^2 factor.
Conclusions Unitarized Chiral Perturbation Theory provides a consistent framework for the study of transport properties of meson matter. We have shown that, after a suitable modification of the standard ChPT power counting and the inclusion of unitarity corrections in the scattering amplitudes to improve their high-energy behavior, one ends up with a reasonable description of transport coefficients for temperatures below the transition. At very low temperatures, our approach meets the predictions of non-relativistic kinetic theory, while at higher T we obtain an adequate behavior of transport coefficients when compared with existing studies based on the kinetic approach. In addition, we provide phenomenological predictions for the zero-energy photon spectrum and the shear viscosity to entropy ratio which are in fair agreement with data. To obtain these results, we have considered only the dominant diagram, with unitarized scattering in the thermal width for the internal pion lines. The results obtained within our approach for the bulk viscosity show a clear correlation with the scale anomaly, as suggested by previous works. Our formalism has the advantage of providing a theoretical analysis of transport coefficients for a massive pion gas without relying on lattice results, and therefore it might be useful in clarifying the relation between the zero-energy limit of spectral functions involving the energy-momentum tensor and the bulk viscosity.
We have also studied the large-N c limit of the transport coefficients obtained in our approach. The parametric scaling with N c is consistent with our previous analysis and provides a qualitative description for the behavior of shear and bulk viscosities when approaching the critical region. Concerning future lines of research, we plan to introduce the effect of the strangeness sector (kaons and eta) which is relevant near the transition, where those states are no longer Boltzmann suppressed. The effect of pion chemical potentials, which has been sketched in our derivation here but not included in the results, is also an interesting extension of our work and will be considered elsewhere [37].
ConvAbuse: Data, Analysis, and Benchmarks for Nuanced Abuse Detection in Conversational AI We present the first English corpus study on abusive language towards three conversational AI systems gathered"in the wild": an open-domain social bot, a rule-based chatbot, and a task-based system. To account for the complexity of the task, we take a more `nuanced' approach where our ConvAI dataset reflects fine-grained notions of abuse, as well as views from multiple expert annotators. We find that the distribution of abuse is vastly different compared to other commonly used datasets, with more sexually tinted aggression towards the virtual persona of these systems. Finally, we report results from bench-marking existing models against this data. Unsurprisingly, we find that there is substantial room for improvement with F1 scores below 90%. Introduction Abusive language detection has received extensive attention for social media, (see e.g. Vidgen et al., 2020a), but far less within the context of conversational systems. As argued by UNESCO (West et al., 2019), detection and mitigation of abuse towards these (often anthropomorphised) AI systems is important in order to avoid reinforcement of negative gender stereotypes. Following this report, several recent works have investigated possible abuse mitigation strategies (Cercas Curry andRieser, 2018, 2019;Chin and Yi, 2019;Ma et al., 2019). However, the results of these studies are non-conclusive as they are not performed with live systems nor with real users -mainly because of the absence of reliable abuse detection tools. The majority of currently deployed systems use simple keyword spotting techniques, (e.g. Ram et al., 2018;Khatri et al., 2018), which tend to produce a high number of false positives, such as cases in which the user expresses frustration, or use of profanities for emphasis, as well as false negatives, e.g. missing out on subtler forms of abuse (Han and Tsvetkov, 2020). Recently, Dinan et al. (2019); Xu et al. (2020) released an abuse detection tool trained on Wikipedia comments and crowd-sourced adversarial user prompts (the latter are not freely available). Whereas in this work, • We show that the distribution of abuse towards conversational systems is vastly different compared to other commonly used datasets, with more than half the instances containing sexism or sexual harassment. • We develop and release a detailed annotation scheme with the help of experts. • We use this scheme to annotate a corpus of 20k ratings on >6k samples (ca. 2k from each system), which we call ConvAbuse. We critically discuss and experiment with different labelling methods for this task. We also release a subset of 4k examples and their expert annotations. 1 • We benchmark commonly used abuse detection methods on this corpus. Meanwhile, there has been relatively little work on abuse detection for conversational AI. Furthermore, much of the work that does exist in this area does not actually involve human-machine dialogue: Dinan et al. (2019); Xu et al. (2020) use a classifier developed on Wikipedia comments, which was further trained on adversarial prompts collected via crowd-sourcing. Similarly, de los Riscos and D'Haro (2021) designed a chatbot to intervene against online hate speech, trained and evaluated on data from Wikipedia and Civil Comments. Those few studies that do report abuse detection results from genuine human-machine conversations tend not to include publicly released datasets. 
These include several submissions to the Amazon Alexa Challenge 2 (Cercas Curry et al., 2018;Khatri et al., 2018;Paranjape et al., 2020). As such, to the best of our knowledge, this is the first study to release a public dataset of human-machine conversations for the task in this domain. While we aim to detect abuse directed against any target, gender-based abuse has been identified as a particularly prevalent problem in conversational AI (Cercas Curry and Rieser, 2018;Silvervarg et al., 2012;West et al., 2019), and abuse detection systems have themselves been found to contain gender biases (Park et al., 2018). Misogyny and sexism detection has been applied to social media in binary (Fersini et al., 2018;Nozza et al., 2019) and multi-class (Waseem and Hovy, 2016) settings. We extend this to take an intersectional approach, analysing multiple types of abuse in a hierarchical mutli-label framework (see §3.3). The ConvAbuse corpus We collected data from conversations between users and three different conversational AI sys-tems, which have different goals and properties. Two of them are classed as chatbots, i.e. social, open-domain systems, while the other is a transactional, goal-oriented system. The first two systems listed below are text-based, whereas the last system is voice-based with a synthetic female-sounding voice. As such, two out of the three systems are female gendered, either by voice or name. Alana v2 An entrant to the Alexa Challenge 2018, a competition in which university teams develop social chatbots which aim to hold engaging conversations with users in the United States. The bot implemented a mixture of social chit-chat and provision of information via entity linking. Users were notified of the competition at the beginning of the conversation. We only have access to the automatically transcribed user utterances, which contain recognition noise. The data was collected between April 2017 and November 2018. CarbonBot. An assistant created by Rasa 3 and, hosted on Facebook Messenger. 4 The bot aims to convince the user to buy carbon offsets for their flights. It also notifies the user that conversations will be recorded for research purposes. The data was collected between 1st October 2019 and 7th December 2020. ELIZA. An implementation of the rule-based conversational agent intended to simulate a psychotherapist (Weizenbaum, 1966), designed for academic purposes, and hosted at the Jožef Stefan Institute. 5 It aims to engage the users by asking open questions: "Tell me more about <X>!". The data was collected between 19th December 2002 and 26th November 2007. For example conversations from all three systems, see Appendix A. Pre-processing For each system, we discarded any test conversations involving the systems' developers, and extracted the utterances from all user turns from the conversations. Following the findings of Pavlopoulos et al. (2020) that dialogue context can affect (and even reverse) human judgement of toxicity, we included the system output as well as the previous turns (where available) of both user and system. We removed any system output that is not directly provided to the user in text form (such as voice prosody tags), and replaced web addresses with the token <URL>. Sampling Previous research has shown that 5-30% of user utterances are abusive (Cercas Curry and Rieser, 2018). In order to find these instances, one can use purposive nonprobability sampling using abusive keywords. 
However, this can lead to the creation of heavily biased datasets (Vidgen and Derczynski, 2021;Wiegand et al., 2019). We attempted to strike a balance between obtaining a high proportion of examples that contain abusive language and not biasing the datasets towards explicit forms of abuse that contain such keywords. To do this we combined two sets of keywords: 1. A list of 'profanities' -265 regular expressions from a blacklist obtained from Amazon. These keywords are mostly profane, offensive words, which can be expected to capture use of explicitly offensive language. 2. 1,532 terms from Hatebase, 6 -a crowdsourced list of hate speech to capture (i) abuse targeted at specific groups such as women and racialised minorities, (ii) more subtle forms of abuse that do not contain explicitly offensive language, and (iii) terms that have taken on abusive meanings recently or in certain subcultures. As most of the terms also have other, non-hateful meanings (Sap et al., 2019), we hypothesised that their use as keywords could capture abusive content, while not biasing the data towards purely offensive terms. We then used stratified sampling, to extract utterances at random from six stratas of the datasets that contained conversations featuring 0, 5, 10, 15, 20 and 25 per cent of sentences that feature terms from the list of keywords. As the total number of conversations and user turns in CarbonBot is smaller, we did not sample from this, annotating the entire dataset. We used the bias metrics of Ousidhoum et al. (2020), finding that the final corpus does not seem to be heavily biased towards typical abusive language keywords (for details, see Appendix B). Annotation scheme and guidelines We created a hierarchical labelling scheme based on insights from prior work. At the top level, we adapted Poletto et al. (2019)'s unbalanced rating scale, in which input is labelled from +1 (friendly) to −3 (strongly abusive), providing information about not only whether or not it is considerd to be abusive, but also the severity of any abuse: -3. Strongly negative with overt incitement to hatred, violence or discrimination, attitude oriented at attacking or demeaning the target. -2. Negative and insulting/abusive, aggressive attitude. -1. Negative and impolite, mildly offensive but still conversational. 0. Ambiguous, unclear. 1. Non-abusive. Based on Waseem et al. (2017)'s twodimensional typology of abuse, we then elicited labels for the target (group, individual-system, or individual-3rd party) and directness (explicit or implicit). To obtain more finely-grained information about the targets of abuse, annotators then label the instances as either general, sexist, sexual harassment, homophobic, racist, transphobic, ableist, or intellectual. These labels were based on known factors in the matrix of domination (Collins, 2002). These type classes are not mutually exclusive, allowing the annotations to capture intersectionality. To allow for contextual interpretations, annotators were shown the target user utterance, the agent's utterance to which it responded, and a previous speaking turn by both the user and the agent. In supervised learning for text classification tasks, human-provided labels are typically aggregated to one 'gold-standard' label per instance by means of majority-vote, adjudication, or statistical methods. However, the notion of reducing multiple annotations to a single 'correct' label has been criticised for erasing minority perspectives (Blodgett, 2021;Gordon et al., 2021). 
This is because perception of phenomena such as hate, varies both across individuals and culturally (Salminen et al., 2018). We therefore retain and evaluate classification systems on the labels of all the annotators. Annotators We recruited eight gender studies students in their early 20s. Six of them identify as female, and two as non-binary. All are L1 English speakers, predominantly from the United Kingdom, except for one from the United States. One identifies as Asian, the remaining seven as white. Full details are provided in the data statement in Appendix D. Agreement measurement and analysis We adjusted the annotation scheme iteratively in three rounds by observing the labels applied to batches of 100 random examples from the data. We measured agreement with Krippendorf's alpha (α), which can take account for multiple annotators, missing values, and ordinal ratings (Gwet, 2014). Where agreement was low, we invited our experts to discuss examples. However, since abuse is a subjective phenomenon, we did not force agreement. We discarded the data used in guideline development, and the annotators labelled the rest of the data according to the final guidelines. Agreement scores per annotation task are shown in Table 1. Overall, the annotators achieved moderate to substantial agreement for the majority of categories. Agreement was consistent across datasets. We report on intra-annotator agreement in Appendix C. Sexism. Although the annotators form a fairly homogeneous group in terms of demographics and all have a background in Gender Studies, we find only moderate α for sexism, consistent with previous studies which found that up to 85% of disagreement was on this category (Waseem and Hovy, 2016). We find that sexism and sexual harassment are closely intertwined but distinct, with 47% of examples labelled sexist also judged to be sexual harassment but only around 22% of sexual harassment also being sexist. Some annotators see all sexual harassment as necessarily sexist as it is rooted in misogyny. This is in agreement with the European Centre for Gender Equality which states that 'sexual harassment is an extreme form of sexism'. 7 In our data, sexist examples focus on using gendered slurs such as "bitch", and sexual harassment uses sex as a way to create a hostile and offensive environment though it may not contain explicit terms, e.g. "I wanna see you naked". Directness. Low inter-but moderate intraannotator agreement (see Tables 1 and 10) suggests this task is highly subjective and open to interpretation. For example, annotators may perceive abuse as more implicit that is phrased as a question (e.g. "are you stupid", "can i be your lover?"), that is misspelled/misheard, (e.g. "Connie Lingu", or comments with sexual connotations but no overtly sexualised words (e.g. "call me big daddy"). Annotators can disagree not only on whether abuse is implicit or explicit, but whether it is abuse at all. Examples of disagreement between explicit abuse and non-abuse include commonly used expressions of frustration or surprise such as "wtf". Implicit abuse is particularly difficult to distinguish from non-abusive utterances as annotators must infer the user's tone and intention through capitalisation and punctuation ("I KNOW!!!!", "seems so..."), or the context ("Does it please you to believe I am stupid? You are a woman, aren't you?"). Data and analysis We collected a total of 20,710 ratings for 6,837 examples. The number of unique examples and labels per dataset is summarised in Table 2. 
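Before turning to the corpus statistics, here is a minimal sketch of the agreement computation described above, assuming the third-party `krippendorff` PyPI package (any implementation of Krippendorff's α would do); ratings on the −3…+1 scale are treated as ordinal and missing annotations as NaN, and the toy matrix stands in for the real annotation table:

```python
import numpy as np
import krippendorff   # third-party package; any Krippendorff's alpha implementation works

# Toy reliability matrix: rows = annotators, columns = examples,
# values = ratings on the -3..+1 scale, np.nan = example not annotated by that annotator.
ratings = np.array([
    [1, 1, -2, np.nan, -1,     1],
    [1, 0, -2, -3,     -1,     1],
    [1, 1, -1, -3,     np.nan, 0],
])

alpha_ordinal = krippendorff.alpha(reliability_data=ratings,
                                   level_of_measurement="ordinal")
alpha_nominal = krippendorff.alpha(reliability_data=ratings,
                                   level_of_measurement="nominal")
print(f"Krippendorff's alpha (ordinal): {alpha_ordinal:.3f}")
print(f"Krippendorff's alpha (nominal): {alpha_nominal:.3f}")
```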
Each example is annotated by at least three annotators. In order to allow for different points of view to be reflected and modelled, we release the individual ratings in addition to aggregated labels. Overall, we find that 27% of examples have been labelled as abusive (-1 to -3) by at least one annotator, and 20% of all labels are in this range. The subset of examples from Alana v2 have the highest portion of abuse, with 35% of examples having been labelled as abusive by at least one annotator. The target of the overwhelming majority (92%) of abuse present in our dataset is the system itself. Abuse type. Figure 1 shows the distribution of abuse type labels. Sexual harassment ( supported by the fact that the majority of racism is not directed at the system, but at a third party. Although sexual harassment, sexism, and intellectual abuse were common across systems, Alana v2 (female name and voice) received significantly more sexual harassment and sexist abuse than Car-bonBot (no gender markers) and ELIZA (femalesounding name), χ 2 (1, N = 2505) = 67.69, p < 0.01, and χ 2 (1, N = 3914) = 181.72, p < 0.01, respectively. It also received more explicit abuse than the other two systems. Conversely, Carbon-Bot and ELIZA are the target of more intellectbased and 'general' abuse. This is consistent with previous work showing that female-gendered chatbots receive more sexualised abuse than male ones (Brahnam and De Angeli, 2012), and suggests that the name alone may not elicit strong gender stereotyping. Severity. Severity increases with the number of expletives used (Pearson's r(20,708)=-.46, p<.001). Similarly, we find that implicit abuse ("but i think you should quit your job") is generally rated as less severe than explicit abuse ("I think you're an idiot"): 71% of implicit abuse is labelled as mildly abusive (-1), whereas only 30% of explicit abuse is (-1). In addition, certain types of abuse are considered more serious that others: 53% of intellectual abuse is 'mild' (-1), compared to 37% of sexual harassment, 17% of sexism, and only 7% of racism which are mainly labelled as 'aggressive' (-2) or 'attack' (-3). See Appendix E for more details. Abuse across domains As explored in §2, there has been extensive work in abuse detection and related tasks in social media, particularly Twitter and Wikipedia comments. Direct comparison with datasets from other domains is not straightforward as previous studies use different sampling methods, or label slightly different phenomena such as offensiveness and hate speech. In this section, we explore how abuse detection in these domains differs by mapping comparable labels across datasets. We do not directly compare the overall proportion of abuse but instead describe the datasets in terms of language properties, e.g. frequent n-grams, utterance length, vocabulary size, as well as the overall percentage of annotated abuse, see Tables 3, and 11 (Appendix E). Twitter. While the majority of abuse (92%) in our dataset is directed towards the system, abuse on Twitter is mainly targeted towards 3rd parties (both individuals and generalised groups). In OLID (Zampieri et al., 2019a), we find that only 46.85% of abusive instances are in second person (i.e. directed towards the interlocutor), with 36% and 16% being directed towards third party groups and individuals, respectively. In terms of (2) directness, we find that the proportions of implicit/explicit abuse are reversed with 89% of abuse being implicit in Ousidhoum et al. 
(2019), compared to only 16% in our current dataset. Finally, the distribution of abuse types are quite different. Attacks on sexual orientation, disability, and origin are common on Twitter, but are extremely rare in our dataset (<1% of all labels). In addition, existing Twitter datasets seem to be heavily biased towards explicit language, with similar common words and examples labelled abusive (see Table 11 for more details). Wikipedia comments. Jigsaw's Toxic Comment Classification Challenge 9 is a competition to identify and classify toxic comments from Wikipedia's Table 3). For our analysis, we map the labels 'obscene', 'threat', 'insult', 'identity_hate' to 'abusive' based on the definitions given in Jigsaw's Perspective documentation. 10 Toxic Comments is the largest toxic language dataset and has the largest vocabulary size. It's examples are far longer, as they are not limited to a set number of characters, and form part of a discussion. In contrast, our ConvAbuse corpus has the shortest utterances, as the systems elicit simpler and more contextual responses. In addition, Toxic Comments is heavily biased towards domainspecific language with terms such as 'wikipedia', 'article' and 'edit' among the most common in the dataset. Overall the source of the data has a significant impact on the language used in the data: while Toxic Comments can be very long, Twitter's character limit clearly impacts the length of the utterances, and the utterances in ConvAbuse are shorter still and rely more heavily on context. These varying qualities have implications for the use of such sources as training data for abuse detection tools for conversational systems, such as those developed by (Dinan et al., 2019) and (Xu et al., 2020) based on Toxic Comments. In §4, we therefore compare the performance of systems with in-and out-of-domain cross-training settings. Benchmarking Pre-processing. We divide the datasets into train (70%), validation (15%), and test (15%) sets, with similar proportions of positively labelled examples in each split (see Table 4). While aggregation of annotators' ratings is problematic (see §3.3), it is the dominant paradigm is abusive language detection and NLP in general. For comparison, we therefore create a set of aggregated 'gold' labels for each (sub-)task based on the majority vote of the annotators on each example. We evaluate on these in addition to the multiple annotator ratings. We report the macro-averaged F1 score as an evaluation metric due to the large class imbalances, (e.g. most utterances are non-abusive). Models We test the following approaches on the main binary abuse detection task in both the aggregated and multi-annotation settings. We also assess the performance of the best performing approaches with varying amounts of context, and test a simple neural method on the four sub-tasks. Machine learning methods Following initial hyperparameter optimization experiments, we use the following systems and settings: • Support Vector Machine: SVMs have been used is previous work on abuse detection in Twitter data, (e.g. Davidson et al., 2017), and have been shown to outperform neural systems (Niemann et al., 2020). We train a linear SVM on bag-of-words representations of the texts using term frequency-inverse document frequency (tf-idf) scores for unigram feature selection. We use l2 normalisation and set C=1. 
• Multi-Layer Perceptron: A standard neural network with one hidden layer consisting of 256 units, ReLu activation, a dropout rate of 0.75, and Adam optimisation with a learning rate of 1e − 3. We use early-stopping to find the best performing model on the validation set. We use the same text features as for the SVM. We set the learning rate to 1e − 4. Cross-training To observe the effects of domain shift, we evaluate the systems with different combinations of data from the following sources for training and testing: • Testing: The ConvAbuse corpus, and the subsets Alana v2, CarbonBot, and ELIZA. Results are presented in Table 5. We find that the best performance in most training settings is obtained using BERT. The highest F1 scores are obtained when training in-domain on the ConvAbuse data, or on Toxic Comments (TCs). However, this dataset is around 40 times larger than any of the other training sets. When TCs is reduced to a comparable size, the F1 score drops to considerably below that of the ConvAbuse-trained systems. These results highlight the differences between the two domains and the benefits of training on conversational AI data. Training on OLID, which is both small and out-of-domain, results in the lowest scores. Contextual input features The majority of previous work on abuse detection does not take context into account, or provides inconclusive evidence of its importance. Menini et al. Table 6: Sub-task macro-averaged F1 scores evaluated against the random classifier baseline on the aggregated and multi-annotator labels. (2021) showed that the more context is available, the likelier tweets are to be considered non-abusive by annotators. And Dinan et al. (2019) showed that context improves detection performance (providing six total turns with five of context). However, Pavlopoulos et al. (2020) found very few examples of toxicity to be context-sensitive for Wikipedia comments, and that inclusion of dialogue context did not lead to large performance gains. We train and test the classifiers on ConvAbuse with: (1) no context (single utterance), (2) the agent's turn (two total turns), and (3) the agent's turn plus the previous turn of both user and agent (four turns). We concatenate the turns in the inputs in each setting. Results are shown in Table 7. As more context is added, the performance of both the SVM and MLP degrades, possibly as a result of increased data sparsity. However, performance using BERT is similar in all three settings, suggesting that it may be able to better handle the long-range contextual dependencies. We leave exploration of more complex classification frameworks that may be able to exploit the contextual information for future work. Fine-grained abuse detection We also provide benchmarks for the four sub-tasks: severity (ordinal classification), type (multiclass, multilabel classification of the eight categories described in Section 3.3), target (ternary) and directness (binary). Here, we use the two neural systems, as they can more easily handle the ordinal labels. We train on the ConvAbuse dataset, which is labelled for these tasks. Results are shown in Table 6. We find that the systems comfortably beat the random baselines for each task, with little difference between the two classifiers. They both perform poorly on multiple nominal (target) and ordinal (severity) classes and in some of the multi-annotator settings, which suffer from label sparsity in some of the classes. 
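For concreteness, a minimal sketch of the tf-idf + linear SVM baseline listed in the model descriptions above (unigram tf-idf with l2 normalisation, C = 1, evaluated with macro-averaged F1); the toy texts and labels are placeholders standing in for the ConvAbuse splits:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.metrics import f1_score

# Toy stand-ins for the ConvAbuse splits (the released corpus provides the real ones).
train_texts = ["you are wonderful", "you are an idiot", "tell me about the weather"]
train_labels = [0, 1, 0]          # 1 = abusive (majority vote), 0 = non-abusive
test_texts = ["you idiot", "what a nice day"]
test_labels = [1, 0]

# Unigram tf-idf features + linear SVM with C=1, as described in the model list above.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 1), norm="l2"),
    LinearSVC(C=1.0),
)
clf.fit(train_texts, train_labels)
pred = clf.predict(test_texts)
print("macro-F1:", f1_score(test_labels, pred, average="macro"))
```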
We model each abuse type as a binary classification task, rather than multi-label prediction, enabling multiple types to be assigned to each example. We find that the classifier often confuses the classes sexism and sexual harassment. In around half of cases in which the true label is one of these, the system predicts the other. We leave more focused approaches, like multi-task learning, for future work. Discussion and conclusion In this work, we provide new insights regarding the detection and description of abusive language towards conversational agents, in terms of data, labelling and models. This may facilitate the release of large pre-trained conversational AI models that are safety-aware (Dinan et al., 2021) as well as potentially allow us to better detect abuse in human-human conversations. Data. In compiling the ConvAbuse corpus, we have compared differences between abusive phenomena in conversational AI and social media. In our domain, users appear to focus their abuse on the agents themselves rather than third parties or groups, with a far higher proportion of the abuse sexist and misogynystic in nature. Annotations. Unlike the majority of previous work, we use annotators who are members of the groups typically targeted by such abuse, and who have expertise in such issues. We also use a more fine-grained labelling scheme, which is able to capture the nuances of abuse, and is important for the downstream task of abusive language mitigation. We obtain similar results evaluating on these annotators separately and capturing individual view-points, even in a simple multi-class setting. In future work, we will experiment with modelling individual annotators in a multi-task framework. Models and data. In our benchmarking experiments, we find that fine-tuning a BERT model produces the highest F1 scores. However, in many settings, a simple linear classifier (SVM) outperforms an MLP, supporting the findings of (Niemann et al., 2020)'s survey that SVMs tend to outperform neural methods on abusive language detection tasks. In this work, we present a small, focused dataset of high quality annotations, which are also informative for corpus study. We show that training on labelled in-domain data leads to better performance than similarly sized out-of-domain datasets, confirming the differences between the domains and highlighting the need for conversational data. While performance using general domain pretrained models leaves room for improvement, in future work, we hope to experiment with different initialisation settings, using models trained on data and tasks more similar to those of ConvAbuse, such as HateBERT (Caselli et al., 2021) or HurtBERT (Koufakou et al., 2020). Ethical considerations Data rights. Data collection from real users requires a careful balance of the rights of the user and the quality and suitability of the data. Although GDPR generally requires explicit consent, we use mainly datasets which were gathered with implied consent. CarbonBot data was collected in accordance with GDPR requirements. Alana v2 data was collected following Amazon's guidelines, and we do not make any of this data public (examples we present are redacted and paraphrased). It is unclear how user consent was obtained in the case of ELIZA. In particular when it comes to offensive language, requiring explicit informed consent may automatically bias the data, as users may be less abusive if they are aware the conversation is not private, making the data less fit for purpose. 
Datasets in offensiveness-related tasks have taken one of two approaches: (1) publishing only IDs to retrieve the actual examples from an API, or (2) fully anonymising the examples by removing personallyidentifiable information such as user mentions. The first approach leads to a problem of ephemerality: offensive tweets are more likely to be removed whether by the users themselves or the plat-forms, e.g. of the original 16K tweets in Waseem and Hovy (2016) only around 4000 remain. This data degradation leads to issues of replicability. Anonymisation, on the other hand, ensures the longevity for the dataset (insofar as the data is available for posterity) but takes a more flexible approach to the user's right to be forgotten. This study received ethical approval from our institutional review board (IRB). Replicability. Some of the resources used in this paper, such as the profanity list and Alana v2's data, stem from a collaboration with a private industry lab and as such, are proprietary and not publicly available. This impacts the replicability of the study, although our collected data is not heavily biased towards this particular blacklist (see Appendix B). To mitigate replicability limitations, we make all code, and data available where possible. Collaborations between industry and academia can, in general, be controversial as they can sway research questions and keep useful resources out of reach of other researchers (Abdalla and Abdalla, 2021), but can be a net positive as industry can provide additional funding and tools. Annotator recruitment and welfare. Our annotator pool is fairly homogeneous but reflects the demographics of Social and Gender Studies students (Mantle, 2021). Crowdsourcing annotations may lead to more representation in the data, but this is not guaranteed as data quality can suffer as crowdworkers try to complete a task as fast as possible. Moreover, crowdsourcing is not without its own ethical issues (Shmueli et al., 2021). In addition, exposure to offensive data can take a toll on the mental health of the annotators, which is more easily monitored with local recruitment than crowdworkers. Bias and representation in abuse detection. Previous research has already pointed out the problem of bias in offensiveness detection (Poletto et al., 2019;Sap et al., 2019). The nature of the data (simple conversation transcripts) required the annotators to make some assumptions about the tone, intention and the users themselves. The annotators generally assumed the user to be a white, heterosexual cis-male unless the conversation indicated otherwise, 12 and the speaker's demographics impact whether something is abusive/offensive or not (Poletto et al., 2019;Sap et al., 2019). Our annotators were a fairly homogeneous group in terms of their demographics, being predominantly young, white and female. This fits the demographic profile of the bots' personas that are on the receiving end of the abuse and it is therefore not entirely out of place. However, with increasing demand for more diverse (and less anthropomorphic) conversational AI systems (such as Replika.ai), this is likely to change in the near future. In addition, previous work has generally aggregated scores which tends to exclude the views of minority groups in favour of the majority. We publish all labels and we propose a way to model multiple perspectives. 
As the perspectives modelled are only as varied as the ones reflected in the data, future work should address this by involving more diverse annotators and stakeholders. Finally, our dataset has a greater diversity of individual authors in comparison with some available datasets that focus on abuse towards particular groups, in which many of the examples labelled as abusive were authored by a small pool of users (Fortuna et al., 2021). The moral status of AI. A key question when it comes to abuse towards conversational AI systems is whether it is actually morally reprehensible. In contrast with human-human abuse in social media, the moral value of abuse towards conversational AI systems is controversial. Here, we do not argue that abuse towards these systems is immoral in and of itself, but rather due to its mimesis of the misogyny and harassment suffered by women: the majority of commercially available systems have female personas and produce submissive responses to abuse which reinforce sexist stereotypes. UNESCO calls for systems to appropriately address abusive users (West et al., 2019), but the effectiveness of abuse mitigation strategies is dependent on a good detection module that is both reliable and sufficiently fine-grained in terms of classification. We have tried to address this need in this work. B Measuring sampling bias To assess how much our sampling strategy affected the resulting data samples, we used the bias metrics of Ousidhoum et al. (2020). These measures capture how closely the set of prominent words in a set of topics in the datasets (generated using LDA) relates to a set of keywords often used for sampling hate speech data (metric B1), and the proportion of those topic words that are semantically similar to at least one of the keywords (B2). We used Ousidhoum et al. (2020)'s keyword list and default parameters, and compared the scores for the sampled corpora with those of the complete, unsampled datasets. The small differences seen between bias scores for the unsampled and sampled data suggest that the final corpora are not heavily biased towards the keywords (see Table 8). C Annotation Inter-annotator agreement on the individual datasets is shown in Table 9. To further validate the labels, we calculate intra-annotator agreement using Cohen's kappa (κ): the annotators re-labelled a sample of 10% of the data, and we compared their two sets of labels. Overall agreement was substantial, but with lower consistency for the abuse severity and directness labels. Intra-annotator agreement is shown in Table 10. D Data statement This data statement follows the format of Bender and Friedman (2018). A CURATION RATIONALE: Abuse detection in conversational AI is a relatively underexplored area, partly due to the lack of available datasets. We collect this dataset to explore how abuse in conversational AI differs from that in social media platforms, and to allow for further research and development of detection models. Because abuse in conversation is relatively rare, we sample from collected conversations based on a list of offensive terms sourced from Hatebase and a collection of regular expressions provided by Amazon. We choose expert annotators to improve data quality. B LANGUAGE VARIETY: The data is collected in English; however, speaker demographics are not available and may include non-native speakers.
C SPEAKER DEMOGRAPHIC: The data collected is a series of conversations between a human and one of three conversational AI systems: Alana v2, ELIZA, and Rasa NLU's CarbonBot. Speaker demographics are not available, but the annotators reported often assuming the user was a white male unless the utterance contradicted this assumption. D ANNOTATOR DEMOGRAPHIC: Our data is annotated by 8 annotators with the following demographics:
• Age: 19-21
• Gender: Female (6) and non-binary (2)
• Race/ethnicity: White (5), white British (2) and mixed Asian (1)
• Native language: English
• Socioeconomic status: University students, otherwise unknown
• Training in linguistics/other relevant discipline: All annotators are undergraduate students in Gender Studies and Sociology.
All demographics are self-reported. E TEXT CHARACTERISTICS: Conversations with CarbonBot centre around carbon offsets, climate change and travel. Many of the conversations appear to be with climate change deniers looking for a confrontation with the bot. ELIZA elicits more free-style turns about the user themselves. E.1 Dataset Comparison Charts The 20 most common words per dataset are shown in Table 11.
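The intra-annotator check described in Appendix C above can be reproduced in a few lines. The sketch below assumes that each annotator's two passes over the re-annotated sample are aligned by position; the toy labels are placeholders, not corpus data.

```python
# Sketch of the intra-annotator consistency check from Appendix C: each
# annotator re-labels ~10% of their items and the two passes are compared
# with Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

def intra_annotator_kappa(first_pass, second_pass):
    """Both arguments map annotator id -> list of labels for the same
    re-annotated items, in the same order."""
    return {a: cohen_kappa_score(first_pass[a], second_pass[a]) for a in first_pass}

# Toy example with illustrative ternary judgements
print(intra_annotator_kappa({"ann1": [1, 0, 1, 1, 0, 2]},
                            {"ann1": [1, 0, 1, 0, 0, 2]}))
```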
195891810
s2orc/train
v2
2019-07-13T13:03:57.542Z
2019-07-01T00:00:00.000Z
Anti-Apoptotic Effect of G-Protein-Coupled Receptor 40 Activation on Tumor Necrosis Factor-α-Induced Injury of Rat Proximal Tubular Cells G-protein-coupled receptor 40 (GPR40) has an anti-apoptotic effect in pancreatic β-cells. However, its role in renal tubular cell apoptosis remains unclear. To explore the role of GPR40 in renal tubular apoptosis, a two-week unilateral ureteral obstruction (UUO) mouse model was used. The protein expression of GPR40 was decreased, while the Bax/Bcl-2 protein expression ratio, the expression of tumor necrosis factor (TNF)-α mRNA, and angiotensin II type 1 receptor (AT1R) protein were increased in mice with UUO. In vitro, pretreatment of rat proximal tubular (NRK52E) cells with GW9508, a GPR40 agonist, attenuated the TNF-α-induced decrease in cell viability, increase in the Bax/Bcl-2 protein expression ratio, increase in cleaved caspase-3 protein expression, and nuclear translocation of the nuclear factor-κB (NF-κB) p65 subunit. TNF-α treatment significantly increased the expression of AT1R protein and the generation of reactive oxygen species (ROS), whereas GW9508 treatment markedly reversed these effects. Pretreatment with GW1100, a GPR40 antagonist, or silencing of GPR40 in NRK52E cells promoted the increased expression of the cleaved caspase-3 protein by TNF-α treatment. Our results demonstrate that decreased expression of GPR40 is associated with apoptosis via TNF-α and AT1R in the ureteral obstructed kidney. The activation of GPR40 attenuates TNF-α-induced apoptosis by inhibiting AT1R expression and ROS generation through regulation of the NF-κB signaling pathway. Introduction G-protein-coupled receptor 40 (GPR40), also known as free fatty acid receptor 1, is a cell surface receptor highly expressed in pancreatic β-cells, intestine and enteroendocrine cells of the gastrointestinal tract, taste cells, immune cells, splenocytes, and brain cells [1][2][3][4]. We previously demonstrated that GPR40 is also expressed in a renal tubular epithelial cell line, and in rat and mouse kidneys [5,6]. GPR40 couples with a G protein α-subunit of the Gq family [7], and its activation in pancreatic islets induces the activity of phospholipase C and the hydrolysis of inositol lipids, and increases intracellular calcium levels [8,9]. Computational and experimental studies have suggested that H137, R183, N244, and R258 are sites for agonist recognition and are directly involved in interactions with the ligand [10]. GPR40 agonists play a protective role against apoptosis of pancreatic β-cells, which could provide a treatment for diabetes by increasing insulin release [11,12]. Although GPR40 is involved in diverse physiological processes, the underlying apoptotic signaling pathways associated with GPR40 have not been clearly elucidated. A previous study showed that GPR40 overexpression ameliorates intestinal inflammation by the down-regulation of tumor necrosis factor (TNF) receptor 2 and the suppression of the NF-κB signaling pathway [13]. The stimulation of GPR40 also attenuates the induction of inflammatory cytokines and chemokines in TNF-α- and interferon-γ-treated keratinocytes [14].
Pretreatment with GW9508, a GPR40 agonist, ameliorated the cisplatin-induced apoptotic death of human renal proximal tubular epithelial cells by inhibiting the generation of reactive oxygen species (ROS), pro-apoptotic proteins, and the activation of the Src/epidermal growth factor receptor/extracellular signal-regulated kinase signaling pathway and nuclear factor-κB (NF-κB) [5]. Tubular atrophy is the common characteristic feature of chronic kidney disease (CKD) and is superior to glomerular pathology as a predictor of CKD progression [15]. Tubular epithelial cell apoptosis has been implicated in tubular atrophy in studies using mouse models of focal and segmental glomerulosclerosis and a sublethal dose of diphtheria toxin [16,17]. Moreover, increased levels of TNF-α and angiotensin II type 1 receptor (AT1R) may play critical roles in the pathogenesis of renal tubular cell apoptosis [18,19]. Thus, GPR40 may contribute to anti-apoptotic effects by suppression of the TNF-α signaling pathway. In this study, we determined whether the expression of GPR40 was changed in a ureteral obstructed kidney model and was associated with apoptosis. Furthermore, we investigated the effects of GPR40 activation on the pathogenesis of apoptosis induced by TNF-α treatment in rat proximal tubular (NRK52E) kidney cells. GPR40 Expression in the Ureteral Obstructed Kidney Following unilateral ureteral obstruction (UUO) for 2 weeks, expression of the GPR40 protein was significantly decreased, while the Bax/Bcl-2 protein ratio was increased in the obstructed kidney compared with that in the control (Figure 1).
Immunoreactive GPR40 was expressed in renal tubules of the control kidney. However, decreased immunostaining of GPR40 with tubular atrophy was observed in the ureteral obstructed kidney (Figure 2A). Immunofluorescence staining revealed decreased immunoreactivity of GPR40 in aquaporin-1 (AQP1)- and AQP2-positive renal tubules of the ureteral obstructed kidney (Figure 2B). In addition, the expression of TNF-α mRNA and the AT1R protein was increased in the ureteral obstructed kidney compared to control mice (Figure 3A,B). Effects of a GPR40 Agonist on Apoptotic Signaling Induced by TNF-α Next, we performed in vitro studies to explore the effect of GPR40 on apoptosis and the downstream signaling pathways activated in response to TNF-α. The WST-1 assay revealed that treatment of NRK52E cells with TNF-α (20 ng/mL) decreased their viability compared with vehicle-treated cells. Pretreatment with GW9508, a small-molecule GPR40 agonist, attenuated the decreased cell viability caused by TNF-α (Figure 4A). Treatment with TNF-α increased the Bax/Bcl-2 protein ratio and the cleaved caspase-3 level compared with vehicle-treated cells. These changes were ameliorated by pretreatment with GW9508 (Figure 4B). As shown in Figure 4C, treatment with TNF-α also caused nuclear translocation of the NF-κB p65 subunit, which was counteracted by pretreatment with GW9508. Alterations in AT1R protein expression, TNF-α mRNA levels and ROS generation were examined to determine the mechanisms of TNF-α-induced tubular injury. All of these factors were increased by treating NRK52E cells with TNF-α. Notably, we observed that these TNF-α-stimulated changes were attenuated by pretreatment with GW9508 (Figure 5A-C). Lastly, pretreatment with GW1100, a GPR40 antagonist, or silencing of GPR40 in NRK52E cells with siRNA markedly augmented the increase in cleaved caspase-3 caused by TNF-α treatment (Figure 6A,B). These data suggest that inhibiting GPR40 promotes renal tubular cell apoptosis induced through the TNF-α signaling pathway. Thus, the activation of GPR40 plays an essential anti-apoptotic role.
Discussion Urinary tract obstruction has been recognized as an experimental model of CKD because the same cellular and molecular events that characterize the progression of CKD also occur in the ureteral obstructed kidney of rats and mice with UUO [19,20]. Tubular cell apoptosis is pathogenetically related to the tubular atrophy and renal tissue loss that occurs in prolonged ureteral obstruction [21]. Therefore, we investigated whether GPR40, which is highly expressed in the kidney, has an association with apoptosis in the obstructed mouse kidney after UUO. Our results demonstrate that the expression of GPR40 was decreased in the obstructed kidney of the UUO mouse model. Furthermore, the activation of GPR40 by GW9508 attenuated renal tubular apoptosis by inhibiting ROS generation and the NF-κB signaling pathway, by regulating TNF-α and AT1R expression. Thus, GPR40 appears to regulate renal apoptosis through the inhibition of the TNF-α/AT1R and NF-κB signaling pathways. GPR40 was identified as an orphan 7-transmembrane G-protein-coupled receptor [22]. It functions as a cell surface receptor for medium- and long-chain fatty acids, and is primarily expressed in the brain and the pancreas [1,2]. Interestingly, we previously demonstrated that GPR40 is also expressed in the kidney [6]. Furthermore, the GPR40 protein level is decreased in the kidneys of rats treated with cisplatin in association with an increase in the serum creatinine level and the renal Bax/Bcl-2 protein ratio [5]. The GPR40 protein and mRNA are expressed in the renal tubular epithelial cell line, LLCPKcl4, and mouse kidney, as well as the pancreas and the brain [6]. Immunoblotting revealed that the GPR40 protein was expressed more abundantly in the cortex and the outer stripe of the outer medulla than in the inner stripe of the outer medulla and the inner medulla of the mouse kidney. In situ hybridization showed that GPR40 mRNA was localized to the renal cortical tubules of mouse kidneys, including the cortical collecting duct [6]. Nevertheless, the renal expression and function of GPR40 were not established in models of chronic kidney injury.
In the present study, we demonstrated that the expression of the GPR40 protein was decreased in the ureteral obstructed mouse kidney compared with the control kidney, and was related to the increased Bax/Bcl-2 protein ratio and tubular atrophy. The GPR40 protein was expressed in both AQP1- and AQP2-positive renal tubules. These results suggest that a decreased renal tubular expression of GPR40 may be related to the pathogenesis of apoptotic tubular injury in the ureteral obstructed kidney. Ureteral obstruction induces the production of renal TNF-α and AT1R, renal tubular cell apoptosis, caspase activity, and NF-κB activity, and increases Bax and decreases Bcl-2 expression, all of which are ameliorated by neutralizing TNF-α [23][24][25]. The activation of pro-apoptotic proteins by a large number of factors, such as angiotensin II, TNF-α, ROS, and NF-κB, plays an important role in the pathogenesis of the apoptotic cell death of renal tubules in the ureteral obstructed kidney. Furthermore, these factors that initiate apoptosis interact with each other [21]. Angiotensin II stimulates the TNF-α signaling pathway, and TNF-α upregulates AT1R and its downstream signaling pathway [26][27][28][29]. In addition, TNF-α-mediated apoptosis is associated with the activation of downstream NF-κB signaling pathways, which play vital roles in the control of cell proliferation and death [30]. Consistent with these findings, in our study, the expression of TNF-α mRNA and the AT1R protein was increased in the ureteral obstructed kidney. Furthermore, our in vitro results show that the expression of the AT1R protein and TNF-α mRNA, and the generation of ROS, were increased in TNF-α-treated NRK52E cells. Nuclear translocation of the NF-κB p65 subunit also increased following TNF-α treatment. Therefore, TNF-α treatment resulted in increased cell death and the activation of the apoptotic signaling pathway in rat proximal tubular cells [31]. Overall, our results demonstrate that GPR40 activation rescued NRK52E cells from TNF-α-induced cell death and attenuated the activation of the pro-apoptotic signaling pathway. In addition, the activation of GPR40 inhibited TNF-α-mediated NF-κB activation. The inhibition of GPR40 by a GPR40 antagonist, GW1100, or GPR40 siRNA transfection promoted the expression of pro-apoptotic proteins by TNF-α treatment. These observations suggest that GPR40 has an anti-apoptotic effect in renal tubular epithelial cells. Our findings raise the possibility that GPR40 might be a novel target for the prevention and treatment of renal cell apoptosis. Animals The experimental protocol was approved by the Institutional Animal Care and Use Committee of Chonnam National University Medical School (CNU IACUC-H-2019-11; 8 April 2019). Male 8-week-old C57BL/6J mice were used (Samtako, Osan, Korea). The mice were randomly assigned to the control group and the UUO group, with eight mice in each group. UUO was induced by ligation of the left proximal ureter as previously described [19]. The mice had free access to standard chow (Damul Science, Daejeon, Korea) and tap water, and they were sacrificed by decapitation on day 14 after the operation. The kidney was rapidly removed from the animals. The cortex/outer stripe of the outer medulla was isolated and stored at −70 °C until used for Western blot analysis and for the reverse transcription polymerase chain reaction (RT-PCR). Western blot analysis and RT-PCR were performed as previously described [5,6,32].
Cell Culture The NRK52E cells (American Type Culture Collection, Manassas, VA, USA) were treated with TNF-α (20 ng/mL; R&D Systems, Minneapolis, MN, USA) for 24 h in the presence or absence of the GPR40 agonist GW9508 (10 µM; Cayman Chemical, Ann Arbor, MI, USA), which was added 1 h prior to TNF-α, and the cells were then harvested for further analysis. The control cells were treated with the vehicle (dimethyl sulfoxide). The NRK52E cells were also treated with TNF-α with or without a 1 h pretreatment with the GPR40 antagonist GW1100 (10 µM; Cayman Chemical), or with GPR40 small interfering (si)RNA (50 nM; ON-TARGETplus Rat Ffar1 (Gene ID: 266607), Item No. L-080051-02-0010, Dharmacon, Lafayette, CO, USA). Cell Viability Assay Cell viability was determined using the EZ-CyTox (tetrazolium salt, WST-1) cell viability assay kit (Daeil Lab Service, Seoul, Korea), as previously described [33]. Absorbance at 570 nm was detected using a 96-well microplate reader (BioTek Instruments, Winooski, VT, USA). Cell viability is expressed as the fraction of the surviving cells relative to the vehicle-treated cells. Preparation of Nuclear and Cytoplasmic Extracts To prepare nuclear extracts, the cells were lysed using the NE-PER® nuclear and cytoplasmic extraction reagent (Pierce Biotechnology, Rockford, IL, USA) according to the manufacturer's protocol, as previously described [5]. Briefly, the NRK52E cells were harvested by scraping the cells into cold phosphate-buffered saline (PBS), pH 7.2, followed by centrifugation at 14,000× g for 2 min. After removing the supernatant, 100 µL of ice-cold cytoplasmic extraction reagent I was added to the cell pellets and then incubated on ice for 10 min. The ice-cold cytoplasmic extraction reagent II was then added to the tube and centrifuged at 16,000× g for 5 min. The pellet was suspended in 50 µL of ice-cold nuclear extraction reagent, followed by centrifugation at 16,000× g for 10 min. Finally, the supernatant containing the nuclear extract was transferred to a new tube and the protein concentration was measured. ROS Generation Intracellular ROS generation was measured with a 2′,7′-dichlorodihydrofluorescein diacetate (H2DCF-DA) fluoroprobe (Molecular Probes, Eugene, OR, USA). The NRK52E cells were incubated with 5 µM H2DCF-DA for 30 min at 37 °C. Then, the cells were washed, collected by centrifugation, and resuspended in PBS. The fluorescence intensity was measured using a FACSCalibur™ flow cytometer (BD Biosciences, San Jose, CA, USA). Immunohistochemistry and Immunofluorescence Staining The kidneys, fixed in 4% paraformaldehyde, were dehydrated through a graded series of ethanol, embedded in paraffin, sectioned (5 µm), and mounted on glass slides. After deparaffinization and rehydration, antigen retrieval was performed using Antigen Unmasking Solution (Vector Laboratories, Burlingame, CA, USA). Sections were blocked with 2.5% bovine serum albumin in PBS, incubated with the anti-GPR40 antibody overnight at 4 °C, and then with the appropriate secondary antibody. For immunofluorescence labeling with specific tubular markers, the sections were incubated with an Alexa Fluor 568-labeled goat anti-rabbit IgG (1:200 dilution; Invitrogen, Seoul, Korea) secondary antibody after incubation with the anti-GPR40 antibody. Anti-AQP1 and anti-AQP2 antibodies (Alomone Laboratories, Ltd., Jerusalem, Israel) were used as markers for the proximal tubule and for the collecting duct, respectively.
The sections were incubated with FITC-conjugated secondary antibodies and the nuclei were counterstained with 4′,6-diamidino-2-phenylindole. Images were captured using an LSM 510 confocal microscope (Carl Zeiss, Jena, Germany). Statistical Analyses The results are expressed as means ± standard error of the mean. The statistical significance of the differences was determined by the unpaired t-test or one-way analysis of variance followed by the post-hoc Tukey's HSD (honestly significant difference) test. Analyses were performed using GraphPad Prism 6 (GraphPad Software, San Diego, CA, USA), and differences were considered statistically significant when p < 0.05. Conclusions The results of our study demonstrate that the decreased expression of GPR40 in the ureteral obstructed kidney is associated with apoptosis via the activation of TNF-α and AT1R. In addition, the activation of GPR40 attenuates TNF-α-induced apoptosis by inhibiting AT1R expression and ROS generation through the regulation of NF-κB signaling pathways. Conflicts of Interest: The authors declare no conflict of interest.
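For readers who wish to reproduce the group comparisons described in the Statistical Analyses section above, the following is a minimal sketch of a one-way ANOVA followed by Tukey's HSD post-hoc test. The group names and values are illustrative placeholders, not data from this study.

```python
# Sketch of the group comparison: one-way ANOVA followed by Tukey's HSD.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder measurements (e.g., relative cell viability per group)
groups = {
    "vehicle": np.array([1.00, 0.98, 1.03, 0.97]),
    "tnf": np.array([0.62, 0.58, 0.65, 0.60]),
    "tnf_gw9508": np.array([0.85, 0.88, 0.82, 0.86]),
}

f_stat, p_value = stats.f_oneway(*groups.values())
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
tukey = pairwise_tukeyhsd(values, labels, alpha=0.05)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
print(tukey.summary())
```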
244926760
s2orc/train
v2
2021-12-08T16:10:54.085Z
2021-12-05T00:00:00.000Z
A Carrier-Based Discontinuous PWM Strategy of NPC Three-Level Inverter for Common-Mode Voltage and Switching Loss Reduction For the conventional carrier-based pulse width modulation (CBPWM) strategies of neutral point clamped (NPC) three-level inverters, the higher common-mode voltage (CMV) is a major drawback. However, with CMV suppression strategies, the switching loss is relatively high. In order to solve the above issue, a carrier-based discontinuous PWM (DPWM) strategy for the NPC three-level inverter is proposed in this paper. Firstly, the reference voltage is modified by two injections of the zero-sequence voltage. The switching states of the three phases are clamped alternatively to reduce both the CMV and the switching loss. Secondly, the carriers are also modified by the phase opposition disposition of the upper and lower carriers. The extra switching at the border of two adjacent regions in the space vector diagram is reduced. Meanwhile, a neutral-point voltage (NPV) control method is also presented. The duty cycle of the switching state that affects the NPV is adjusted to obtain the balance control of the NPV. Still, the switching sequence in each carrier period remains the same. Finally, the feasibility and effectiveness of the proposed DPWM strategy are tested on a rapid control prototype platform based on RT-Lab. Introduction Compared with two-level inverters, NPC three-level inverters have the advantages of lower output harmonic distortion and lower dv/dt of the power device. Thus, they are widely used in medium-voltage high-power traction applications, such as locomotive traction and ship propulsion [1][2][3]. The common-mode voltage emerges between the neutral points of the DC-link capacitors and the stator windings in NPC three-level inverter-fed motor systems. Numerous hazards are associated with the common-mode voltage. First, a shaft voltage is induced on the motor bearing, and the life of the motor will be shortened by the shaft current [4]. Secondly, there are high-frequency components in the common-mode voltage. They would cause electromagnetic interference to the power supply [5]. Thirdly, the insulation stress of the stator windings is also increased by the CMV. Thus, the aging process of the motor is accelerated [6]. These problems will get worse if the inverter is applied to medium-voltage high-power traction applications. For three-level inverters, there are two ways commonly used to reduce the CMV. The first is the improvement of the topology, and the second is the improvement of the modulation strategy. For the former, the CMV can be reduced by several methods, such as the addition of a fourth bridge [7,8], the modification of the DC-link structure [9], the addition of a common-mode inductance [10] and the addition of a filter [11,12]. All the above methods impose extra hardware on the system, resulting in an increase in cost, volume and loss. For the latter, there is no need for any additional hardware, so it attracts more attention all over the world. The common feature of CMV suppression PWM strategies is that switching states with lower CMV are adopted to synthesize the switching sequence. In [13], five switching states with the lowest CMV are used to synthesize the switching sequence in each carrier period. Although the CMV can be reduced effectively, the switching loss is inevitably increased.
In [14], only the switching states with zero amplitude of CMV can be used, and the output harmonic distortion of the inverter is improved by the proper arrangement of these switching states in each carrier period. However, there are seven switching states in each switching sequence, and the inverter suffers from lower DC-link voltage utilization and higher switching loss. A double modulation wave carrier-based PWM strategy (DMW-CBPWM) is proposed in [15]; the reference voltage is modified to reduce the CMV and the NPV ripple simultaneously. In [16,17], four switching states with lower CMV are used to synthesize the switching sequence in each carrier period, and the NPV balance control is achieved by adjusting the duty cycle of the switching states. In [18], the carriers are arranged in phase opposition, and the CMV and the NPV ripple are both suppressed by injecting a DC component into the reference voltage. As power and voltage levels increase, the switching loss becomes the main component of the overall loss of the system [19][20][21]. Thus, the CMV and the switching loss should both be taken into consideration. Conventional CMV suppression PWM strategies only focus on the performance of the CMV and are not suitable for medium-voltage high-power traction applications. DPWM methods are well known to reduce the switching loss and harmonic distortion at a given average switching frequency. Thus, DPWM strategies are extremely suitable for high-power medium-voltage three-level inverters. The conventional DPWMs for a three-level inverter are proposed in [22]. In [23], an improved DPWM is presented to reduce the switching loss and the harmonic distortion. However, the CMV is still large under higher modulation index conditions. For the purpose of suppressing the CMV and the switching loss simultaneously, a carrier-based DPWM strategy is proposed in this paper. By the modification of the reference voltage, switching states with higher CMV are eliminated in the synthesis of the switching sequence. One of the three phases is clamped to a certain switching state in each switching sequence. Thus, the CMV and the switching loss are both reduced. By the modification of the carriers, the extra switches during the transient process of two adjacent regions are decreased to further reduce the switching loss. The proposed DPWM strategy is validated through experimental results. The experimental setup comprises an OPAL-RT OP5700 rapid control prototype and an NPC three-level inverter feeding an R-L load. Topology of the NPC Three-Level Inverter The topology of the NPC three-level inverter is shown in Figure 1. The DC-link voltage Vdc is equally divided into the upper-capacitor voltage vC1 and the lower-capacitor voltage vC2. iA, iB and iC are the three-phase load currents, respectively. The switching states of the three-level inverter are defined in Table 1. The space vector diagram for the three-level inverter is shown in Figure 2, with 3^3 = 27 basic vectors. According to the amplitude, the basic vectors can be divided into four categories: large vectors V1-V6, medium vectors V7-V12, small vectors V13U-V18U and V13L-V18L, and zero vectors V0U, V0M and V0L. As can be seen, there are two small vectors that occupy the same position, and they are defined as redundant vectors. The definition is also suitable for the zero vectors.
CBPWM of the Three-Level Inverter For conventional CBPWM of the three-level inverter, the pulse signal for each power device is produced by the comparison of the carriers and the reference voltage vx (x ∈ {A, B, C}). The reference voltages of the three phases are defined as in (1), where θ is the phase angle of the reference voltage and m is the modulation index. The modulation index is defined as in (2), where Vm is the amplitude of the reference voltage within the linear modulation range and m ∈ [0, 1]. For sinusoidal PWM (SPWM), the DC-link voltage is not fully utilized compared with space vector PWM (SVPWM). This problem can be solved by the injection of the zero-sequence voltage vZ1 given in (3), where vmax = max(vA, vB, vC) and vmin = min(vA, vB, vC) are the maximum and minimum values of the three-phase reference voltages at any arbitrary instant. After the injection of vZ1, the reference voltage is modified to v′x = vx + vZ1 (4). The waveform of the reference voltage v′x is illustrated in Figure 3. The switching sequence of each phase can be obtained by the comparison of the reference voltage and the two phase-disposition carriers. The switching sequence of a unit carrier period is shown in Figure 4.
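A minimal sketch of this first injection step is given below. Because equations (1)-(3) are not reproduced above, two assumptions are made: the reference amplitude is taken as Vm = m·Vdc/2, and vZ1 uses the common min-max (SVPWM-equivalent) zero-sequence injection vZ1 = −(vmax + vmin)/2.

```python
# Sketch of the first zero-sequence injection step (assumed forms, see lead-in).
import numpy as np

def reference_with_vz1(m, theta, v_dc):
    """Return the three-phase references vx and the modified v'x = vx + vZ1."""
    # Assumed reference amplitude Vm = m * Vdc / 2
    v = 0.5 * m * v_dc * np.cos(theta - np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3]))
    v_z1 = -0.5 * (v.max() + v.min())   # assumed min-max zero-sequence injection
    return v, v + v_z1

v_abc, v_prime = reference_with_vz1(m=0.8, theta=np.pi / 7, v_dc=600.0)
```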
Common-Mode Voltage The CMV of a three-level inverter refers to the voltage between the neutral point of the DC-link capacitors and the neutral point of the three-phase loads, which is defined as vCM = (vAO + vBO + vCO)/3 (5), where vCM is the common-mode voltage and vAO, vBO and vCO are the three-phase voltages, respectively. From (5) and Table 1, the CMV of each basic vector can be obtained, as shown in Table 2. Table 2. Common-mode voltage of each basic vector. The amplitude of CMV for a certain switching sequence is determined by the basic vector with the largest amplitude of CMV. Taking switching sequence V13U-V14L-V0M-V13L (ONN-OON-OOO-POO) as an example (as shown in Figure 4), the amplitude of CMV is Vdc/3. Obviously, the small and zero vectors with the lower amplitude of CMV need to be adopted during the synthesis of the switching sequence in order to reduce the amplitude of CMV. Switching Frequency As shown in Figure 4, small vectors V13U and V13L are both used to synthesize the switching sequence. The switching states of all three phases change within the carrier period, so the switching sequence belongs to continuous PWM (CPWM). If only the small vector V13U is adopted, the switching sequence becomes V13U-V14L-V0M (ONN-OON-OOO), while if only the small vector V13L is adopted, the switching sequence becomes V14L-V0M-V13L (OON-OOO-POO). For these two sequences, the switching states of phases A and C are both changed, but the switching state of phase B remains the same. Thus, these switching sequences belong to DPWM. With DPWM, only one of the redundant vectors appears in each carrier period. As a result, the switching loss can be reduced. Switching Sequence Design The space vector diagram can be divided into six sectors SI-SVI, with the large vectors V1-V6 as the boundaries. Each sector can be further divided into four regions RI-RIV. In order to reduce the CMV, small vectors V13U-V18U and zero vectors V0U and V0L are abandoned, while the large vectors, the medium vectors, small vectors V13L-V18L and the zero vector V0M are adopted to synthesize the switching sequence. Thus, the amplitude of CMV can be limited to Vdc/6. Taking sector SI as an example: in region RI, V0M (OOO), V7 (PON) and V13L (POO) are used to synthesize the switching sequence; in region RIII, V1 (PNN), V7 (PON) and V13L (POO) are used; in region RIV, V2 (PPN), V7 (PON) and V14L (OON) are used. From the aforementioned analysis, phase B is clamped to the switching state O in regions RI and RII. It can be defined as clamping state B0. Similarly, phase A is clamped to the switching state P in region RIII, and it can be defined as clamping state A+; phase C is clamped to the switching state N in region RIV, and it can be defined as clamping state C−. The clamping state of the other sectors can be obtained by replacing each vector in SI with the corresponding vector that occupies the identical position in the given sector, as shown in Figure 5.
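The content of Table 2 can be reproduced directly from equation (5), as in the short sketch below. The phase-voltage levels P → +Vdc/2, O → 0 and N → −Vdc/2 (measured from the DC-link midpoint) are assumed from the usual NPC convention rather than copied from Table 1, and values are expressed in units of Vdc.

```python
# Sketch reproducing the per-state CMV of Table 2 from eq. (5).
from itertools import product
from fractions import Fraction

# Assumed phase-voltage level of each switching state, in units of Vdc
LEVEL = {"P": Fraction(1, 2), "O": Fraction(0), "N": Fraction(-1, 2)}

cmv = {"".join(s): sum(LEVEL[p] for p in s) / 3   # eq. (5): (vAO + vBO + vCO)/3
       for s in product("PON", repeat=3)}         # all 3^3 = 27 switching states

# Sequences built only from states with |vCM| <= Vdc/6 keep the CMV amplitude
# at Vdc/6, consistent with discarding ViU, V0U and V0L as described above.
low_cmv = sorted(s for s, v in cmv.items() if abs(v) <= Fraction(1, 6))
print(cmv["ONN"], cmv["PON"], cmv["OOO"], len(low_cmv))
```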
Carrier-Based Implementation The aforementioned switching sequences can be obtained by the modification of both the reference voltage and the carrier. For the reference voltage, a specific zero-sequence voltage needs to be injected to realize the clamping state in each region, so that the CMV and the switching loss are reduced simultaneously. For the carrier, the upper and lower carriers are changed from phase disposition to phase opposition disposition, which reduces the extra switches at the boundary of two adjacent regions. The detailed process is as follows. Reference Vector Modification: At any arbitrary instant, the maximum, medium and minimum values of the three-phase reference voltages v′x are obtained as vmax, vmid and vmin, where max(·), mid(·) and min(·) return the maximum, medium and minimum value of v′A, v′B and v′C. The relationship between vmax, vmid, vmin and v′x in each sector is shown in Table 3. Table 3. Relationship between vmax, vmid, vmin and v′x. Taking sector SI as an example: in regions RI and RII, phase B needs to maintain clamping state 0. From Table 3, the reference voltage of phase B is vmid; thus, −vmid needs to be injected to keep the switching state of phase B at O. In region RIII, phase A needs to maintain clamping state +. The reference voltage of phase A is vmax; thus, −vmax + Vdc needs to be injected to keep the switching state of phase A at P. In region RIV, phase C needs to maintain clamping state −. The reference voltage of phase C is vmin; thus, −vmin − Vdc needs to be injected to keep the switching state of phase C at N. In conclusion, in order to realize the clamping states in Figure 5, the zero-sequence voltage vZ2 needs to be injected into the reference voltage: vZ2 = −vmid in regions RI and RII, vZ2 = −vmax + Vdc in region RIII, and vZ2 = −vmin − Vdc in region RIV. The three-phase reference voltage after the injection of vZ2 is v″x = v′x + vZ2. The three-phase reference voltages v″x in a unit fundamental period are shown in Figure 6. The switching states of the three phases are clamped to the positive bus, the negative bus and the neutral point of the DC-link alternatively, and the discontinuous modulation is realized. Carrier Modification: For conventional CBPWM of the three-level inverter, the upper and lower triangular carriers are in phase with each other. The PWM signals of each phase can be obtained by the comparison of the reference voltage v″x and the two carriers. In sector SI, when the reference voltage travels from region RIII to region RI, the reference voltage and the corresponding switching sequence are shown in Figure 7a.
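A minimal sketch of the region-dependent injection vZ2 for sector SI follows. Region detection and the carrier normalisation behind the ±Vdc clamp levels are assumptions based on the in-text description rather than on the paper's (unshown) equations.

```python
# Sketch of the second, region-dependent injection vZ2 for sector S_I.
def inject_vz2(v_prime, region, v_dc):
    """v_prime: dict {'A': v'_A, 'B': v'_B, 'C': v'_C}; returns v''_x = v'_x + vZ2."""
    v_max, v_min = max(v_prime.values()), min(v_prime.values())
    v_mid = sum(v_prime.values()) - v_max - v_min
    if region in ("R_I", "R_II"):      # clamp the mid phase to O
        v_z2 = -v_mid
    elif region == "R_III":            # clamp the max phase to P
        v_z2 = v_dc - v_max
    elif region == "R_IV":             # clamp the min phase to N
        v_z2 = -v_dc - v_min
    else:
        raise ValueError(region)
    return {phase: v + v_z2 for phase, v in v_prime.items()}
```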
As can be seen in Figure 7a, the switching sequence is PNN-PON-POO-PON-PNN in region RIII, and the amplitude of CMV is Vdc/6. The switching sequence is OON-PON-POO-PON-OON in region RI, and the amplitude of CMV is also Vdc/6. Comparing Figure 7a with Figure 4, the amplitude of CMV is reduced from Vdc/3 to Vdc/6 by the injection of vZ2. Moreover, the number of switches in each carrier period is reduced from three to two. However, it is worth mentioning that there are extra switches during the transition of two adjacent regions. As shown in Figure 7a, the last switching state of region RIII is PNN, and the first switching state of region RI is OON. The switching states of phase A and phase B are both changed, so that the switching loss is increased. To solve the above issue, the two triangular carriers need to be changed from phase disposition to phase opposition disposition, as shown in Figure 7b. After that, the switching sequence in region RIII becomes POO-PON-PNN-PON-POO, and the switching sequence in region RI becomes OOO-POO-PON-POO-OOO. The amplitude of CMV is still Vdc/6. Meanwhile, only the switching state of phase A is changed during the transition of two adjacent regions. In conclusion, the extra switches can be reduced by the phase opposition disposition of the carriers, and the switching loss can be further suppressed. NPV Balance Control In high-power medium-voltage applications, the NPV balance also needs to be considered, besides the CMV and the switching loss. The NPV is defined as ∆v = vC1 − vC2, and vth is the threshold value of ∆v. When |∆v| < vth, the NPV does not need to be controlled. When |∆v| > vth, the NPV ripple must be suppressed. The voltages of the upper capacitor vC1 and lower capacitor vC2 are affected by the neutral point current iO. The neutral point currents are generated by the small vectors and medium vectors. Thus, the NPV balance can be realized by adjusting the duty cycle of the small and medium vectors in each region of the different sectors. Taking sector SI as an example, the duty cycle of V13L (POO, iO = −iA) can be adjusted to balance the NPV in region RI, while the duty cycle of V7 (PON, iO = iB) can be adjusted to balance the NPV in region RIII. In region RI, the neutral point current iO generated by V13L (POO) is less than zero. When ∆v > 0, the duty cycle of V13L (POO) needs to be increased; when ∆v < 0, the duty cycle of V13L (POO) needs to be reduced. In region RIII, the neutral point current iO generated by V7 (PON) is greater than zero.
When ∆v > 0, the duty cycle of V7 (PON) needs to be reduced; when ∆v < 0, the duty cycle of V7 (PON) needs to be increased. Similarly, the duty cycle adjustment rules of the small and medium vectors in the other sectors are listed in Table 4. The vectors in Table 4 are defined as master vectors, and the other vectors in the same switching sequence are defined as slave vectors. The duty cycle of the master vector can be adjusted by injecting the compensation voltage vos into the reference voltage v″x to move the reference voltage up or down. Taking sector SI as an example, in region RI the duty cycle of V13L (POO) will be increased if the reference voltage vA is moved up, and reduced if vA is moved down, as illustrated in Figure 8a. In region RIII, the duty cycle of V7 (PON) will be increased if the reference voltage vB is moved down, and reduced if vB is moved up, as illustrated in Figure 8b. It is worth mentioning that the shift of the reference voltage only changes the duty cycles of the master vector and of vectors that do not affect the NPV. Therefore, the NPV balance can be realized by the shift of the reference voltage. The calculation process of the compensation voltage vos is shown in Figure 9. When |∆v| < vth, vos is set to 0 and the three-phase reference voltage v″x remains the same. When |∆v| > vth, the compensation voltage vos is obtained by multiplying the output of the PI controller by sign(∆v·ix·vx). Then, vos is injected into v″x according to Table 5. The reference voltage after compensation is v*x = v″x + vos. For the NPV balance control, it is necessary to limit the amplitude of v*x to ensure that the switching sequence is not changed. When v″x > 0, v*x will be set to zero if v*x < 0, and set to v″max if v*x > v″max. When v″x < 0, v*x will be set to zero if v*x > 0, and set to v″min if v*x < v″min, where v″max and v″min are the maximum and minimum values of v″x. In conclusion, the CMV and the switching loss can be reduced by the modification of the reference voltage and the carriers. Meanwhile, the NPV balance can also be realized by the shift of the reference voltage without changing the switching sequence. The block diagram of the proposed DPWM is shown in Figure 10.
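Before the implementation steps are listed, the compensation calculation of Figure 9 can be sketched as below. The PI controller is assumed here to act on |∆v|, and the selection of the phase to inject into (Table 5) is left to the caller; both are simplifying assumptions, not the paper's exact controller.

```python
# Sketch of the NPV compensation: below the threshold nothing is injected;
# above it, a PI term sets the magnitude of vos and sign(dv*ix*vx) its polarity.
class NpvCompensator:
    def __init__(self, kp, ki, v_th):
        self.kp, self.ki, self.v_th = kp, ki, v_th
        self._integral = 0.0

    def v_os(self, dv, i_x, v_x, dt):
        """dv = vC1 - vC2; i_x, v_x: load current and reference of the selected phase."""
        if abs(dv) < self.v_th:
            return 0.0                        # NPV left uncontrolled
        self._integral += abs(dv) * dt
        magnitude = self.kp * abs(dv) + self.ki * self._integral
        sign = 1.0 if dv * i_x * v_x > 0 else -1.0
        return sign * magnitude               # added to v''_x to give v*_x

comp = NpvCompensator(kp=0.05, ki=2.0, v_th=5.0)
```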
The detailed steps are listed as follows: STEP 1: the zero-sequence voltage vZ1 is injected into the reference voltage vx; STEP 2: the sector and region are determined by the three-phase reference voltage; STEP 3: the zero-sequence voltage vZ2 is injected into the reference voltage v′x; STEP 4: the compensation voltage vos is generated and injected into the reference voltage v″x; STEP 5: the pulse signal of each power switch is generated by the comparison of the reference voltage v*x and the carriers. Experimental Verification In order to verify the feasibility and effectiveness of the proposed DPWM strategy, the rapid control prototype OP5700 (OPAL-RT Co. Ltd., Richardson, Canada) and the NPC three-level power module PEN8018 (Imperix Co. Ltd., Sion, Switzerland) are adopted to establish the experimental setup, as shown in Figure 11. Figure 11. The prototype of the OPAL-RT® OP5700-driven three-level inverter. The proposed DPWM strategy is compared with the conventional CBPWM, the conventional DPWM [22] and the DMW-CBPWM [15]. Parameters of the experimental setup are listed in Table 6. When R = 10 Ω and L = 10 mH, the power factor cos ϕ is 0.954. When R = 10 Ω and L = 30 mH, the power factor cos ϕ is 0.732. The phase voltage vAO, line voltage vAB, phase current iA, common-mode voltage vCM and the upper and lower-capacitor voltages vC1/vC2 are shown in Figure 12. Common-Mode Voltage The switching sequences and the corresponding CMV of the four tested strategies in a unit carrier period are shown in Figure 13. For the conventional CBPWM, small vectors ViU (i ∈ {13, 14, ..., 18}) are used to synthesize the switching sequence. For the conventional DPWM, zero vectors V0U and V0L are used to synthesize the switching sequence. Thus, the amplitude of the CMV is higher for these two strategies. For the DMW-CBPWM and the proposed DPWM, small vectors ViU and zero vectors V0U and V0L are abandoned, so that the amplitude of the CMV is lower, as can also be seen in Figure 12. Switching Loss In each carrier period, the number of switching actions is two for the conventional DPWM and the proposed DPWM strategy, three for the conventional CBPWM and four for the DMW-CBPWM, as shown in Figure 13. The efficiency of the inverter with the above four strategies is recorded by a Yokogawa WT5000 power analyzer, as listed in Table 7. The efficiency of the proposed DPWM is higher than that of the other three strategies, which verifies that the switching loss of the inverter can be suppressed by the proposed DPWM. Harmonic Distortion As shown in Figure 12, the output current ripple of the DMW-CBPWM is the highest. The output current ripple of the other three strategies is not very different from each other.
By FFT analysis of the experimental results, the harmonic spectra of the output current waveforms under m = 0.3 and m = 0.8 are shown in Figure 14. Under lower modulation index and higher power factor conditions, the THD of the proposed DPWM is greater than that of the conventional CBPWM and the conventional DPWM but less than that of the DMW-CBPWM. Under lower power factor conditions, regardless of the modulation index, the THD of the proposed DPWM is smaller than that of the other three strategies. Therefore, the output waveform quality of the inverter can be improved by the proposed DPWM strategy under certain load conditions. Neutral Point Voltage If |∆v| < vth, the NPV will not be controlled. As shown in Figure 12, the upper and lower-capacitor voltages are approximately the same for all four strategies under lower modulation index conditions. Under higher modulation index conditions, there are low-frequency components, at 150 Hz (three times the fundamental frequency of the output current), in both the upper and lower-capacitor voltages for the conventional CBPWM, the conventional DPWM and the proposed DPWM. However, the neutral point voltage is still self-balanced on the fundamental period level. Taking the proposed DPWM as an example, the detailed analysis is as follows. After the injection of vZ2, the dwell time of switching state O for each phase can be expressed as in (7)-(9). As shown in Figure 15, in a unit carrier period, the neutral point current iO is given by (10). The reference voltage, output current and neutral point current can be regarded as functions of the phase angle θ. Taking sector SI as an example, when the reference voltage is in region RI, phase B is clamped to switching state O and dBO = 1. Substituting (7), (8) and (9) into (10) yields (11). When the phase angle is θ + π/3, phase A is clamped to switching state O and dAO = 1. The neutral point current is then given by (12). The reference voltage and the output current are three-phase symmetrical, as expressed in (13). Substituting (13) into (12) yields iO(θ) = −iO(θ + π/3). This indicates that the NPV generated by the neutral point current at any arbitrary instant will be offset by the neutral point current lagging by π/3. Thus, there is always a triple-fundamental-frequency component in the neutral point voltage. However, the neutral point can still be considered balanced on a fundamental period level, as shown in Figure 12a,b. If |∆v| > vth, the proposed NPV balance control method will be adopted. Figure 16 shows experimental results for modulation indices of m = 0.3 and m = 0.8, respectively, which illustrate the NPV balance scenario described earlier. The upper and lower capacitor voltages are intentionally unbalanced by 100 V. As can be seen, the voltage difference can be eliminated by the injection of vos under both higher and lower modulation index conditions. The settling time is significantly smaller under the higher modulation index and high power factor conditions because a larger value of vos can be added.
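The THD figures discussed in the Harmonic Distortion subsection above can be approximated from a sampled current waveform as in the sketch below. The power analyser's exact algorithm is not specified, so this is only an illustration with a synthetic signal.

```python
# Sketch of a THD computation from a sampled phase current via FFT.
import numpy as np

def thd(current, fs, f1=50.0, n_harmonics=40):
    """Total harmonic distortion of `current` sampled at `fs` Hz, fundamental f1."""
    n = len(current)
    spectrum = np.abs(np.fft.rfft(current * np.hanning(n))) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    def mag(f):                       # magnitude of the bin closest to f
        return spectrum[np.argmin(np.abs(freqs - f))]

    fundamental = mag(f1)
    harmonics = np.sqrt(sum(mag(k * f1) ** 2 for k in range(2, n_harmonics + 1)))
    return harmonics / fundamental

# Synthetic 50 Hz current with a small 5th-harmonic ripple
t = np.arange(0, 0.2, 1e-5)
i_a = np.sin(2 * np.pi * 50 * t) + 0.03 * np.sin(2 * np.pi * 250 * t)
print(f"THD = {100 * thd(i_a, fs=1e5):.2f} %")
```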
Conclusions A carrier-based DPWM strategy for the NPC three-level inverter is proposed in this paper. This strategy can be used in NPC three-level inverters for industrial applications, such as traction inverters in rail transit systems and mine hoist systems, where lower common-mode voltage and lower switching loss are required. Firstly, the reference voltage is modified by two injections of the zero-sequence voltage: the DC-link voltage utilization of the inverter is enhanced by the first injection, and the three phases are then clamped alternately by the second injection to reduce the CMV and the switching loss. Secondly, the carrier is modified by phase opposition disposition to reduce the extra switching actions during the transition between two adjacent regions. Finally, without changing the switching sequence in each carrier period, NPV balance is achieved by injecting the compensation voltage into the reference voltage. In addition, the output harmonic distortion is also improved by the proposed strategy under certain load conditions.
Profiles of Organic Food Consumers in a Large Sample of French Adults: Results from the Nutrinet-Santé Cohort Study Background Lifestyle, dietary patterns and nutritional status of organic food consumers have rarely been described, while interest in a sustainable diet is markedly increasing. Methods Consumer attitudes and frequency of use of 18 organic products were assessed in 54,311 adult participants in the Nutrinet-Santé cohort. Cluster analysis was performed to identify behaviors associated with organic product consumption. Socio-demographic characteristics, food consumption and nutrient intake across clusters are provided. The cross-sectional association with overweight/obesity was estimated using polytomous logistic regression. Results Five clusters were identified: 3 clusters of non-consumers whose reasons differed, and occasional (OCOP, 51%) and regular (RCOP, 14%) organic product consumers. RCOP were more highly educated and physically active than the other clusters. They also exhibited dietary patterns that included more plant foods and fewer sweet and alcoholic beverages, less processed meat and less milk. Their nutrient intake profiles (fatty acids, most minerals and vitamins, fibers) were healthier and they more closely adhered to dietary guidelines. In multivariate models (after accounting for confounders, including level of adherence to nutritional guidelines), compared to those not interested in organic products, RCOP participants showed a markedly lower probability of overweight (excluding obesity) (25 ≤ body mass index < 30) and obesity (body mass index ≥ 30): −36% and −62% in men and −42% and −48% in women, respectively (P<0.0001). OCOP participants generally showed intermediate figures. Conclusions Regular consumers of organic products, a sizeable group in our sample, exhibit specific socio-demographic characteristics and an overall healthy profile, which should be accounted for in further studies analyzing organic food intake and health markers. Introduction During the FAO international conference held in 2010 [1], a global definition of sustainable diets was proposed: "Sustainable diets are those diets with low environmental impact which contribute to food and nutrition security and to a healthy life for present and future generations. Sustainable diets are protective and respectful of biodiversity and ecosystems, culturally acceptable, accessible, economically fair and affordable; nutritionally adequate, safe and healthy, while optimizing natural and human resources". In the light of this definition, it is clear that a major challenge exists for nutrition specialists and health care workers [2]. In most industrialized countries, it is widely recognized that current lifestyle and dietary patterns, particularly energy-dense diets rich in saturated fats and added sugars, are not optimal for sustaining health [3,4]. Indeed, these lifestyles are at least partly responsible for the growing rates of overweight and obesity, which are in turn associated with the increasing prevalence of chronic diseases such as metabolic syndrome, type 2 diabetes, cardiovascular diseases and some cancers [3,4]. In most countries, a small fraction of farmers and the general population have long shown great concern about this question.
Indeed, facing the changes that have taken place in the food production system, the refusal of chemical fertilizers, pesticides and intensive animal husbandry since the 1970s gave rise to so-called "organic", "biological", "biodynamic" and "agro-ecological" production systems, depending on the options and/or the country. These alternative production systems are now being recognized because of their low environmental impact [5] and are being certified according to specific regulations and labels in most countries and continents. Such organic production has markedly increased during the last decade, representing up to 3-20% (mean 5.1%) of agricultural acreage in European Union countries, but only 0.6% in the USA [6]. This has been largely driven by consumer attitudes and the growing demand for specific foodstuffs, with a yearly increase of over 10%, reaching, in 2010, a worldwide production of 700 million tons of food per year and a market share of about 60 billion US $/year [7]. In 2010, the countries with the largest markets were the United States, Germany and France [6]. In this context, a diet based on organic products may better meet the definition of sustainability. From a public health point of view, it is thus crucial to understand and analyze organic-product-related consumer profiles. Indeed, while the number of consumers of organic food is markedly rising, limited knowledge is available regarding the nutritional interest and safety of organic food [8][9][10][11]. Moreover, only small-scale studies have described the profiles of organic consumers [12][13][14][15][16][17] and little information is available regarding their actual food and nutrient intakes [18] or diet-related health indicators [19][20][21]. Thus, within the framework of the large ongoing web-based Nutrinet-Santé Cohort Study [22], which already included about 104,000 participants by the end of 2011, we sought here to describe the socio-demographic profiles of organic food consumers, along with their food and nutrient intakes and anthropometric characteristics. Population We analyzed data from the Nutrinet-Santé Study, a large web-based prospective observational cohort launched in France in May 2009 with a scheduled follow-up of 10 years (recruitment planned over a 5-year period) that aims to investigate the relationship between nutrition and health as well as determinants of dietary behavior and nutritional status. The design, methods and rationale of the Nutrinet-Santé Study have been described in detail elsewhere [22]. Briefly, the study was implemented in a general population and is targeting volunteer adult Internet-users aged 18 or older. Participants were included in the cohort after completing a baseline set of web questionnaires collecting information on socio-demographic conditions, anthropometry, lifestyle, dietary intake (using repeated 24-h records) and physical activity, along with health status [22]. Baseline questionnaires were compared with traditional assessment methods (paper forms or interview by a dietician) [23][24][25]. Approximately every month, participants are invited to fill in optional complementary questionnaires related to determinants of food behavior and nutritional and health status.
Ethics Statement This study is being conducted according to the guidelines laid down in the Declaration of Helsinki and was approved by the International Research Board of the French Institute for Health and Medical Research (IRB Inserm no. 0000388FWA00005831) and the "Comité National Informatique et Liberté" (CNIL no. 908450 and no. 909216). Electronic informed consent was obtained from all subjects. Data Collection Organic food questionnaire. Two months after inclusion, participants were asked to provide information about organic products via an optional questionnaire. Questions were asked about opinions on prices, nutritional quality, taste and the health and environmental impact of organic products. Participants were also asked to report the frequency of consumption/use, or else the reasons for non-consumption/non-use, of 18 organic products (fruit, vegetables, soya, dairy products, meat and fish, eggs, grains and legumes, bread and cereals, flour, vegetable oils and condiments, ready-to-eat meals, coffee/tea/herbal tea, wine, biscuits/chocolate/sugar/marmalade, other foods, dietary supplements, textiles and cosmetics). The eight possible responses were as follows: 1) most of the time; 2) occasionally; 3) never (too expensive); 4) never (product not available); 5) never ("I'm not interested in organic products"); 6) never ("I avoid such products"); 7) never (for no specific reason); and 8) "I don't know". Socio-demographic and lifestyle data. At baseline, socio-demographic data included age, gender, education (≤ high school diploma, high school, post-secondary graduate), co-habitation or not, smoking status (never, former and current), number of children and income. Income per household unit was calculated using information about household income and composition. Thus, household income per month was divided by the number of consumption units (CU), calculated as 1 CU for the first adult in the household, 0.5 CU for other persons aged 14 or older and 0.3 CU for children under 14 [26]. The following categories of monthly income were used: <1,200, 1,200-1,800, 1,800-2,700 and >2,700 euros per household unit. Leisure-time physical activity was assessed using the French short form of the International Physical Activity Questionnaire (IPAQ), self-administered online [27][28][29]. Data obtained using the IPAQ were expressed as metabolic equivalent task minutes per week. The recommended IPAQ categories of physical activity were used: low (<30 min brisk walking/day), moderate (30-<60 min brisk walking/day or equivalent) and high (≥60 min brisk walking/day or equivalent). The anthropometric questionnaire provided data on current height, weight and the practice of restrictive diets (type and reason, history) [25]. Dietary data assessment. Dietary data were collected at baseline using three 24-h records randomly distributed within a two-week period, including two week days and one weekend day [22]. Participants reported all foods and beverages consumed throughout the day: breakfast, lunch, dinner and all other occasions. Portion sizes were then estimated using purchase units, household units and photographs derived from a previously validated picture booklet [30]. No specific information was requested on whether the foods eaten were organic or conventional. Consumption of fish and seafood per week was assessed by a specific frequency question. Nutrient intakes were estimated using the ad hoc NutriNet-Santé composition table, which includes more than 2,000 foods.
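As a small illustration of the consumption-unit arithmetic described above (the function below is not from the study and the household composition fields are hypothetical), the monthly income per household unit and its study category can be computed as follows.

```python
# Minimal sketch of the consumption-unit (CU) computation used to derive
# monthly income per household unit: 1 CU for the first adult, 0.5 CU for
# every other person aged 14 or older, 0.3 CU for each child under 14.
def income_per_household_unit(monthly_income, n_members_14_plus, n_children_under_14):
    """Return monthly income per consumption unit (euros)."""
    if n_members_14_plus < 1:
        raise ValueError("a household needs at least one member aged 14 or older")
    cu = 1.0 + 0.5 * (n_members_14_plus - 1) + 0.3 * n_children_under_14
    return monthly_income / cu

def income_category(income_per_cu):
    """Map income per CU to the study's four categories (euros/month)."""
    if income_per_cu < 1200:
        return "<1,200"
    if income_per_cu < 1800:
        return "1,200-1,800"
    if income_per_cu < 2700:
        return "1,800-2,700"
    return ">2,700"

# Example: two adults and one 10-year-old child with 3,600 euros/month
# -> 3,600 / (1 + 0.5 + 0.3) = 2,000 euros per household unit.
value = income_per_household_unit(3600, n_members_14_plus=2, n_children_under_14=1)
print(value, income_category(value))   # 2000.0 '1,800-2,700'
```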
Statistical Analysis and Data Treatment Body mass index (BMI) was calculated as the ratio of weight in kilograms to squared height in meters (kg/m2). In the present study, for each participant, daily mean food consumption was calculated from the 24-h records, weighting weekdays and weekend days to represent a full week. Identification of under-reporting participants was based on the validated published method proposed by Black [31], using Schofield equations for estimating resting metabolic rates [32]. For those with available data, we computed the PNNS-GS (Programme National Nutrition Santé-Guidelines Score), a score reflecting adherence to French nutritional recommendations [33], extensively described elsewhere [34]. Briefly, the original score includes 13 components: eight refer to food serving recommendations (fruit and vegetables, starchy foods, whole grain products, dairy products, meat, eggs and fish, seafood, vegetable fat, water and soda), four refer to moderation in consumption (added fat, salt, sweets, alcohol) and one represents physical activity. Points are deducted for overconsumption of salt and sweets and when energy intake exceeds the necessary energy level by more than 5%. Full details regarding the computation of this score can be found in Table S1. For the present analysis, we computed a modified version of the PNNS-GS (mPNNS-GS) which did not include the physical activity component. Multiple correspondence analysis (MCA) was performed on the responses to the organic product questionnaire. The number of dimensions retained was determined according to the following criteria: eigenvalue >1, scree test and interpretability of the extracted dimensions. Then, cluster analysis was used to perform hierarchical ascendant classification using Ward's method based on the first three dimensions retained in the MCA procedure [36]. To test the stability of the method, concordance between the classification performed on the whole sample and on a random sample including half of the population was tested. The kappa coefficient was high (85%). In addition, the classification was stable across genders. Due to well-known differences in dietary patterns between men and women, all subsequent analyses were stratified by gender. In order to better understand the selected sample, we compared the characteristics of included and excluded NutriNet-Santé participants using chi-square tests and Student t-tests, as appropriate. Socio-demographic characteristics of the sample are presented for men and women, as well as for the overall sample. For each individual, and to better describe the clusters, we counted the number of times each of the 8 types of responses (i.e. most of the time; occasionally; never because too expensive; never because not available; never because not interested in organic products; never because "I avoid this product"; never, for no specific reason; "I don't know") was given across the 18 items. Profiles were described in terms of socio-demographic and lifestyle data, food group and nutrient intake by gender. P values refer to chi-square or non-parametric Kruskal-Wallis tests. Energy adjustment was performed using the residual method for nutrient intake. Univariate and multivariate models were used to estimate the association of overweight (excluding obesity) (25 ≤ BMI < 30) and obesity (BMI ≥ 30) with profiles of organic food consumers using polytomous logistic regression (reference = BMI < 25) [37]. Odds ratios and 95% confidence intervals were provided.
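A schematic illustration of this analytic pipeline is sketched below. It is not the study's SAS code: the variable names are hypothetical placeholders, the retained MCA coordinates are assumed to be already available as a numeric array, and Ward clustering via SciPy plus a multinomial logit via statsmodels stand in for the procedures actually used.

```python
# Minimal sketch (not the study's SAS code): hierarchical ascendant
# classification with Ward's method on the retained MCA dimensions, followed
# by a polytomous (multinomial) logistic regression of BMI category on
# cluster membership. `mca_coords`, `bmi_category` and `covariates` are
# hypothetical placeholders for the study variables.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
import statsmodels.api as sm

rng = np.random.default_rng(0)
mca_coords = rng.normal(size=(500, 3))          # first 3 MCA dimensions (placeholder data)

# Ward hierarchical clustering, cut into 5 clusters (labels 1..5).
Z = linkage(mca_coords, method="ward")
clusters = fcluster(Z, t=5, criterion="maxclust")

# Polytomous logistic regression: BMI category (0 = <25, 1 = 25-<30, 2 = >=30)
# on cluster dummies (cluster 1 as reference) plus adjustment covariates.
bmi_category = rng.integers(0, 3, size=500)      # placeholder outcome
covariates = rng.normal(size=(500, 2))           # e.g. age, physical activity (placeholders)
cluster_dummies = np.column_stack([(clusters == k).astype(float) for k in range(2, 6)])
X = sm.add_constant(np.column_stack([cluster_dummies, covariates]))

fit = sm.MNLogit(bmi_category, X).fit(disp=False)
print(np.exp(fit.params))   # odds ratios relative to the BMI<25 reference category
```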
The final model was adjusted for age, smoking status, physical activity, education, restrictive diet and quality of the diet (mPNNS-GS). Tests of statistical significance were 2-sided and the type I error was set at 5%. Statistical analyses were performed using SAS software (version 9.1, SAS Institute Inc, Cary, NC, USA). Results For the present analysis, we focused on participants included in the Nutrinet-Santé Study between June 2009 and December 2011. Among these 104,252 participants, we selected only those who filled in the second optional questionnaire (month 2) (N = 70,069), had complete and valid dietary data (three 24-h records) (N = 61,867) and were not under-reporters (N = 54,322). We also eliminated those with missing covariates, leaving 54,311 participants in the present analysis. Characteristics of the Sample Descriptive information on the overall sample is presented in Table 1. Among the 54,311 participants, mean age was 43.7 ± 14.4 years and 77% were women; 64.5% had reached a post-secondary degree and 49.8% were never-smokers. The average BMI was 23.8 ± 4.5; 21.6% and 8.7% were overweight and obese, respectively. Organic products were perceived as being better for health and the environment by 69.9% and 83.7% of the participants, respectively. However, 51% of non-consumers declared that organic products were too expensive (Table S3). Profiles of Organic Product Consumers We identified 5 clusters (clusters 1 to 5), as shown in Table 2. Two of these were composed of consumers of organic products (COP), including regular consumers (cluster 5: RCOP) and occasional consumers (cluster 4: OCOP). Most participants were occasional consumers (OCOP): 52% of women and 48% of men. Moreover, RCOP comprised 11% of men and 15% of women. Three other clusters grouped individuals who generally did not consume organic products because of the high cost (cluster 3), because they avoided such products (cluster 2) or because they were not interested in organic products (cluster 1). General characteristics across clusters and genders are presented in Table 3. RCOP males were younger and more often never-smokers than others, while RCOP females were older and more often former smokers. In both genders, consumption of organic foods was associated with a higher education level, lower BMI and higher level of physical activity, along with less frequent restrictive dieting. As expected, cluster 3 participants, i.e. those who stated that organic foods were too expensive, had a lower income and education level. They also more often reported a restrictive diet. Income per household unit in the other four clusters was high and fairly similar among clusters. In addition, participants who were uninterested in organic products (cluster 1) displayed weaker adherence to nutritional guidelines compared to RCOP (Table 3): 7.7 ± 1.7 versus 8.4 ± 1.8 in men and 7.9 ± 1.8 versus 8.7 ± 1.7 in women, respectively. Adherence to nutritional guidelines was similar in clusters 1, 2 and 3. Dietary Intake According to Profile of Organic Product Consumers Food intakes for the different clusters are shown in Table 4. For clarity, we focused on differences greater than 20%.
Compared to RCOP participants, those in cluster 1 showed lower consumption of healthy foods such as fruit (−20% in men, −31% in women), vegetables (−27% in men, −28% in women), legumes (−49% in men, −85% in women), vegetable oils (−38% in men, +36% in women), whole grains (−247% in men, −153% in women) and nuts (−239% in men, −381% in women) and higher consumption of sweet soft drinks (+34% in men, +46% in women) and alcoholic beverages (+18% in men, +8% in women), animal products including processed meat (+31% in both genders) and fresh meat (+34% in men, +32% in women), and milk (+43% in both genders). Participants in clusters 2 and 3 showed overall comparable differences in dietary patterns to those of cluster 1 with respect to RCOP. It is noteworthy that occasional consumers of organic foods (OCOP, cluster 4) showed profiles intermediate between never-consumers and RCOP. Differences in energy intake and in other macronutrients across clusters were small (Table 5). Compared to RCOP, participants in cluster 1 had lower intakes of polyunsaturated fatty acids (−12% in both genders), especially n-3 PUFA (−19% in men, −20% in women), fibers (−27% in men, −28% in women), beta-carotene (−28% in men, −33% in women), folic acid (−15% in men, −17% in women), vitamin C (−10% in men, −13% in women) and iron (−20% in men, −18% in women). They were also characterized by a higher intake of alcohol (+17% in men, +11% in women) and cholesterol (+12% in men, +10% in women). As was the case for food consumption, differences in nutrient intakes of cluster 2 and cluster 3 participants were generally comparable to those of cluster 1 with respect to RCOP, while OCOP (cluster 4) showed intermediate profiles. Association between BMI Categories and Profiles of Organic Product Consumers The associations between overweight/obesity and profiles of organic food consumers are presented in Table 6. In the unadjusted model, among men and women, participants in the RCOP (cluster 5) group had a significantly lower probability of being overweight or obese than those who did not eat organic food (cluster 1). OCOP displayed intermediate figures. Compared with cluster 1, persons who avoided organic products (cluster 2) were more likely to be overweight (in both genders) or obese (in women only), and those who did not buy any organic food due to the high cost (cluster 3) were more likely to be obese. After adjustment for age, physical activity, education, smoking, energy intake, use of a restrictive diet and the PNNS dietary adequacy score, RCOP in cluster 5 retained a markedly lower probability of being overweight or obese: −36% and −62% in men and −42% and −48% in women, respectively. For OCOP (cluster 4), women showed a 12% and 13% lower probability of being overweight or obese, respectively, whereas men no longer showed a reduced risk after adjustment. Women who avoided or did not buy organic food because of the high cost showed a greater probability of being obese than those in cluster 1. Discussion The present study is the first to describe, for a large cohort, the socio-demographic characteristics, lifestyle and dietary patterns of adult consumers of organic foods compared to non-consumers. We identified 5 typical clusters of consumers based on their attitude towards organic foods, including two that comprised occasional and regular organic food consumers.
Compared to the 3 clusters of non-consumers, organic food consumers showed progressively better adherence to the recommended food pattern and nutrient intakes and had a lower probability of being overweight or obese, after accounting for confounding factors. Profiles and Attitudes of Organic Product Consumers Based on the frequency of organic product consumption, three clusters grouped together non-consumers of organic foods, mainly because they were either uninterested in these products, deliberately avoided them or considered them too expensive. In contrast, two other clusters grouped occasional and regular consumers of organic products. The present findings support previous research showing that, in France, most organic product purchases are occasional; indeed, only 6% of the general population reported daily organic product purchases [13]. In the present survey, the vast majority of organic product consumers (OCOP and RCOP) perceived organic products as being better for health and the environment. This is fairly consistent with three previous small-scale surveys [12,13,16] and also with a Canadian study indicating that 89% of organic food consumers reported nutritional and health motivations [38]. Regarding demographic and socio-economic characteristics, we found that a majority of organic product consumers of both genders had a higher education level than the non-consumer clusters, while overall differences in income between the clusters of non-consumers and consumers were not striking. However, it is noteworthy that participants in cluster 3, i.e. those who declared that organic food is too expensive, had lower incomes and education levels. In a previous evaluation of organic food consumption patterns in France [13], the authors concluded that the demographic profile of the organic buyer was not related to income, age or family size, but rather to educational level. In line with our observations, Australian organic food consumers did not show a greater income but had a higher education level [17]. In contrast, in Belgium, organic consumption was positively associated with age and income, while a negative association with education was observed [18]. Food Consumption across Clusters of Organic Product Consumers We found an overall similarity in daily food consumption in the three clusters of non-consumers. In contrast, in both genders, we observed stepwise changes in food group consumption among the clusters of organic product consumers, with marked deviations in the regular consumer cluster (RCOP), including increased consumption of whole grains, vegetables, fruit, soup, dried fruit, legumes, fruit and vegetable juices, sweet products, vegetable oils and nuts. This is in line with a previous observation indicating higher vegetable consumption by organic consumers compared to conventional consumers in Belgium [18]. In addition, lower consumption of meat and processed meat, milk, dairy products, soda, alcoholic beverages, sweets and fat products, added fat and fast foods was observed in the organic food consumer clusters. Moreover, the mPNNS-GS, a score reflecting adequacy with dietary guidelines, gradually increased from the non-consumer clusters to the OCOP and RCOP clusters. It is noteworthy that consumption of some food groups, such as refined cereals, fish and seafood, cheese and milky desserts, potatoes and tubers, and biscuits, did not differ between non-consumers and consumers of organic foods.
The observed plant-food-based dietary pattern of organic food consumers, in addition to being closer to the recommended healthy dietary pattern [33,39,40], may also better comply with the sustainable diet concept by minimizing environmental impact [1,41]. Nutrient Intake across Organic Product Consumer Clusters Daily intakes of energy, total fats, mono-unsaturated fatty acids, phosphorus and calcium did not markedly differ across clusters. In contrast, and consistent with the data on food consumption, higher daily intakes by RCOP participants of both genders were found for most minerals and fatty acids, some vitamins and fiber, whereas lower daily intakes of proteins, saturated fatty acids, sodium, vitamin A (retinol), alcohol and cholesterol were found compared to their counterparts. In a study employing simulation analysis for nutrient intake estimation, a higher intake of beta-carotene was found in organic consumers in Belgium [18]. In most cases herein, it was striking to observe that RCOP better fit French nutritional guidelines [39,40] than the other groups. This is consistent with our previous finding that the easiest way to attain all nutritional recommendations is to consume more (unrefined) plant foods and less animal, fat- and sugar-rich foods [42]. Organic Product Consumption and Overweight/Obesity After accounting for confounding factors, we found that the probability of being overweight or obese was significantly lower in male and female RCOP than in the 3 non-consumer clusters. A significantly reduced probability, but of much smaller magnitude, was also found in female OCOP. This was probably related to their healthier food pattern, as discussed above. Nevertheless, after further adjustment for the mPNNS-GS score, reflecting the level of adherence to nutritional guidelines, such associations remained. This raises the question of possibly unexplored characteristics also associated with consumption of organic food. Previous research reported markedly lower contamination of organic foods by pesticide residues compared to conventional foods [8][9][10][11][43]. Since several studies have reported an association between pesticide exposure or residues in the body and obesity and type 2 diabetes [9,[43][44][45][46], the possibility of a potential role of organic food in preventing excessive adiposity because of its lower content of pesticide residues should be tested in further studies. Our study had major strengths, including a web-based platform allowing assessment of accurate dietary data and other types of data [23][24][25], and the large sample size of the Nutrinet-Santé cohort. The use of clustering to separate individuals into mutually exclusive groups can provide a highly accurate description. However, some limitations of the present study should be noted. First, only the frequency, but not the quantity, of actual organic food consumption was available. Secondly, the nutrient intakes were calculated using a single food composition database essentially concerning non-organic products. This likely led to underestimated nutrient intakes among organic food consumers, given the potentially different nutritional composition of some items [9,11,43,47,48]. Finally, our findings must be interpreted with caution, since most of the NutriNet-Santé participants exhibited a specific socio-economic profile.
Indeed, as compared with national estimates [49], our sample included proportionally more women (77.2% versus 52%) and more individuals with a high educational level (64.5% versus 24.3% with post-secondary education). This is consistent with existing knowledge regarding the characteristics of participants in volunteer-based studies focusing on nutrition [50]. In conclusion, the present survey of this very large cohort indicated that consumers of organic foods have a higher level of education and a dietary pattern that better fits food-based recommendations, recommended micronutrient/fiber intakes and the sustainable diet concept; moreover, they are less often overweight or obese compared to non-consumers. From a public health standpoint, better knowledge of the characteristics of consumers and non-consumers of organic products is of great importance in promoting behavior aimed at improving the sustainability of the diet. Finally, these findings provide important new insights into organic food consumer profiles, which will be useful for further testing the relationship between organic food intake and health in surveys based on a prospective design such as the Nutrinet-Santé Study. Table S1 15-point PNNS-GS (Programme National Nutrition Santé-Guidelines Score) computation: definition of the 13 components reflecting PNNS recommendations (diet and physical activity), cut-offs and scoring. (DOCX) Table S3 Description of opinions and attitudes (prices, taste, nutritional quality, environmental impact, health impact and general opinion) about organic products across the 5 clusters defined according to consumption of organic products, NutriNet-Santé study (N = 54,311). Two clusters were composed of consumers of organic products (COP), including regular consumers (cluster 5: RCOP) and occasional consumers (cluster 4: OCOP). Three other clusters grouped individuals who generally did not consume organic products because of the high cost (cluster 3), because they avoided such products (cluster 2) or because they were not interested in organic products. (DOCX)
Neural Machine Translation without Embeddings Many NLP models operate over sequences of subword tokens produced by hand-crafted tokenization rules and heuristic subword induction algorithms. A simple universal alternative is to represent every computerized text as a sequence of bytes via UTF-8, obviating the need for an embedding layer since there are fewer token types (256) than dimensions. Surprisingly, replacing the ubiquitous embedding layer with one-hot representations of each byte does not hurt performance; experiments on byte-to-byte machine translation from English to 10 different languages show a consistent improvement in BLEU, rivaling character-level and even standard subword-level models. A deeper investigation reveals that the combination of embeddingless models with decoder-input dropout amounts to token dropout, which benefits byte-to-byte models in particular. Introduction Neural NLP models often operate on the subword level, which requires language-specific tokenizers (Koehn et al., 2007; Adler and Elhadad, 2006) and subword induction algorithms, such as BPE (Sennrich et al., 2016; Kudo, 2018). Instead, working at the byte level by representing each character as a variable number of Unicode (UTF-8) bytes does not require any form of preprocessing, allowing the model to read and predict every computerized text using a single vocabulary of 256 types. While previous work found that byte-level models tend to underperform models based on subword tokens (Wang et al., 2019), byte-based models exhibit an interesting property: their vocabulary is smaller than the number of latent dimensions (256 < d). In this work, we demonstrate that this property allows us to remove the input and output embedding layers from byte-to-byte translation models and, in doing so, improve the models' performance consistently. We replace the dense trainable embedding matrix with a fixed one-hot encoding of the vocabulary as the first and last layers of a standard transformer model. Machine translation experiments on 10 language pairs show that byte-to-byte models without an embedding layer achieve higher BLEU scores than byte-based models with parameterized embeddings (+0.5 on average), thus closing the performance gap with subword and character models. We observe this result consistently throughout a wide variety of target languages and writing systems. The fact that removing parameters improves performance is counter-intuitive, especially given recent trends in machine learning that advocate for increasingly larger networks. We further investigate why embeddingless models yield better results and find implicit token dropout (commonly referred to as "word dropout") as the main source of that boost. While prior work shows that randomly masking tokens from the decoder input can improve the performance of language generation models (Bowman et al., 2016), we find that this effect is amplified when operating at the byte level. Overall, our results suggest that, even without additional parameters, byte-based models can compete with and potentially outperform subword models, but that they may require alternative optimization techniques to achieve that goal. Byte Tokenization Modern software typically represents text using Unicode strings (UTF-8), which allows one to encode virtually any writing system using a variable number of bytes per token; English characters are typically represented by a single byte, with other writing systems taking two (e.g. Arabic), three (e.g. Chinese), or four (e.g.
emojis) bytes per character. By treating each byte as a separate token, we can encode any natural language text using a single universal vocabulary of only 256 token types. Moreover, byte tokenization obviates the need for any heuristic preprocessing, such as splitting spaces, punctuation, and contractions. Figure 1 illustrates subword, character, and byte tokenization. Figure 1: Subword (BPE), character, and byte tokens of the string "Будь здоров." UTF-8 uses two bytes to represent each character in the Cyrillic script, making the byte sequence longer than the number of characters. Embeddingless Model Our model is based on the original transformer encoder-decoder (Vaswani et al., 2017) with one main difference: we eliminate the input and output token embedding layers. These layers typically use a common parameter matrix E ∈ R |V |×d that contains a d-dimensional embedding vector for each source and target vocabulary item in V. Instead, we use a fixed one-hot representation of our byte vocabulary. For instance, the character "R" could be represented as a vector with 1 at dimension 82 and 0 elsewhere. Since it is standard practice to use representations of more than 256 dimensions, every possible byte can be represented by such one-hot vectors. To predict the next token for a decoder input of n tokens, we take the output of the last transformer decoder layer, Y ∈ R n×d , and apply a softmax across each vector's dimensions. Formal expressions of the input and output of our model are detailed in Figure 2. Figure 2: The main differences between the original encoder-decoder model and the new embeddingless model. X ∈ R n×|V | is the one-hot representation of n input tokens (bytes); P n are the positional embeddings up to length n. Omitting the embedding layer reduces the number of parameters by O(|V | · d). We do add a total of 3 parameters to scale the encoder and decoder's (one-hot) inputs and the decoder's output (before the softmax). We initialize all three with √ d, akin to the constant scaling factor typically applied to the input embedding layer in transformers. Despite the reduction in model size, memory consumption increases when working on longer sequences, since the space complexity of transformers is O(n 2 + n · d). In our case, d (512) is typically larger than n (see Table 1), entailing an increase in memory consumption that is roughly linear in the sequence length n, and a similar decrease in processing speed when compared to character and subword models. In addition to replacing the embedding layers, we also remove the dropout layers on the encoder input and decoder output, since zeroing out entries of one-hot vectors is equivalent to randomly masking out input tokens or deleting significant parts of the model's predicted distribution. The dropout on the decoder input (the prefix of the target fed with teacher forcing) remains intact at this point and is applied throughout our main experiments. Further analysis shows that decoder input dropout is in fact a significant source of performance gains, which we further investigate in Section 6. Experiments We train byte-tokenized embeddingless models for machine translation and compare them to standard byte, character, and subword-based models on a diverse set of languages. We adopt a standard experimental setup that was designed and tuned for the subword baseline and limits our hyperparameter tuning to dropout probabilities.
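The snippet below sketches the three ingredients just described: UTF-8 byte tokenization, fixed one-hot "embeddings" padded to the model dimension, and token-level dropout on the decoder input. It is a PyTorch illustration written for this summary rather than the authors' Fairseq code, and the exact placement of the scaling factor, dropout and output softmax in the real implementation may differ.

```python
# Illustrative PyTorch sketch (not the authors' Fairseq implementation).
import math
import torch
import torch.nn.functional as F

VOCAB_SIZE = 256   # one token type per possible byte value
D_MODEL = 512      # hidden dimension used in the paper's experiments

def byte_tokenize(text: str) -> torch.Tensor:
    """Map a string to its UTF-8 byte IDs (0..255); no tokenizer or BPE needed."""
    return torch.tensor(list(text.encode("utf-8")), dtype=torch.long)

def one_hot_inputs(byte_ids: torch.Tensor, scale: float = math.sqrt(D_MODEL)) -> torch.Tensor:
    """Fixed one-hot representation in place of a trainable embedding matrix,
    zero-padded from 256 up to D_MODEL and multiplied by a scaling factor
    (initialized to sqrt(d), as described above)."""
    x = F.one_hot(byte_ids, num_classes=VOCAB_SIZE).float()   # (n, 256)
    x = F.pad(x, (0, D_MODEL - VOCAB_SIZE))                   # (n, 512)
    return x * scale

def token_dropout(x: torch.Tensor, p: float = 0.2) -> torch.Tensor:
    """Dropout applied to one-hot decoder inputs zeroes whole rows, i.e. it
    masks entire tokens rather than individual embedding dimensions."""
    keep = (torch.rand(x.size(0), 1) > p).float()
    return x * keep

def next_byte_distribution(decoder_out: torch.Tensor) -> torch.Tensor:
    """Output side without an embedding: softmax over each d-dimensional
    decoder output vector; the probability of byte b is read off dimension b."""
    return torch.softmax(decoder_out, dim=-1)[..., :VOCAB_SIZE]

ids = byte_tokenize("Будь здоров.")       # Cyrillic characters take two bytes each
print(len("Будь здоров."), ids.numel())   # 12 characters -> 22 bytes
```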
Datasets We use the translation datasets of the IWSLT 2014 evaluation campaign (Cettolo et al., 2014), selecting 10 additional languages with varying characteristics (see Table 1). For each one, we train translation models from English to the target language (the original direction of translation) and also in the opposite direction for completeness. We clean the training data for every language pair by first removing sentences longer than 800 bytes, and then the sentences with the largest byte-length ratio between source and target, such that we remove a total of 5% of the training examples. Baselines In addition to the byte-based embeddingless transformer, we train standard transformer encoder-decoder models as baselines, each one using a different tokenization scheme: subword, character, and byte. For subword tokenization, we apply the Moses tokenizer (Koehn et al., 2007) followed by BPE (Sennrich et al., 2016). Both character and byte tokenizations apply no additional preprocessing at all and include whitespaces as valid tokens. Hyperparameters The code for our model and baselines is based on the Fairseq (Ott et al., 2019) implementation of the transformer encoder-decoder model. During preprocessing we use 10,000 merging steps when building the BPE vocabulary for every language pair. The vocabularies and embeddings are always shared among source and target languages. In every transformer we use 6 encoder and decoder layers, 4 attention heads, a hidden dimension of 512, and a feed-forward dimension of 1024. We optimize with Adam (Kingma and Ba, 2014), using the inverse square root learning rate scheduler with 4000 warmup steps and a peak learning rate of 5 × 10 −4 , label smoothing of 0.1, and weight decay of 1 × 10 −4 . We train each model for 50k steps and average the top 5 checkpoints according to the validation loss. We tune dropout (0.2 or 0.3) on the validation set. We set the batch size according to a maximum of 64,000 bytes per batch, which controls for the number of batches per epoch across different tokenization methods. Evaluation We evaluate our models using SacreBLEU, case-sensitive, with the 13a tokenizer for all languages except Chinese (ZH tokenizer) and Japanese (MeCab tokenizer). We use the raw text as the reference for all of our experiments, instead of using the default tokenized-detokenized version, which normalizes the text and gives an artificial advantage to text processed with Moses. Results Table 2 shows our experiments' results. Every row reports the test BLEU scores of our model and the three baselines trained on a different language pair. We discuss the implications of these results below. Are embeddings essential? The results show that it is indeed possible to train embeddingless machine translation models that perform competitively. The performance gaps between models with different tokenization schemes are relatively small. Except for Vietnamese, the difference between the embeddingless model and the best embedding-based model is always under 1 BLEU. In the most controlled setting, where we compare byte-based models with and without learnable embeddings, models without embeddings consistently achieve higher BLEU scores in 19 of 20 cases (and an equal score for ru-en), with a boost of about 0.5 BLEU on average. When compared to models based on character embeddings, the embeddingless byte-to-byte approach yields higher BLEU scores in 17 out of 20 cases, though the average difference is quite small in practice (0.3 BLEU). Is subword tokenization superior to bytes or characters?
Previous work in machine translation shows that subword models consistently outperform character or byte-based models (Gupta et al., 2019; Wang et al., 2019; Gao et al., 2020). However, our results indicate that this is not necessarily the case. When translating from English to a foreign language, the original direction of the IWSLT dataset, embeddingless byte-to-byte models achieve performance that is equal to or better than that of subword embedding models in 8 out of 10 cases. We observe a different trend when translating into English, where subword models surpass other models for every source language; the fact that Moses is a particularly good tokenizer for English (and less so for other languages) is perhaps related to this phenomenon. Whereas prior work proposed closing the performance gap by adding layers to the basic architecture, under the assumption that character-based models lack capacity or expressiveness, our results show that actually removing a component from the model can improve performance under certain conditions. It is possible that character and byte-based transformer models encounter an optimization issue rather than one of capacity or expressivity. Analysis Why does removing the embedding matrix improve the performance of byte-based models? As mentioned in Section 3, the embeddingless models do not use dropout on the encoder input and decoder output, but do apply dropout on the decoder input while training. Since the embeddingless decoder's inputs are fixed one-hot vectors, using dropout implicitly drops out complete tokens. In prior work, token dropout ("word dropout") has been shown to have a consistently positive effect (Bowman et al., 2016). We therefore rerun our experiments while controlling for token dropout (p = 0.2) to determine its effect on our results. Table 3 shows that decoder-side token dropout improves the performance of all models, with a larger impact on byte-based models and embeddingless models in particular. This effect is largely consistent, with only 7 out of 160 cases in which token dropout decreased performance on the validation set. We suspect that dropping out target tokens softens the effects of exposure bias by injecting noise into the ground-truth prefix. Given the benefits of token dropout on the baseline models, we re-evaluate the results from Section 5 while allowing for token dropout as a potential hyperparameter. Table 4 shows that, when translating from the original English text to a foreign language, the different models perform roughly on par, with no single tokenization method dominating the others. Furthermore, byte-level models with and without embeddings achieve almost identical results. In contrast, when translating in the opposite direction, subword models consistently outperform the other methods with an average gap of 0.76 BLEU from the next best model. Also, removing the embeddings from byte-based models decreases performance by an average of 0.45 BLEU when generating English. This discrepancy might stem from artifacts of reverse translation, or perhaps from the English-centric nature of subword tokenization, which is based on Moses preprocessing and BPE. Overall, these results suggest that despite the greater number of parameters in subword models, character and byte models can perform competitively, but may require slightly different optimization techniques to do so. Related Work There is prior work on replacing language-specific tokenizers with more universal tokenization approaches.
Schütze (2017) shows how character n-gram embeddings can be effectively trained by segmenting text using a stochastic process. SentencePiece (Kudo and Richardson, 2018) tokenizes raw Unicode strings into subwords using BPE (Sennrich et al., 2016) or a unigram LM (Kudo, 2018). Byte BPE (Wang et al., 2019) extends SentencePiece to operate at the byte level. While this approach is indeed more language-agnostic than heuristic tokenizers, it does suffer from performance degradation when no pre-tokenization (e.g. splitting by whitespace) is applied. Moreover, the assumption that subword units must be contiguous segments does not hold for languages with non-concatenative morphology such as Arabic and Hebrew. Character and byte-based language models (Lee et al., 2017) treat the raw text as a sequence of tokens (characters or bytes) and do not require any form of preprocessing or word tokenization, and Choe et al. (2019) even demonstrate that byte-based language models can perform comparably to word-based language models on the billion-word benchmark (Chelba et al., 2013). Although earlier results on LSTM-based machine translation models show that character tokenization can outperform subword tokenization (Cherry et al., 2018), recent literature shows that the same does not hold for transformers (Gupta et al., 2019; Wang et al., 2019; Gao et al., 2020). To narrow the gap, recent work suggests using deeper models (Gupta et al., 2019) or specialized architectures (Gao et al., 2020). Our work deviates from this trend by removing layers to improve the model. This observation contests the leading hypothesis in the existing literature, namely that the performance gap results from reduced model capacity, and suggests that the problem may be one of optimization. Conclusions This work challenges two key assumptions in neural machine translation models: the necessity of embedding layers, and the superiority of subword tokenization. Experiments on 10 different languages show that, despite their ubiquitous usage, competitive models can be trained without any embeddings by treating text as a sequence of bytes. Our investigation suggests that different tokenization methods may require revisiting the standard optimization techniques used with transformers, which are primarily geared towards sequences of English subwords.
Reaching Thousands of Children in Low Income Communities With High-Quality ECED Services: A Journey of Perseverance and Creativity This paper describes a creative and bold way in which a local NGO addressed increasing access to and quality of ECED services in Colombia. This case study on Fundación Carulla's aeioTU early childhood innovation in Colombia contributes to understanding the possibilities for the private sector to spark innovation, and the importance of an open and collaborative strategy in contributing to the ECED sector at large. The critical role of monitoring and evaluation in the provision of services is highlighted; this guided key decisions in different growth phases. After a decade of work, Fundación Carulla-aeioTU has shown the capacity to effectively support children's development in low-income settings through their participation in quality programming. Furthermore, this case study also describes how the organization, having proven its capacity to provide high-quality services directly to children, decided to innovate and bring about different solutions to reach and support other stakeholders in the early childhood development ecosystem. INTRODUCTION For decades, the infrastructure of early childhood education and development (ECED) services for low-income children in Colombia had been inadequate and insufficient, mirroring profound social and economic inequities across the country. In 2008, after research and consultations with various institutions including small ECED centers and government agencies, Fundación Carulla (FC) decided to create a new social enterprise, called aeioTU, to provide high-quality ECED services to children through direct services as well as a social franchise model. In addition, FC identified a need to develop the early childhood education and development cluster across the country, in order to expand and strengthen high-quality ECED services for low-income children. The original aeioTU business plan included the creation of 19 ECED centers in high-income communities that, through their profits, would subsidize 4 aeioTU centers in low-income communities. It also included 65 self-sustaining aeioTU social franchise centers. Ultimately, this would reach an estimated 33,608 children and encompass 1,364 teachers. During the first 5 years of implementation, aeioTU defined its operational model and piloted its franchise-based business model, driving for sustainability and quality inclusive services and evaluating the program's impacts. After a decade of building programs, adjusting programming and priorities and investing in building a cluster for ECED advocacy, aeioTU has reached 228,667 children in 1,851 ECED centers, working with 17,238 teachers. aeioTU is on its way to becoming an impactful, sustainable social enterprise reaching programs and families in Colombia, other parts of Latin America and even Africa. This paper provides a detailed description of how this ECED program was built and scaled up in Colombia, and of how it engaged with a variety of national and international stakeholders to promote country-level change in ECED services. The paper illustrates the various stakeholders that can be engaged in ECED, as well as highlights alternative long-term strategies for sustaining ECED growth in developing contexts. It describes the relationship between a private foundation and local and national governments, the adaptation of quality programs to local contexts, and the use of measurement and evaluation for improvement, adaptation and scaling.
THE LANDSCAPE OF EARLY CHILDHOOD EDUCATION IN COLOMBIA In 2008, the critical importance of investing in ECED had gained ground globally and across diverse sectors in Colombia (1). The national government, a few large city governments, NGOs and universities had been working on advancing early childhood development, but significant gains had not been made (1). There were substantial gaps in the quantity and quality of ECED services across the country. Of an estimated 4 million children in the country, 35% experienced multidimensional poverty, with only 33% having access to some type of ECED program (2,3). There were no national policies on early childhood programming or services. It was not until 2006 that early childhood education was established as a fundamental right (4). The national government was funding two large programs: one that provided children with a breakfast, and the Hogares Comunitarios program, a home-based care program (5). The latter was the only ECED program evaluated in the country and showed weak impacts. According to the National Planning Department's (DNP) evaluation of the program in 2009 (6), despite positive results in terms of hygiene conditions and psychosocial development, there was no evidence of cognitive impacts on children and some indication of negative impacts on health (6). A broad overview of the ECED sector in Colombia carried out by FC identified a highly fractured sector with low demand for early childhood services and low capacity to pay for them. This overview further showed that services were relatively small-scale, evidenced little innovation and had no monitoring or evaluation. More specifically, there were no agreed-upon criteria for defining quality in early childhood programs. The education and health sectors lacked strong vertical and horizontal integration (1), despite a history of integration of early services exemplified by the creation of the Instituto Colombiano de Bienestar Familiar (ICBF) in the 1970s (1). Funding for services was also unstable (1). The conditions for adequate provision of services were weak, with centers and classrooms lacking proper infrastructure, didactic materials, and early childhood qualifications for teachers, and with an absence of standards, monitoring and evaluation. Despite the evidence on the importance of qualified and trained teachers in early education, there was no emphasis on qualifications or professional development of early childhood teachers at the time. According to a 2015 UNESCO report, Colombia was not producing enough professionals to meet the national ECED strategy (De Cero a Siempre), with only 7,500 professionals graduating from relevant programs annually and 74,000 needed (7). In ECED programs, common practice was to hire teachers on short-term (10-11 month) contracts, contributing to job insecurity and high turnover. Some cities did not have professional development budgets (8). The Hogares Comunitarios program caring for young children paid less than the minimum salary to its estimated 800,000 community mothers, each caring for 10-12 multi-age children in their own homes (8). In 2008, Colombia was the second largest country by population in South America (42.25 million), with a diverse geography and distinct regional cultures, demanding innovation in infrastructure and capacity to reach remote communities.
The rich and varied cultural heritage can be supportive of children's educational experiences, but programming at scale was challenging given the differences between urban and rural contexts and the varying levels of human capital across the regions. Despite being the longest-standing democracy in the region, with decentralized governments and a tradition of free enterprise and market solutions, the country continued to experience internal violence and migration (1). A growing migration from neighboring Venezuela was becoming tangible. In this landscape, FC met with the National Secretary of Education of Colombia in October 2007 to understand in which education sector FC could have the greatest impact. The Secretary recommended supporting the development of a national ECED strategy that would build on the growing evidence on the importance of early interventions for children's development and school performance. There was as yet no national ECED policy. The board of FC decided to commit its efforts to promoting system-wide change for children ages zero through five through investment in innovative, bold, high-quality and sustainable programming. This plan would include direct provision, as well as public and private partnerships. A SOCIAL ENTREPRENEURSHIP APPROACH TO EARLY CHILDHOOD EDUCATION FC developed a business plan to create "model" ECED centers under the brand aeioTU. These would then serve as prototypes for other providers to replicate as franchises. Franchises would benefit from access to aeioTU's start-up financing, facilitated monitoring of early child development, and linkages to professional development and cross-center collaboration. The idea was that model centers and franchises would generate economies of scale in negotiations with suppliers and, most importantly, stimulate system-wide change. FC sought to become a driving ECED actor contributing to high-impact, sustainable change through its vision and a strong business model. FC was guided by the following learning questions: • Can we create a high-quality ECED model center at low cost despite the challenging context in Colombia? • Can we scale the ECED model center to reach thousands of children? • Can our example lead a transformation of the ECED sector in Colombia, mobilizing providers to advance high-quality ECED services to low-income children across the country? INSPIRATION FROM THE REGGIO EMILIA APPROACH Of the many ECED experiences around the world and within Colombia, FC was most inspired by the Reggio Emilia approach (9). FC had a vision of contributing to the transformation of Colombia through the development of and social commitment to its children, to move the country toward greater levels of child development, social mobility, environmental awareness, democratic values, social inclusion, peace and innovation. The history of the Reggio Emilia approach resonated with FC because it originated in Italy as the country transitioned from a period of violence after World War II. Reggio Emilia's values of hope in the ability to rebuild a community; collaboration with local families and communities; creation of social capital; empowerment of children, educators and families; recognition of and value in one's identity; and viewing children as the drivers of the educational experience were all critical to shaping FC's own vision. All the more so considering the context in Colombia, which at the time continued to experience internal violence and related internal migration (10).
PROGRAMMATIC ELEMENTS

FC launched aeioTU with the understanding that the relationships and interactions children had with themselves and with adults, and the relationships among adults in the ECED centers, were key to high-quality ECED services. The aeioTU centers would promote high quality by focusing on 6 critical areas: a comprehensive combination of nutrition, health and education objectives, clear pedagogical objectives and a curriculum with an emphasis on continuity across the early years, continuous professional development, adequate physical space and materials, family participation, transition to formal schooling, and strong center management and planning for sustainability (9). While aeioTU predated the release of the nurturing care framework (11), it similarly takes a life-course perspective that encompasses good health, adequate nutrition, responsive caregiving, and opportunities for early learning. The nutritional component includes child nutritional monitoring and providing 70% of children's nutritional intake needs. The program also includes engaging parents on nurturing care and positive discipline. aeioTU is an inclusive program and has strategies around inclusion that include family engagement, teacher training, parent training and developmental follow-ups (this component has not been evaluated independently). A comprehensive description of the model is included in Nores et al. (9).

aeioTU's goal was to scale and reach as many children as possible to provide a strong start for children across all developmental domains, including socio-emotional development. The initial business plan approved the creation of 19 for-profit aeioTU centers, 65 sustainable centers where families contributed to operational costs, and 4 fully subsidized centers in low-income communities. The latter were to be funded from outside sources and by cross-subsidies from the for-profit centers. This social enterprise model was to generate high-quality ECED services for 33,608 children in a period of 10 years, via 84 centers employing 1,364 teachers. Home-visitation components were developed later under aeioTU en casa (aeioTU at home).

DIRECT OPERATION OF AEIOTU ECED CENTERS

In its first year, the aeioTU service center was created with a matrix organization that included pedagogy, partnerships, finance, administration and communications teams to support the start-up and operation of three new aeioTU centers. This first phase included the development of standards, curriculum and guidelines for the educational experience. In January 2009, aeioTU opened its first for-profit center in Bogotá, and two fully subsidized centers in low-income communities in Barranquilla and Bogotá. Each center included a center coordinator, a team of teachers, a psychologist, an artist, a nutritionist, an administrator and kitchen staff. The centers' teams focused on engagement with the local community and context (key social actors, families, and the environment), and the process toward opening the centers included community consultations. Teacher-to-student ratios were initially 1:12, with at least 3 m² of physical space per child in the classroom. The centers served children from birth through age five, with class sizes increasing by age.

By 2011, the centers had been well-received by the local communities and the ECED sector at large. Centers were at capacity or had waiting lists. With media coverage, recognition for the model grew. City governments, companies, and grant-making foundations started to show interest in the model.
The President of Colombia and the ministries of education, health, and social services visited the centers and remarked on the innovations in pedagogy, staffing, and materials. President Alvaro Uribe visited the first aeioTU center in 2009 when the center was officially inaugurated (12), and President Juan Manuel Santos held his nation-wide planning meeting on ECED in an aeioTU center in Santa Marta in 2011 (13). After opening the initial centers, the initiative started to evolve to focus on continuous improvement of the learning experience, standardization, quality certification, and scalability. A longitudinal randomized controlled study was initiated in two of the newer centers (10).

The growth of aeioTU centers over the next decade by funding type is portrayed in Figure 1. aeioTU opened an average of 3 centers per year from 2008 through 2016 in 15 cities. These included large urban environments with palpable urban violence, as well as medium-sized cities and small rural communities. The growth was motivated by the idea of learning and piloting the aeioTU ECED learning experience in a variety of communities, as well as by the demand for the aeioTU model emerging from communities, companies, foundations and local governments. This growth of subsidized centers deviated from the original business plan, but was funded by dozens of organizations willing to commit to this national experiment, which aeioTU engaged with as partners. Three key areas of the operation were evaluated and improved upon during this 8-year period of growth, along with efforts to optimize the per-child cost and improve the quality of the learning experience:

• The for-profit aeioTU centers: a first center opened as planned, but it took some time to reach the point where no subsidies were needed. This delayed the opening of additional for-profit centers. Two of these centers exist today. As described by a mother at this first center: "It has been marvelous for us as a family to discover the social work that the Foundation does by providing the same educational model to low-income families. We are aware of how fortunate we are to have the means to provide our girls with the best education; and are happy to participate somehow in providing the same model to children with less opportunities. That is precisely the principle of solidarity that we want to teach our children. Like aeioTU, we know education is key to reduce the inequalities that exist in our society" (14).
• The sustainable aeioTU franchise: franchises were never initiated because the cost of operating the centers was not low enough for sustainability in this model, under which middle-income families would pay tuition. The aeioTU "company center" was instead created, with companies financially supporting a center for their employees' children. Two of these centers exist to date.
• The subsidized centers: direct provision of ECED services in low-income communities had such a strong response and faced such high demand that aeioTU grew significantly more than originally planned. Most of the funding came from the public sector and philanthropy, rather than from the for-profit centers described above. For-profit centers have, however, funded 577 children. The government and philanthropy have funded 22,994 children. In low-income communities, where demand for slots was significant, centers of 300 children proved to be optimal in terms of per-child cost (larger than typical centers of 60-100 children in Colombia).
In order to lower the per-child cost while preserving quality, aeioTU innovated in the use of space, the weekly schedule, the type of buildings, classroom equipment and materials. This also included shifting in some locations to aeioTU en casa (aeioTU at home) to support pregnant mothers, infants and toddlers. Nutritional services were eventually shifted to a specialized organization that worked in partnership with the centers, to ensure reasonable pricing and quality at a larger scale. This allowed leadership teams to focus on the pedagogical components. Appendix 1 displays the innovations and changes made throughout the decade toward sustainability.

A central component of aeioTU has been an emphasis on evaluation and continuous quality improvement. Between 2008 and 2013, aeioTU underwent two external evaluations that drove improvements: (a) a longitudinal study conducted by Rutgers University and Universidad de los Andes (9, 15), and (b) visits from Reggio Children. In particular, the longitudinal study showed positive cognitive and health impacts (as measured by anthropometric indicators) on children early on (9), which were sustained in the medium term (Bernal et al., unpublished). Programmatic improvements were a direct response to lessons from these evaluations, including improvements in:

• understanding and modifying the day schedule for children to reduce time spent in transitions and strengthen the education component,
• materials and use of space,
• the use of documentation and research by teachers,
• the role of the teachers within the classroom,
• the relationship with the community, including the social and physical environment,
• the creation of a system of quarterly indicators to follow children's development, used in meetings with the family,
• the development of ConecTU, a tool to systematize and generate reports with aggregated child information,
• the strengthening of professional development (PD) for aeioTU teachers and families, and PD supports,
• shifting services for babies and toddlers to integrate services starting at birth.

Claudia Giudicci, Reggio Children president, wrote in 2016 to aeioTU: "I [renew] my gratitude and that of Reggio Children for the extraordinary work that you are doing in Colombia with aeioTU to provide children with a new future... your efforts promoting the rights of children is notable" (16). Ellen Frede, as co-director of NIEER, wrote: "I was fortunate to visit the centers, meet the teachers, center coordinators, parents and educational leaders to revise and comment on the materials and procedures that are included in the highly complex but manageable system of the Curriculum Cartography. This system and materials are a great contribution to early learning... it is a resource to the world" (17).

The challenges of working in regions with local conflict proved to be unique. aeioTU innovated using art, partnering with the family and the community in order to preserve the centers as safe and peaceful spaces within the neighborhoods, while eliminating the use of security guards. There has never been violence inside the centers, and the few times items were stolen from a center, the families themselves returned them. Having independent and empowered female teachers created some discomfort in one community, which was ascribed to a patriarchal system. Consequently, aeioTU included diagnostic tools and PD to support teachers in engaging with such complex situations. Since its creation, aeioTU has used the Balanced Scorecard system (BSC) to manage its work.
The BSC includes a strategic map with objectives, indicators and annual initiatives to achieve intended goals. The management team reviews the results of indicators and key initiatives monthly, the Board does so quarterly, and these are revised yearly. Appendix 2 has a copy of the strategic map used, and Appendix 3 includes a copy of the theory of change under which the BSC operates. It is important to note that the decentralized government model of Colombia has meant that aeioTU has had to work with multiple government stakeholders, including the national government and 13 provincial governments. Yet, due to free enterprise policies in Colombia, aeioTU was able to create the cross-subsidy model, operate aeioTU centers with public and private funding, and leverage multiple sources of funding across all its centers, including resources from other NGOs. According to Maria Clemencia Rodriguez de Santos, former first lady of Colombia from 2010 to 2018, "The [Cero a Siempre] policy, established by the administration of President Juan Manuel Santos and from which I was the spokesperson, was made possible by the joint commitment of the children's families, the public and the private sector, who allowed the comprehensive care for early childhood to become a reality. In this endeavor, aeioTU was a committed and unconditional ally" (18).

SCALING THE AEIOTU MODEL FOR GREATEST REACH AND IMPACT

By December 2015, aeioTU had 28 ISO 9001-certified centers serving 13,315 children, with stable contracts and staff, and showed high satisfaction among employees and partners. The centers existed in different contexts and size configurations. Therefore, with the support of the Inter-American Development Bank, in late 2015 aeioTU worked with a social franchise expert from London to prepare the social franchise business plan. The 2015-2025 business plan envisioned 20 aeioTU franchises and ten new directly operated centers. At the Board meeting in December 2015, the recommendation was to go back to the drawing board because the social franchise model was too expensive and difficult to implement in terms of quality and consistency for space, materials, and PD. It would require costly monitoring and supervision. The management team continued to research scaling alternatives and in 2016 piloted two efforts to engage partners in replicating impact, to reach many more children indirectly.

• A collaboration with the National Government to work with 300 ECED centers serving an estimated 45,000 children in low-income communities on the northern coast of Colombia. The goal was to improve these centers' quality and processes. The program was evaluated by Universidad de los Andes (19).
• A partnership with Corpoayapel, an NGO in the province of Cordoba, Colombia, serving 6,000 children. aeioTU shared knowledge and provided support. The LEGO Foundation funded an evaluation of this endeavor, which showed positive results (20).

By December 2016, aeioTU had gained worldwide recognition as an innovative solution for ECED. Nathalia Mesa, CEO of aeioTU since its initiation, was selected as an ASHOKA Fellow. AeioTU participated in the Ashoka Globalizer program, under the ReImagine Learning Initiative of the LEGO Foundation (21). This allowed aeioTU to embark on 6 months of strategic planning, after which it decided to shift to a flexible strategy that included three solutions for expanding beyond the directly operated centers:
1. AeioTU Aprendiendo (22), an internet platform where documents, videos and PD are provided for free, accompanied by short-term additional PD at low cost. This platform has been key since the move to remote instruction during the COVID-19 pandemic.
2. One-year PD modules for centers to support quality improvements.
3. The aeioTU Network membership, which includes information-sharing, networking and fundraising for partners operating across the country.

This new business model preserves the intent of the original plan, but also recognizes the role of the aeioTU centers in the ECED system at large. The social franchise model evolved from a closed, tightly controlled strategy to a more fluid knowledge-sharing strategy where other ECED centers are recognized as peers in a network and are supported in efforts toward increasing quality for a larger number of children. Ruth Gomez, who oversees an estimated USD 1M investment of aeioTU supporting a low-income neighborhood of about 10,000 children in Cartagena, discusses this strategy: "For the Fundacion Grupo Social it is welcoming to work with FC-aeioTU to achieve the transformation of the early education ecosystem... its pedagogical experience validated and recognized in Colombia gives us tranquility that it will contribute to the transformations necessary to achieve an improvement in the quality of life in our territory... we have seen how the community mothers, teachers and leaders have started changing their pedagogical practices and strengthening the services to children. The new materials and improvement in spaces has developed the autonomous learning of children, improving the education and strengthening the role of the family and their interaction with their community" (23).

These three new strategies were received positively by the ECED cluster in Colombia, as shown by the subsequent growth across all three. The aeioTU PD events gathered thousands of teachers. The 1-year PD modules showed very positive evaluation results (19) and continue to be provided to other centers serving low-income children across the country and around the world. These are funded by the national government, local governments and philanthropies. aeioTU's network reached 9,000 children in the first year, a challenge made possible with aeioTU's support in partnerships, financial planning, human resources and pedagogy. This also created economies of scale for buying materials and supplies. Government and grant-making organizations supported further expansion of the training programs, while other ECED centers embraced aeioTU practices, including PD for their teachers. ECED center directors and teachers improved their operations, thousands of teachers participated in the online learning platform, and newly trained teachers tried new pedagogical strategies and methods to benefit children. Results provided by the independent evaluation included (19):

• Classroom transformation,
• Family involvement,
• Increased work of children in creative ways,
• Evolution from free play to imaginary play,
• Increased reading and books available for children,
• Introduction of exploration activities and/or projects for the local community and environment.

Under the work with the Globalizer Program, aeioTU was able to identify the ECED ecosystem and its challenges. This awareness shifted its focus outwards, to the ECED system at large. It allowed aeioTU to understand that there are as many ECED ecosystems in Colombia as there are cities and neighborhoods.
Being present in 13 different communities, aeioTU had to assess and define a strategy for each local context. In some communities, aeioTU then opted to collaborate with others in transforming ECED services. As of December 2019, aeioTU has reached 228,667 children in 1,851 ECED centers working with 17,238 teachers, including in other parts of Latin America (24) and even Africa (25). According to Constanza Alarcón, current Viceminister of Education of Colombia, "FC-aeioTU ventured into the field of ECED within a historic chapter in which Colombia was structuring its early education policies and programs in a comprehensive care approach at a national level. Their valuable experience in the development of an initial educational curriculum, based on evidence, contributed to the country's challenge of increasing coverage of programs with quality, and since then has enriched the State's work in this field, setting up innovative experiences. Its articulation with the private sector and with the national and local government over the years has made it possible to show that comprehensive care for vulnerable children is a responsible, sustainable investment with a high impact on children, their families and communities" (26).

DISCUSSION AND CONCLUSION

AeioTU has proven that it is possible to operate high-quality ECED centers in low-income communities in a variety of contexts. At its inception, the aeioTU model was expensive, so intentional efforts to lower cost were critical for scalability and sustainability. Now, aeioTU is on its way to becoming financially sustainable while having proven it is a high-quality and high-impact model across diverse communities. A key lesson is that a cross-subsidy model works well in contexts of high income inequality if there is a small fraction of families able and willing to pay for high-quality services. In addition, it requires central and local governments willing to work on needed infrastructure beyond their individual political administrations, which is imperative for large-scale efforts. Moreover, the work of aeioTU in communities of violence and trauma further proves that ECED programs can be a feasible solution for working with children in such complex contexts. Finally, while the initial phases of the aeioTU experience predated the Nurturing Care Framework (NCF) more recently established by the World Health Organization (11), the model does in fact encompass various aspects proposed by the NCF, and extends these beyond age 3. The ability of the program to include health monitoring, provide nutrition, develop resources and supports for responsive caregiving, and provide early learning opportunities, as well as to strengthen these in other providers, makes aeioTU a particularly interesting case study in relation to the NCF.

Other core lessons and results after a decade of work include:

• Adaptability of the aeioTU educational experience. It is feasible to adapt a globally known curriculum for use in low-income communities with scarce resources.
• Impact over internal growth. The aeioTU team learned that growing an organization is not the same as growing its impact. An emphasis on the latter is central for sector growth over the growth of a specific model.
• Impact for both children and teachers. Evaluations demonstrated an impact on children (15) and an impact on teachers (9).
• Impact on the ecosystem. The ECED ecosystem evolved from a fragmented, low-quality effort to a more integrated and higher-quality, nation-wide system.
This required that aeioTU evolve from thinking primarily about the families it served to opening itself to supporting other ECED initiatives.

• Partnership building. aeioTU learned the importance of connecting local, national and global actors promoting a high-quality ECED agenda to reinforce and increase impact.
• Continuous learning. aeioTU realized that a true learning organization evolves, adapts and engages with others, receiving feedback from stakeholders, data and evaluations to drive improvements that prioritize collective goals and not just the organization.

Once an innovative solution is proven to work, how to scale becomes critical for impact. Finding the answer to this means continuous thought on the core components of a program and finding alternative mechanisms to spark innovation beyond its boundaries. For aeioTU, this required leadership willing to innovate and to understand that the initial experiences of a program are usually the most imperfect. Only through data, commitment and innovation can a program have large-scale impact. This has been at the core of aeioTU. Key components for success were understanding the specific ecosystem, communicating a vision to key stakeholders and shifting from thinking about aeioTU to thinking systemically.

Unfortunately, the social and educational sectors in many countries operate within contexts of scarce resources and a focus on individual sustainability. The incentives for sharing, co-construction, and cross-sector support are often low. System change requires stakeholders to build collaboratively and integrate resources. Looking back at when aeioTU started, several skeptics thought a large-scale operation of Reggio Emilia-inspired programs in low-income communities was impossible. A social enterprise business model proved to be an effective way to spark this innovation. However, the social enterprise model also proved too narrow. The adaptation that allowed local ECED programs to improve in different ways, together with the work of engaging in a large-scale learning experiment through knowledge-sharing, proved more successful.

The aeioTU experience proves it is possible to bring high-quality ECED services to any child anywhere. Public and private sectors can work together in transforming ECED ecosystems. A comprehensive vision and encompassing engagement of partners are key to the aeioTU model's ability to engage in large-scale systemic change. The need for encompassing partnerships is referenced in the NCF and its appeal for engagement across all relevant stakeholders. The aeioTU experience has therefore proven that the NCF vision is feasible in low- and middle-income countries.

DATA AVAILABILITY STATEMENT

The original contributions are presented in the study. Data availability is not applicable for this contribution. Further inquiries can be directed to the corresponding author.
10207180
s2orc/train
v2
2016-05-12T22:15:10.714Z
2014-11-17T00:00:00.000Z
β-site amyloid precursor protein-cleaving enzyme 1 (BACE1) inhibitor treatment induces Aβ5-X peptides through alternative amyloid precursor protein cleavage

Introduction: The β-secretase enzyme, β-site amyloid precursor protein-cleaving enzyme 1 (BACE1), cleaves amyloid precursor protein (APP) in the first step in β-amyloid (Aβ) peptide production. Thus, BACE1 is a key target for candidate disease-modifying treatment of Alzheimer's disease. In a previous exploratory Aβ biomarker study, we found that BACE1 inhibitor treatment resulted in decreased levels of Aβ1-34 together with increased Aβ5-40, suggesting that these Aβ species may be novel pharmacodynamic biomarkers in clinical trials. We have now examined whether the same holds true in humans.

Methods: In an investigator-blind, placebo-controlled and randomized study, healthy subjects (n = 18) were randomly assigned to receive a single dose of 30 mg of LY2811376 (n = 6), 90 mg of LY2811376 (n = 6), or placebo (n = 6). We used hybrid immunoaffinity-mass spectrometry (HI-MS) and enzyme-linked immunosorbent assays to monitor a variety of Aβ peptides.

Results: Here, we demonstrate dose-dependent changes in cerebrospinal fluid (CSF) Aβ1-34, Aβ5-40 and Aβ5-X after treatment with the BACE1 inhibitor LY2811376. Aβ5-40 and Aβ5-X increased dose-dependently, as reflected by two independent methods, while Aβ1-34 dose-dependently decreased.

Conclusion: Using HI-MS for the first time in a study where subjects have been treated with a BACE inhibitor, we confirm that CSF Aβ1-34 may be useful in clinical trials on BACE1 inhibitors to monitor target engagement. Since it is less hydrophobic than longer Aβ species, it is less susceptible to preanalytical confounding factors and may thus be a more stable marker. By independent measurement techniques, we also show that BACE1 inhibition in humans is associated with APP processing into N-terminally truncated Aβ peptides via a BACE1-independent pathway.

Trial registration: ClinicalTrials.gov NCT00838084. Registered: First received: January 23, 2009; Last updated: July 14, 2009; Last verified: July 2009.

Introduction

Alzheimer's disease (AD) is a slowly progressing brain disease manifesting several neuropathological characteristics, including accumulation of extracellular plaques, mainly composed of amyloid-β (Aβ) peptides of various lengths [1,2]. Aβ is derived via two-step enzymatic cleavage of the transmembrane amyloid precursor protein (APP) catalyzed by the β-site APP-cleaving enzyme 1 (BACE1, β-secretase) [3] and γ-secretase [4]. BACE1 cleaves APP at the first amino acid of the Aβ domain and is crucial for the production of Aβ peptides starting at position 1, including Aβ1-42. Thus, BACE1 is a key target for disease-modifying AD treatments, since one focus for such therapies is to minimize Aβ production [5]. To evaluate the biochemical effects of novel BACE1 inhibitor candidates, biomarkers that reflect target engagement are needed [6]. Analyzing a wide range of Aβ species in cerebrospinal fluid (CSF) gives useful information on APP metabolism in humans [7,8]. In a recent preclinical study, we showed that APP-transfected cells and dogs treated with several different BACE1 inhibitors expressed decreased levels of Aβ1-34 and concurrently increased levels of Aβ5-40 in cell media and CSF, suggesting that these peptides may be pharmacodynamic markers of BACE1 inhibition in the central nervous system (CNS) [9].
Inhibition of γ-secretase, another AD drug candidate approach, increased APP processing via the α-secretase-mediated pathway [10-13] and decreased CSF levels of Aβ1-34 in humans, even at dosages when Aβ1-42 was unchanged, further supporting the use of novel CSF biomarkers to monitor target engagement of anti-Aβ drugs [14-16]. Here, for the first time with a peptidomics approach, we have demonstrated changes in CSF levels of Aβ1-34 and Aβ5-40 in humans treated with the BACE1 inhibitor LY2811376 (Eli Lilly and Company, Indianapolis, IN, USA). The translation of these findings from preclinical models to man indicates that CSF Aβ1-34 and Aβ5-40 have potential utility as markers of BACE1 inhibition in clinical research.

Subjects

The study, conducted at PAREXEL International Early Phase Los Angeles, CA, USA, from February to June 2009, was previously reported in detail [17]. In brief, the study was a subject- and investigator-blind, placebo-controlled, randomized, single-dose design. The California Institutional Review Board approved the study. All subjects provided written informed consent before the beginning of the study. The trial was conducted in compliance with the Declaration of Helsinki and International Conference on Harmonisation/Good Clinical Practice guidelines. Eighteen healthy subjects (21 to 49 years old, seventeen men and one woman) participated in the study and were randomly assigned to receive a single dose of 30 mg of LY2811376 (n = 6), 90 mg of LY2811376 (n = 6) or placebo (n = 6). An indwelling lumbar catheter was placed four hours before administration of the study drug and subjects remained supine for the duration of the CSF sample collection period. CSF samples were collected prior to and at regular intervals over 36 hours after drug administration and analyzed by immunoprecipitation in combination with mass spectrometry (MS). All CSF samples were collected in polypropylene tubes and stored at -80°C.

Hybrid immunoaffinity-mass spectrometry

Immunoaffinity capture of Aβ species was combined with matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) MS for analyzing a variety of Aβ peptides in a single analysis, as described in detail elsewhere [18]. In brief, the anti-Aβ antibodies 6E10 and 4G8 were separately coupled to magnetic beads. After washing of the beads, the 4G8- and 6E10-coated beads were used in combination for immunoprecipitation. After elution of the immune-purified Aβ peptides, analyte detection was performed on an UltraFlextreme MALDI TOF/TOF instrument (Bruker Daltonics, Bremen, Germany). For relative quantification of Aβ peptides, an in-house developed MATLAB (Mathworks Inc., Natick, MA, USA) program was used. For each peak, the sum of the intensities for the three strongest isotopic signals was calculated and normalized against the sum for all the Aβ peaks in the spectrum, followed by averaging of results for separately determined duplicate samples. In the 30-mg group, one sample, six hours post-treatment, was omitted from further analysis due to blood in the CSF.

Enzyme-linked immunosorbent assay

For quantification of Aβ5-40 and Aβ5-X using ELISA, microtiter plates were coated with 10 μg/mL 2G3 [21] (anti-Aβx-40; epitope including valine at position 40, Eli Lilly & Company, Indianapolis, IN, USA) or 266 [22] (anti-Aβ1-x; epitope 13-28, Eli Lilly & Company) overnight at 4°C.
After blocking plates in 2% bovine serum albumin (BSA), dilutions of Aβ5-40 standards (Anaspec) and CSF samples were incubated on plates in 1% BSA, 0.55 M guanidine-HCl, 5 mM Tris in phosphate-buffered saline (PBS) with complete ethylenediaminetetraacetic acid (EDTA)-free protease inhibitor (Roche, Mannheim, Germany) overnight at 4°C. After washing in PBS-0.05% Tween 20, biotinylated 5H5 (anti-Aβ5-x; epitope including arginine at position 5, Eli Lilly & Company) was used to detect the truncated Aβ beginning at the arginine at position 5. The 5H5 monoclonal antibody was developed in mice following standard methods, and the specificity for the truncated Aβ5-x was investigated by acid urea gel (a technique that separates Aβ peptides by mass and charge) and ELISA methods. Acid urea gel separation of synthetic Aβ peptides followed by Western blotting with 5H5 revealed complete selectivity for the truncated Aβ5-42 as compared to full-length Aβ1-42. Additionally, acid urea gel/5H5 Western blotting analysis of human cortical tissue from multiple Alzheimer's subjects resulted in a single identifiable band that co-migrated at the same position as the synthetic Aβ5-42 standard. Note that the migration of the Aβ peptides in this gel system completely separates Aβ5-42 from all other Aβ peptides (truncated or full-length). ELISA analyses to investigate the 5H5 epitope selectivity demonstrated a 20,000-fold selectivity for the Aβ5-x epitope versus the full-length peptide (Aβ1-x). Following additional washes in PBS-0.05% Tween 20, plates were incubated with streptavidin-horseradish peroxidase (HRP) (Biosource, San Diego, CA, USA) and, subsequently, 3,3′,5,5′-tetramethylbenzidine (TMB) (Sigma, St. Louis, MO, USA) color development was monitored at 650 nm in a spectrophotometer. Quantification of CSF sAPPα and sAPPβ was conducted as described previously, and the results from these analyses have already been published [17].

Statistical analysis

The time series for each treatment were analyzed using Friedman's test (SPSS v13, Chicago, IL, USA). A dose-dependent effect was considered significant if P < 0.05 and if the P-value decreased with increasing dose. Association analyses were performed by Spearman's rank correlation, and the correlation coefficient is presented as Spearman's rho (rs).

LY2811376 induces a characteristic Aβ peptide pattern in a human-derived neuroblastoma cell line

As expected, SH-SY5Y cells treated with the BACE1 inhibitor LY2811376 or BACE IV secreted less Aβ1-40 and Aβ1-42 into the cell medium, while the relative levels of Aβ5-40 (relative to the other Aβ peptides detected) increased, as compared to vehicle-treated cells (Figure 1). These data clearly demonstrate that LY2811376 inhibits BACE1 activity and that the generation of Aβ5-40 is BACE independent.

The BACE1 inhibitor LY2811376 causes a relative reduction in CSF Aβ1-34 and an increase in CSF Aβ5-40 in humans as reflected by mass spectrometry

To evaluate whether the BACE1-mediated changes described in exploratory Aβ biomarker studies were translatable to humans, the CSF mass spectrometric Aβ peptide pattern from untreated subjects was compared to the pattern from subjects treated with different doses of the BACE1 inhibitor LY2811376. Representative CSF Aβ peptide mass spectra from a subject before treatment and 36 hours after drug administration are shown in Figure 2A-D.
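As a concrete illustration of the analysis pipeline described in the Methods (relative quantification of Aβ peptides from the mass spectra, Friedman's test across sampling time points, and Spearman's rank correlation for associations), a minimal Python sketch is given below. The original analyses used an in-house MATLAB program and SPSS; all peak intensities and measurement values here are hypothetical placeholders, not study data.

```python
# Illustrative sketch only; not the study's in-house MATLAB/SPSS code.
from scipy.stats import friedmanchisquare, spearmanr

def relative_levels(spectrum):
    """Relative quantification as described in the Methods: for each Abeta species,
    sum the three strongest isotopic peak intensities, then normalize against the
    summed signal of all Abeta species in the same spectrum."""
    sums = {species: sum(sorted(peaks, reverse=True)[:3])
            for species, peaks in spectrum.items()}
    total = sum(sums.values())
    return {species: s / total for species, s in sums.items()}

# Hypothetical isotopic peak intensities for three Abeta species in one spectrum.
spectrum = {
    "Abeta1-34": [120.0, 95.0, 60.0, 20.0],
    "Abeta1-40": [900.0, 760.0, 540.0, 200.0],
    "Abeta5-40": [15.0, 11.0, 7.0],
}
rel = relative_levels(spectrum)  # duplicate samples would then be averaged

# Friedman's test across sampling time points within one treatment group
# (placeholder arrays: one relative level per subject at each time point).
baseline = [0.21, 0.19, 0.22, 0.20, 0.23, 0.18]
h18 = [0.15, 0.14, 0.16, 0.13, 0.17, 0.12]
h36 = [0.18, 0.17, 0.19, 0.16, 0.20, 0.15]
stat, p_friedman = friedmanchisquare(baseline, h18, h36)

# Spearman's rank correlation for association analyses (e.g. Abeta5-X vs sAPPalpha).
rho, p_rho = spearmanr([1.0, 1.4, 1.9, 2.3, 2.8, 3.1],
                       [0.9, 1.2, 2.0, 2.1, 2.9, 3.0])
print(rel, p_friedman, rho, p_rho)
```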
Although barely detectable versus background before treatment, BACE1 inhibition increased the mass spectrometric signal for Aβ5-40, while the signal corresponding to Aβ1-34 decreased. In total, 13 Aβ species ranging from Aβ1-15 up to Aβ1-42 were reproducibly detected. The BACE1 inhibitor LY2811376 dose-dependently reduced Aβ1-34 relative to baseline, with nadirs of 42% in the 30-mg group (P = 0.002) and 57% in the 90-mg group (P < 0.001), respectively, 24 hours after drug administration (Figure 3A). By contrast, LY2811376 dose-dependently increased Aβ5-40 to a maximum relative to baseline after 18 hours in the 30-mg (P = 0.213) and the 90-mg (P < 0.001) groups, respectively (Figure 3B). The mass spectrometric signal for Aβ5-40 in the placebo group was below the limit of detection, while in the 90-mg treatment group the signal-to-noise ratio was 4 to 5. At 36 hours post-treatment, both Aβ5-40 and Aβ1-34 had started to return towards baseline levels in both treatment groups.

The BACE1 inhibitor LY2811376 causes an absolute increase in both CSF Aβ5-40 and Aβ5-X in humans as reflected by ELISA

The increase in Aβ5-40 detected by mass spectrometry in response to treatment with the BACE1 inhibitor LY2811376 was further confirmed by a proprietary ELISA. While the placebo concentrations were low, in the range of approximately 100 pg/mL and approximately 50 pg/mL for Aβ5-X and Aβ5-40, respectively, there were clear increases in the LY2811376 high-dose (90 mg) group over time for both Aβ5-X and Aβ5-40 (Figure 3C-D), of which the increase in Aβ5-X was statistically significant (P = 0.02). The ELISA-determined concentrations of Aβ5-42 were too low to yield an accurate assessment, which is in agreement with the mass spectrometric data, where Aβ5-42 could not be detected in any treatment group. In the 90-mg dose group, the concentrations of Aβ5-X and sAPPα were positively correlated (rs = 0.94, P = 0.02), consistent with a compensatory increase, while Aβ5-X was negatively correlated with sAPPβ (rs = -0.89, P = 0.03), as presented in Figure 4A,B. There were no correlations between the two peptides starting at amino acid five and sAPPα or sAPPβ in the 30-mg and placebo groups.

Discussion

In the present study, we show marked effects on CSF Aβ5-40 (which increases) and Aβ1-34 (which decreases) in response to BACE1 inhibitor treatment. These findings confirm earlier pre-clinical data [9] and suggest that CSF Aβ5-40 and Aβ1-34 may be useful pharmacodynamic markers for assessing the biochemical effects of BACE1 inhibitors in the CNS in clinical trials. The relatively low concentrations of both Aβ5-40 and Aβ5-X fit previous findings, with comparable percentage reductions in Aβ1-40 versus AβX-40 and Aβ1-42 versus AβX-42 in dog CSF following oral administration of LY2811376 [17]. Since the discovery and molecular cloning of BACE1 in 1999 by several independent groups, this enzyme has been a tempting target for pharmacological lowering of cerebral Aβ levels with the intent of treating or preventing AD. To date, there are only a few reports of BACE1 inhibitors that have demonstrated sufficient access to the brain. In a recent paper, oral administration of the non-peptidic BACE1 inhibitor LY2811376 to healthy subjects (the same subjects as included in the present study) dose-dependently lowered CSF Aβ1-40, Aβ1-42 and sAPPβ levels and dose-dependently increased CSF sAPPα, providing evidence of desirable central pharmacodynamic effects on APP processing [17].
In another study, a therapeutic antibody that reduces BACE1 activity was used, resulting in lowered CNS Aβ concentrations in preclinical models [23]. Whether this approach can be translated to humans, and whether other Aβ species besides Aβ1-40 are affected in response to treatment, remains to be elucidated. LY2811376 treatment consistently increased CSF levels of Aβ5-40. The increase of Aβ5-40 in response to BACE1 inhibition clearly suggests that Aβ peptides starting at position 5 are formed via a BACE1-independent APP-processing pathway [9]. In agreement with this, it has been suggested that inhibition of BACE1 might be linked to a distinct processing of APP between Phe4 and Arg5 mediated by α-secretase-like proteases [24]. Other enzymes which might cleave in this region of Aβ include α-chymotrypsin, myelin basic protein and protease IV [25]. However, while these enzymes have been shown to cleave Aβ in vitro, CNS data showing which enzyme cleaves between Phe4 and Arg5 upon inhibition of BACE1 are lacking.

Recently, we showed in pre-clinical models that CSF Aβ1-34 is a sensitive marker for BACE1 inhibition [9]. We have previously shown, in two independent clinical trials, that CSF Aβ1-34 is a pharmacodynamic marker of γ-secretase inhibition in humans [14,15], and here we show for the first time that it is also a marker of BACE1 inhibition in humans. It has been shown that the cleavage between Leu34 and Met35 depends on both BACE1 and γ-secretase [26,27]. Thus, Aβ1-34 is an intriguing peptide to follow in clinical trials of BACE1 inhibitors, since cleavages at position 1 and position 34 both depend on BACE. It is also possible that Aβ1-34 is more stable than Aβ1-42, as it is less hydrophobic and may thereby be less prone to preanalytical confounding factors. Aβ5-40 has been found in AD brains [28], but the exact role of this Aβ species in AD pathogenesis (and normal physiology), if any, is unknown, and we propose that further studies of its biological functions and of how the peptide might be relevant to AD pathophysiology are warranted.

We found a positive correlation between sAPPα and Aβ5-X. This correlation may reflect a compensatory increase in APP cleavage at the α-site and between amino acids 4 and 5, or that there might be more substrate for these enzymes due to inhibition of BACE. We also found a negative correlation between sAPPβ and Aβ5-X, clearly showing that while the amyloidogenic pathway is suppressed, the (as yet) unknown enzyme generating Aβ5-X cleaves its substrate more.

There are several non-quantitative aspects of HI-MS. The relative quantification using mass spectrometry cannot be interpreted as a direct reflection of an absolute or relative abundance. However, in the present study we have verified the mass spectrometric data showing increased relative levels of Aβ5-40 with a proprietary ELISA showing increased concentrations of both Aβ5-40 and Aβ5-X in response to inhibition of BACE1. It should also be noted that the ELISA measures an absolute concentration, while MS reports the change of Aβ5-40 relative to all other Aβ peptides detected in the same spectrum. A previous study on the same patients as those included in the present study showed a marked decrease in CSF Aβ1-40 in response to LY2811376 treatment [17]. Due to the relative quantification used in the present study, we were not able to measure the expected decrease.
However, by implementing isotopically labelled Aβ peptides for each peptide of interest, relatively small changes in response to treatment should be possible to detect with HI-MS.

Conclusions

In summary, our results confirm that CSF Aβ1-34 may be useful in clinical trials on BACE1 inhibitors to monitor target engagement. By independent measurement techniques, we show that BACE1 inhibition in humans is associated with APP processing into N-terminally truncated Aβ peptides via a BACE1-independent pathway. The data presented also provide evidence for CSF Aβ1-34 and Aβ5-40 as translatable pharmacodynamic markers of BACE1 inhibition from cell and animal models to humans.

Competing interests

The clinical trial part of the study was sponsored by Eli Lilly & Company. For the biochemistry part, the sponsors had no role in study design, data collection, data analysis, data interpretation, or writing of the article. EP, UA, NM, AW, MO, HZ and KB declare that they have no competing interests. RAD, RBD, MMR and PCM are employees of Eli Lilly and Company.

Authors' contributions

EP planned the experimental design, analyzed and interpreted mass spectrometric data and drafted the manuscript. RAD designed and managed implementation of the clinical trial and interpreted data. UA analyzed and interpreted data. NM analyzed and interpreted data. AW acquired mass spectrometric data and interpreted results. MO performed cell studies and interpreted results. RBD designed the ELISA methods, generated the ELISA data and interpreted results. MMR designed the ELISA methods, generated the ELISA data and interpreted results. HZ analyzed and interpreted data. PCM analyzed and interpreted data. KB planned the experimental design, analyzed and interpreted data. All authors revised the manuscript critically for important intellectual content. All authors read and approved the final manuscript.
233643080
s2orc/train
v2
2021-05-05T00:07:54.783Z
2021-01-01T00:00:00.000Z
Bilateral continuous erector spinae block versus multimodal intravenous analgesia in coronary bypass surgery: A Randomized Trial

ABSTRACT

Multiple studies have confirmed that erector spinae block is effective in thoracic and breast surgeries. However, studies which investigate the efficacy of this block in cardiac surgery are scarce. This study aimed to compare continuous erector spinae block with multimodal intravenous analgesia in coronary bypass surgery.

Methods: Forty patients undergoing coronary bypass surgery were divided into either group A (IV) (n = 20), who received multimodal intravenous analgesia, or group B (ES) (n = 20), who had continuous erector spinae block. We compared the two groups regarding Visual Analog Scale (VAS) scores until 48 h after extubation, total perioperative opioid consumption, post-extubation peak inspiratory flow, duration of mechanical ventilation and ICU stay.

Results: Group B showed a significantly lower VAS score than group A. Intraoperative fentanyl was significantly less in group B (403.75 ± 44.63 µg) versus group A (685 ± 99.47 µg), p = 0.00. Postoperative morphine doses were 50% less in group B (15.9 ± 2.63 mg) versus group A (32.3 ± 5.04 mg), p = 0.00. Peak inspiratory flow was significantly higher in group B after extubation. Duration of ventilation was shorter in group B (4.96 ± 0.71 h) versus group A (6.08 ± 0.69 h), p = 0.00. In addition, ICU stay was also shorter in group B (35.52 ± 3.87 h) versus group A (47.06 ± 5.08 h), p = 0.00. No clinically significant adverse effects were recorded.

Conclusion: Ultrasound-guided bilateral continuous erector spinae block produced safe and effective analgesia for 48 h after extubation following coronary bypass surgery. It also reduced perioperative opioid consumption and allowed early tracheal extubation without major adverse effects.

Introduction

Pain after coronary bypass cardiac surgery is of moderate to severe intensity due to sternotomy, chest tube insertions and internal mammary artery dissection. Consequences of insufficient pain management after open-heart surgery include hemodynamic instability, increased oxygen consumption and pulmonary atelectasis [1,2]. Acute postoperative pain after cardiac surgery with sternotomy is usually controlled by intravenous opioids. Opioids produce predictable, satisfactory analgesia and sedation in postoperative patients but with side effects such as respiratory depression, drowsiness and myocardial depression. Recently, there has been a shift toward reducing opioid usage. Opioid-free analgesia can be achieved by combining regional blocks with non-opioid drugs [3]. Thoracic epidural blockade is the gold-standard neuraxial blockade for post-sternotomy pain, but unfortunately it has serious complications: paraplegia might occur due to epidural hematoma formation after heparinization during cardiac surgery [4]. Although paravertebral blockade is comparable to thoracic epidural blockade regarding analgesic effect in cardiac surgery, it is not widely used, as it may cause vascular injuries or pneumothorax [5,6]. Erector spinae plane block (ESPB) is a recently implemented superficial myofascial plane block. Injecting local anesthetics above the transverse process and below the erector spinae (ES) muscle is a simple and safe technique compared to both paravertebral and thoracic epidural blocks. ESPB was first described by Forero in 2016 for the treatment of thoracic neuropathic pain and post-mastectomy pain syndrome [7].
The analgesic effect of bilateral continuous ESPB in cardiac surgery is not fully investigated. We believed that ESPB might be as effective as epidural and paravertebral blocks, as both dorsal and ventral nerve roots are blocked. Moreover, a sympathetic block of the rami communicantes may lead to visceral analgesia and vasodilatation of the internal mammary vessels that are dissected to prepare the arterial graft for the coronary vessels [8]. ESPB may also have a few merits in comparison with other myofascial blocks that can be used in cardiac surgery, such as parasternal, transversus thoracis, serratus or pecs blocks [9,10]. ESPB is a superficial and easy-to-perform block that can be applied preoperatively. Furthermore, the site of catheter insertion is away from the surgical site.

Aim of the study

Our study aimed to compare the analgesic effect of bilateral continuous ESPB with multimodal intravenous analgesia in coronary bypass surgery.

Methods

This study was a prospective comparative randomized study, and patients were allocated randomly into two groups by a sealed-envelope technique after computer-generated randomization. The participants and investigators could not be blinded because of the invasive nature of the study. It was conducted from March 2019 to September 2020. After ethical committee approval, the protocol was registered at ClinicalTrials.gov (NCT03866733).

Sample size

The sample size was calculated using the STATA program, setting the alpha error at 5% and power at 90%, according to the results of a previous study by Krishna et al., which showed that 47.6% of the cases in group I had a pain score of 3/10 at 10 and 12 h after extubation, while none in group II had a pain score of 3 at the same time points [11]. According to that study, the estimated sample was 20 cases in each group.

Patients included in the study were candidates for elective coronary bypass cardiac surgery via median sternotomy, with a body mass index < 30 kg/m² and a left ventricular ejection fraction > 50%. Patients with stenosis of the left main coronary artery or on anticoagulants were excluded. Other causes of exclusion were preexisting respiratory, neurological or renal disease; allergy to local anesthetics; uncooperative or psychiatric patients; and patient refusal. Intraoperative causes of exclusion were prolonged CPB time (> 120 min) or intraoperative inotropic support (dobutamine > 5 µg/kg/min or epinephrine infusion > 1 µg/min). Patients who had a catheter dislodgement or who developed any postoperative complications such as bleeding, arrhythmias or renal impairment were excluded from the final analysis. Sixty patients were enrolled in the study but only 20 patients in each group completed the study. All the surgeries were done by the same surgical team. Informed consent was obtained from each participant. The Visual Analog Scale (VAS) scores of postoperative pain and pain control methods were explained to each patient during the preoperative visit [12].

Patients in group A received oral pregabalin 150 mg 2 h before surgery. After insertion of wide-bore intravenous access, all patients received 2 mg midazolam, and then an arterial cannula was inserted under local anesthetic infiltration. General anesthesia was induced with fentanyl 3 µg/kg followed by propofol (1 mg/kg) and cisatracurium (0.15 mg/kg). An endotracheal tube was inserted, and patients were mechanically ventilated. A central venous line was inserted and secured. Anesthesia was maintained with sevoflurane in a mixture of oxygen and air 1:1 (FIO2 = 50%) and cisatracurium.
A fentanyl bolus (1 µg/kg) was given before sternotomy, and whenever systolic arterial blood pressure and/or heart rate increased by more than 20% above baseline, in both groups. Before induction of anesthesia in group B, we counted and marked the spinous processes from C7 to T7 while the patient was in the sitting position, guided by bony landmarks and ultrasound scanning. Bilateral ESPB was performed after induction of general anesthesia and positioning the patient in the lateral position. The left lateral position was preferred, as the radial arterial catheter was often inserted in the left forearm. We used a 6-12 MHz linear transducer (SonoSite M-Turbo, USA). The probe was first placed in a transverse view over the T5 spinous process, and then we moved laterally to view the lamina followed by the transverse process at approximately 3 cm from the median plane. Lastly, we rotated the probe to obtain a longitudinal view of adjacent transverse processes in the paramedian sagittal plane. Three muscles, from superficial to deep (trapezius, rhomboids and ES), were seen above the hyperechoic transverse processes. Using an in-plane technique, a Tuohy epidural needle was inserted deep to the ES muscle in a caudal-to-cephalad direction. Correct needle location was visualized by saline hydrodissection, and then an epidural catheter (B. Braun epidural kit) was threaded and secured. The same steps were repeated on the other side (Figure 1). Bilateral ESPB was activated in all patients in the supine position while a CV line was inserted and other monitors were applied. After negative aspiration, bupivacaine 0.25%, 15 ml total volume, was given through the left catheter over 15 min (5 ml every 5 min), followed by the right catheter. No other boluses were given during the surgery. After ICU transfer, bupivacaine 0.125% at 8 ml/h was infused postoperatively for 48 h after extubation using a silicone balloon infuser (Accufuser, Woo Young Medical Co., Korea, 300 ml).

Heart rate and mean arterial blood pressure were documented at baseline (after induction of general anesthesia), before skin incision, after skin incision (skin incision was done 20 min after activation of both sides in group B), after sternotomy, 10 min after the end of cardiopulmonary bypass and before ICU transfer. Hypotension and bradycardia were properly managed by the anesthesia team. Hypotension was defined as a mean arterial blood pressure < 65 mmHg off-pump or < 50 mmHg on-pump and was treated with intravenous noradrenaline 4-8 µg. Bradycardia was defined as a heart rate < 50 bpm and was treated with intravenous atropine 0.01 mg/kg. If the patient had both hypotension and bradycardia, ephedrine 5-10 mg was given. After fulfilling extubation parameters, patients were extubated in the ICU. We started acetaminophen 1 g/6 h in both groups, to which ketorolac 30 mg/12 h was added in group A if there was no contraindication to NSAIDs (renal impairment, gastric ulcer, bleeding tendency or bronchial asthma). Intravenous morphine boluses of 0.05 mg/kg were given to patients in both groups by the nurse upon patient request as rescue analgesia.

Our primary outcome was the postoperative pain score measured by VAS, assessed at 0 h (extubation), 4, 8, 12, 24 and 48 h by a nurse not included in the study. Secondary outcomes were intraoperative fentanyl and postoperative morphine consumption, time to extubation, and peak inspiratory flow rate at 8, 12, 24 and 48 h measured with an incentive spirometer (1 ball = 600 ml/min, 2 balls = 900 ml/min and 3 balls = 1200 ml/min). The duration of ICU stay was also recorded.
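For readers who prefer the intraoperative trigger rules above in explicit form, the following minimal Python sketch restates them; the thresholds and drug doses are taken from the text, while the function names and structure are purely illustrative (no such software was part of the study).

```python
# Illustrative restatement of the hemodynamic rules described above; not study software.

def needs_rescue_fentanyl(sbp, hr, baseline_sbp, baseline_hr):
    """Rescue fentanyl 1 ug/kg if SBP and/or HR rise more than 20% above baseline."""
    return sbp > 1.2 * baseline_sbp or hr > 1.2 * baseline_hr

def hemodynamic_intervention(map_mmhg, hr, on_pump):
    """Hypotension: MAP < 65 mmHg off-pump or < 50 mmHg on-pump (noradrenaline 4-8 ug).
    Bradycardia: HR < 50 bpm (atropine 0.01 mg/kg). Both together: ephedrine 5-10 mg."""
    hypotension = map_mmhg < (50 if on_pump else 65)
    bradycardia = hr < 50
    if hypotension and bradycardia:
        return "ephedrine 5-10 mg IV"
    if hypotension:
        return "noradrenaline 4-8 ug IV"
    if bradycardia:
        return "atropine 0.01 mg/kg IV"
    return "no intervention"

# Example: off-pump MAP of 60 mmHg with a heart rate of 72 bpm -> noradrenaline.
print(hemodynamic_intervention(map_mmhg=60, hr=72, on_pump=False))
print(needs_rescue_fentanyl(sbp=150, hr=80, baseline_sbp=120, baseline_hr=75))
```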
Complications such as hypotension, bradycardia, catheter-related hematoma or infection were documented. Block failure was diagnosed if high anesthetic and analgesic requirements were needed during surgery or postoperatively.

Results

Sixty patients were assessed for enrollment. Thirteen patients were excluded pre-randomization, and seven patients were excluded post-randomization. Finally, 20 patients in each group completed the study (Figure 2).

Statistical analysis

Data were coded and entered into the Statistical Package for the Social Sciences (IBM SPSS) version 23. Quantitative data were presented as means and standard deviations when their distribution was found to be parametric. Qualitative variables were shown as numbers and percentages. The comparison between the two groups regarding qualitative data was done using the Chi-square test. The comparison between the two independent groups regarding quantitative data was done using an independent t-test. The confidence interval was set to 95% and the accepted margin of error was set to 5%.

There was no statistically significant difference between the two groups in terms of demographic data (Table 1). Intraoperative heart rate (Table 2) and intraoperative mean arterial blood pressure (Table 3) were significantly lower after skin incision and after sternotomy in group B, although the differences between the two groups were not significant at baseline, just after bypass and before ICU transfer. The total intraoperative fentanyl and postoperative morphine consumption was significantly lower in group B. Also, the number of breakthrough pain episodes was significantly lower in group B than in group A (Table 5). Peak inspiratory flow was significantly higher in group B at 8, 12, 24 and 48 h after extubation (as shown in Figure 3). The length of stay in the ICU and the extubation time were shorter in group B (Table 6). Comparing the number of episodes of bradycardia or hypotension in both groups, the difference was not significant (Table 6). Paresthesia of the upper limbs was reported in two patients in group B but resolved after discontinuation of the local anesthetic infusion. No adverse effects related to the catheters, such as hematoma or abscess, were recorded, and no signs of bupivacaine toxicity were observed.

Discussion

Acute postoperative pain after coronary bypass surgery is related to multiple intraoperative factors such as sternotomy, tissue retraction, intercostal nerve trauma and intercostal tube insertion [1]. Opioids provide predictable and satisfactory perioperative analgesia for cardiac surgeries, but they are not without side effects. The interest in perioperative regional blocks in open cardiac surgery was supported by Bigeleisen et al., who demonstrated that patients may benefit from combinations of different pain control strategies [13]. In addition, the advantages of fast-tracking could be the driving force for the use of these techniques [3]. Thoracic epidural and paravertebral blocks are usually avoided by anesthetists and refused by surgeons in cardiac surgery. Although this attitude has changed recently, the fear of hematoma formation with systemic heparinization remains a crucial issue [4,5]. Therefore, the absence of major neurovascular bundles in and around the area of interest renders ESPB safe, especially with anticoagulation [14,15]. Despite that, studies investigating the analgesic effect of ESPB in cardiac surgeries with sternotomy are scarce.
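As a concrete illustration of the group comparisons described in the Statistical analysis paragraph above (Chi-square test for qualitative data, independent t-test for quantitative data), a minimal Python sketch is shown below; IBM SPSS was used in the actual study, and the numbers here are hypothetical placeholders rather than trial data.

```python
# Illustrative sketch only; the study used IBM SPSS v23, and these are not trial data.
import numpy as np
from scipy.stats import chi2_contingency, ttest_ind

# Qualitative variable, e.g. sex distribution per group, as a 2x2 contingency table.
table = np.array([[17, 3],   # group A (IV): male, female
                  [16, 4]])  # group B (ES): male, female
chi2, p_chi, dof, expected = chi2_contingency(table)

# Quantitative variable, e.g. postoperative morphine consumption (mg) per patient.
group_a = [31.0, 33.5, 29.8, 34.2, 32.6, 30.9]
group_b = [15.2, 16.8, 14.9, 17.1, 15.6, 16.3]
t_stat, p_t = ttest_ind(group_a, group_b)  # two-sided independent t-test

print(f"Chi-square p = {p_chi:.3f}; t-test p = {p_t:.4f}")
```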
The present study illustrates the possibility of using continuous ESPB for perioperative analgesia in coronary bypass surgery. Most of the patients in our study were strongly anxious, highly worried and preferred to be anesthetized before receiving the block. We hope our study will increase the awareness among both surgeons and patients about the safety of ESPB in cardiac surgery. Performing the block under sedation before general anesthesia saves intraoperative time and helps to avoid changing the patient position under anesthesia. Additionally, this facilitates the assessment of cutaneous sensory loss as inter-individual variation of intensity of the ESPB is problematic. The results of our study show that patients in group B had significantly lower resting pain scores than those in group A; the median VAS score was ≤2 for 48 h after extubation. Krishna et al. had similar results but for a shorter period as patients in their study received single-shot block [11]. Their patients reported pain scores <4/10 for (8.98 ± 0.14 h) in ESPB group. Despite that, our patients have reported satisfactory pain scores after receiving low volume (3 ml for each dermatome to cover dermatomes from T2 to T6) and low concentration of bupivacaine (0.125%). Using larger bolus volume (20-30) or higher concentration (0.25%) of local anesthetics may produce a better quality of analgesia resembling thoracic epidural block. However, side effects of higher volumes and concentrations should be firstly investigated. Receiving a high dose of opioids has been a predictor of patient readmission within 30 days after cardiac surgery [16]. In order to decrease opioid consumption after cardiac surgery, Eljezi et al. added ketoprofen to postoperative morphine in the first 48 h [17]. In agreement with the previous research, we gave patients in group A ketorolac postoperatively in addition to acetaminophen. Therefore, postoperative morphine doses in the present study were 32.3 ± 5.04 mg versus 38 (27-45 mg) in the previously mentioned study. Morphine consumption rather decreased by 50% in our patients of group B (15.9 ± 2.63 mg), p = 0.00. Similarly, Bogusław et al. demonstrated that patients who received unilateral ESPB for valve replacement via minithoractomy consumed less postoperative oxycodone [18]. Effective pain management in group B resulted in significantly higher peak inspiratory flow than the flow in group A. Our results go with the results of Nagaraja et al. They concluded that ESPB was effective as an epidural blockade in improving inspiratory capacity following sternotomy in various cardiac surgeries. [19] Hamilton DL and Manickam B have hinted that ESPB is really an indirect paravertebral block [8]. Multiple studies examined the analgesic effect of the continuous paravertebral block in conventional cardiac surgery. Patients who received the block in those studies experienced early weaning from mechanical ventilation and early transfer to the ward from ICU [20,21]. Concurrent to these findings, extubation time in group B was 4.96 ± 0.71 h in comparison with 6.08 ± 0.69 h in group A, p = 0.00. Moreover, ICU stay of patients in group B was shorter than that in group A, 35.52 ± 3.87 h versus 47.06 ± 5.08 h, p = 0.00. In the present study, patients in group B did not experience significant hypotension or bradycardia, which suggests that hypotension is not a major risk in those patients. No complications were reported due to needle puncture as postoperative hematoma. 
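As a quick arithmetic check of the relative reductions quoted above (using only the group means reported in the text, no new data):

```python
# Percent reductions implied by the reported group means (group A vs. group B).
def percent_reduction(a_mean: float, b_mean: float) -> float:
    return 100.0 * (a_mean - b_mean) / a_mean

print(f"morphine consumption: {percent_reduction(32.3, 15.9):.0f}% lower in group B")
print(f"extubation time     : {percent_reduction(6.08, 4.96):.0f}% shorter in group B")
print(f"ICU stay            : {percent_reduction(47.06, 35.52):.0f}% shorter in group B")
```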
Inspection of the catheters was done once daily, and no catheter was removed because of inflammation at the puncture point. Catheters were safely removed 48 h after extubation despite the start of both aspirin and clopidogrel. Limitations of the study: 1- We did not stratify the patients according to the number of grafts required in each patient. 2- We did not record the total doses of vasoactive drugs used during the study, which would have allowed us to examine their effects on heart rate and arterial blood pressure. Conclusion This study suggests that continuous bilateral ESPB may provide safe and satisfactory perioperative pain control after coronary bypass surgery. It decreased perioperative opioid consumption, enhanced early postoperative rehabilitation, and was associated with earlier extubation and ICU discharge, with a low incidence of adverse events. Recommendations - Ropivacaine could be used instead of bupivacaine because of its lower toxicity, allowing higher doses and volumes for better pain control. Declarations Ethics approval and consent to participate: The study was approved by the ethics committee of Ain Shams University (file reference no. FMASU R 07/2019). Written consent was obtained from all patients.
33696150
s2orc/train
v2
2017-09-28T18:21:28.624Z
2017-09-28T00:00:00.000Z
Misoprostol Inhibits Equine Neutrophil Adhesion, Migration, and Respiratory Burst in an In Vitro Model of Inflammation In many equine inflammatory disease states, neutrophil activities, such as adhesion, migration, and reactive oxygen species (ROS) production become dysregulated. Dysregulated neutrophil activation causes tissue damage in horses with asthma, colitis, laminitis, and gastric glandular disease. Non-steroidal anti-inflammatory drugs do not adequately inhibit neutrophil inflammatory functions and can lead to dangerous adverse effects. Therefore, novel therapies that target mechanisms of neutrophil-mediated tissue damage are needed. One potential neutrophil-targeting therapeutic is the PGE1 analog, misoprostol. Misoprostol is a gastroprotectant that induces intracellular formation of the secondary messenger molecule cyclic AMP (cAMP), which has been shown to have anti-inflammatory effects on neutrophils. Misoprostol is currently used in horses to treat NSAID-induced gastrointestinal injury; however, its effects on equine neutrophils have not been determined. We hypothesized that treatment of equine neutrophils with misoprostol would inhibit equine neutrophil adhesion, migration, and ROS production, in vitro. We tested this hypothesis using isolated equine peripheral blood neutrophils collected from 12 healthy adult teaching/research horses of mixed breed and gender. The effect of misoprostol treatment on adhesion, migration, and respiratory burst of equine neutrophils was evaluated via fluorescence-based adhesion and chemotaxis assays, and luminol-enhanced chemiluminescence, respectively. Neutrophils were pretreated with varying concentrations of misoprostol, vehicle, or appropriate functional inhibitory controls prior to stimulation with LTB4, CXCL8, PAF, lipopolysaccharide (LPS) or immune complex (IC). This study revealed that misoprostol pretreatment significantly inhibited LTB4-induced adhesion, LTB4-, CXCL8-, and PAF-induced chemotaxis, and LPS-, IC-, and PMA-induced ROS production in a concentration-dependent manner. This data indicate that misoprostol-targeting of E-prostanoid (EP) receptors potently inhibits equine neutrophil effector functions in vitro. Additional studies are indicated to further elucidate the role of EP receptors in regulating neutrophil function. Overall, our results suggest misoprostol may hold promise as a novel anti-inflammatory therapeutic in the horse. inTrODUcTiOn Neutrophils provide a first-line defense against all types of tissue insult, including invading bacterial pathogens, and sterile tissue injury in both humans and horses. Upon infection, neutrophils move from the vasculature into areas of tissue inflammation by an intricate mechanism of recruitment and activation. This process includes adhesion, crawling, extravasation, interstitial tissue migration, and culminates in the release of bactericidal products such as reactive oxygen species (ROS) and antibacterial proteins (1). While these steps are necessary to defend the host against pathogens, dysregulated or overabundant neutrophil responses elicit substantial host tissue injury (2,3). Indeed, neutrophils have been implicated in the pathogenesis of many devastating disorders in horses including laminitis (4,5), heaves (6,7), and gastrointestinal ischemia-reperfusion injury (8). Horses diagnosed with these conditions could potentially benefit from therapeutics that prevent or minimize excessive neutrophil activation. 
Currently, therapies designed to inhibit neutrophilic inflammation in humans and animals are limited (9), and novel targets for neutrophil inhibition must be identified. One potential molecular target known to regulate neutrophil functions is cyclic AMP (cAMP). cAMP is a ubiquitously produced second messenger molecule that is generated intracellularly through neutrophil G-protein coupled receptor (GPCR) signaling. Inflammatory ligands such as cytokines bind to GPCRs and activate intracellular adenylate cyclase (AC), which catalyzes the cyclization of AMP to form cAMP. cAMP regulation is essential for neutrophil functions including adhesion (10,11), chemotaxis (12), and production of ROS (11,13,14). Naturally occurring cAMP-elevating agents include E-type prostaglandins (PGEs) PGE1 and PGE2. Binding of PGEs to E-prostanoid (EP) 2 and EP4 receptors increases intracellular cAMP and attenuates multiple neutrophil functions in vitro (15)(16)(17)(18)(19)(20). Unfortunately, clinical use of prostaglandins is limited because they are unstable and have poor oral bioavailability. One PGE analog that is both stable and well absorbed orally is misoprostol (21). Misoprostol is an EP2, EP3, and EP4 receptor agonist that increases intracellular cAMP and is FDA-approved to treat NSAID-induced ulceration in humans (21)(22)(23). In horses, misoprostol has been shown to decrease gastric acid secretion, increase recovery of ischemia-injured equine jejunum, and is currently used to treat NSAID-induced colitis and ulceration (24)(25)(26). The anti-inflammatory properties of misoprostol, however, have yet to be studied in equine neutrophils. Therefore, our goal was to evaluate misoprostol as a novel anti-inflammatory therapeutic in equine neutrophils. We hypothesized that the PGE1 analog misoprostol would inhibit proinflammatory functions of stimulated equine neutrophils in vitro. This study is the first to demonstrate that misoprostol pretreatment attenuates equine neutrophil adhesion, chemotaxis, and ROS production in a concentration-dependent manner. equine Donors and neutrophil isolation All experiments were approved by the Institutional Animal Care and Use Committee at North Carolina State University (NCSU). Horses included in this study were part of the NCSU Teaching Animal Unit herd, 5-15 years of age, and of mixed breed and gender. All horses were deemed healthy upon physical examination of a board-certified equine internal medicine specialist and were housed under similar conditions and did not receive any medications for the duration of the study. Neutrophils were isolated from equine whole blood by density-gradient centrifugation as previously described (27). Briefly, 30-60 cc of heparinized equine whole blood was collected via jugular venipuncture. Whole blood was placed into sterile conical tubes for 1 h at room temperature to allow erythrocytes to settle out of suspension. The leukocyte-rich plasma (supernatant) was layered onto Ficoll-Paque Plus (GE Healthcare, Sweden) at a 2:1 ratio. Cells were centrifuged and erythrocyte contamination was removed from the neutrophil pellet via 1-min hypotonic lysis. Misoprostol Pretreatment Neutrophils were pretreated with indicated concentrations of misoprostol, db-cAMP, wortmannin, staurosporine, or vehicle for each inhibitor, for 30 min at 37°C. Cell viability was evaluated before and after pretreatment using trypan blue exclusion and was routinely >98%. neutrophil adhesion Equine neutrophil adhesion methods have been optimized in our lab previously (27). 
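Before the individual assays are described, a note on the pretreatment step above: the working concentrations used for misoprostol and the control agents appear with each assay below, and reaching them is a matter of simple C1V1 = C2V2 dilution arithmetic. The sketch below is generic illustration only; the 100 mM stock concentration and 200 µl well volume are assumptions, not values from the paper.

```python
# C1*V1 = C2*V2 dilution arithmetic for a pretreatment concentration series.
# Assumed (not from the paper): 100 mM drug stock, 200 µl final volume per well.
STOCK_uM = 100_000.0          # 100 mM expressed in µM
WELL_VOLUME_uL = 200.0

def stock_volume_uL(final_uM: float) -> float:
    """Volume of stock to add per well to reach the desired final concentration."""
    return final_uM * WELL_VOLUME_uL / STOCK_uM

for final in (1, 10, 50, 100, 200, 300):      # µM, e.g. a misoprostol series
    print(f"{final:>3} µM  ->  {stock_volume_uL(final):6.3f} µl stock per 200 µl well")
```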
Neutrophils were resuspended to a concentration of 1 × 10 7 cells per ml in HBSS. 2 µg/ml of the fluorescent dye calcein AM (Anaspec, Fremont, CA, USA) was added to cells and incubated in the dark at room temperature for 30 min. Following calcein AM-labeling, cells were resuspended at 2.0 × 10 6 in HBSS supplemented with 1 mM Ca 2+ , 1 mM Mg 2+ , and 2% FBS. For immune complex (IC)-induced adhesion, Immulon2HB plates (Thermo Fisher Scientific) were coated with 10 µg BSA overnight at 4°C and then incubated at 37°C for 2 h with 5 µg of anti-BSA antibody. 1 × 10 5 cells were plated per well and incubated for 30 min at 37°C. Wells that were not coated with anti-BSA antibodies served as unstimulated controls. For LTB4-and PMA-induced adhesion, plates were coated overnight with 5% FBS at 4°C. 1 × 10 5 cells were plated in each well and allowed to rest at 37°C for 10 min before addition of 10 ng/ml PMA (or 1 × 10 −5 % DMSO vehicle) or 10 nM LTB4 (or 3 × 10 −3 % ethanol vehicle). Cells were incubated at 37°C for 30 min with PMA or 75 s with LTB4. Following incubation, fluorescence readings were obtained using an fMax plate reader (485 nm excitation, 530 nm emission) to obtain initial fluorescence, and then again after the second (LTB4) and third (IC and PMA) wash. Percent adhesion was calculated as the difference between the initial and final fluorescence readings in each well. Treatment conditions were performed in triplicate on each plate. neutrophil chemotaxis Equine neutrophil chemotaxis methods have been optimized in our lab previously (27,28). Neutrophils were labeled with calcein AM and resuspended in media as described for adhesion experiments. Neuroprobe disposable ChemoTx Systems (Neuroprobe, Gaithersburg, MD, USA) with 3-µm pore size and polycarbonate track-etch filters were used. Cell media containing chemoattractant or vehicle was added to the bottom chamber of each well. Chemoattractants included 10 nM LTB4, 10 nM PAF, 100 ng/ml CXCL8, and vehicle for each chemoattractant (3 × 10 −3 % ethanol for LTB4 and PAF, HBSS for CXCL8). 100% migration control wells were prepared by adding 1 × 10 4 calcein AM-labeled cells to the bottom chamber of three wells. Porous filters were placed over the bottom chambers so that contact between filter and chemoattractant or control media was established. 1 × 10 4 misoprostol-or control-treated, calcein AM-labeled neutrophils were then added to the top portion of the membrance. Plates were incubated for 1 h at 37°C to allow directed cell migration into the bottom chambers (27). Following incubation, non-migrated cells were removed from the top of the filters and EDTA was added for 10 min at room temperature to detach cells remaining within the membrane. EDTA was removed and fluorescence of the bottom well was measured using an fMax plate reader as described above. Percent cell migration was determined by percent fluorescence of wells in each treatment group compared to the 100% migration control wells. Treatment conditions were performed in triplicate on each plate. neutrophil rOs Production Production of ROS was measured using luminol-enhanced chemiluminescence as previously optimized for equine neutrophils (29). Cells were plated on sterile, white, 96-well highbinding plates (Sigma), which were coated with 5% FBS (for LPS-and PMA-mediated respiratory burst) or 5 μg/well IC (for IC-mediated respiratory burst) as described above for adhesion experiments. 
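The two plate-reader calculations described above (percent adhesion from the initial versus final fluorescence readings, and percent migration relative to the 100% migration control wells) reduce to a few lines of arithmetic. The sketch below mirrors those definitions on made-up fluorescence values; it is not the authors' analysis code.

```python
# Minimal sketch of the plate-reader arithmetic described above, using made-up
# fluorescence values; triplicate wells are averaged as in the text.
import numpy as np

def percent_adhesion(initial_rfu, final_rfu):
    """Adherent fraction per well: final reading as a percent of the initial one."""
    return 100.0 * np.asarray(final_rfu) / np.asarray(initial_rfu)

def percent_migration(sample_rfu, full_migration_rfu):
    """Migrated fraction relative to the mean of the 100% migration control wells."""
    return 100.0 * np.asarray(sample_rfu) / np.mean(full_migration_rfu)

# Hypothetical triplicates
initial = [5200, 5100, 5350]
final   = [3600, 3500, 3800]
adh = percent_adhesion(initial, final)
print("adhesion  (%):", np.round(adh, 1), "mean:", round(float(np.mean(adh)), 1))

control_100 = [4100, 3950, 4050]   # labeled cells placed directly in the bottom wells
treated     = [2300, 2450, 2200]
print("migration (%):", np.round(percent_migration(treated, control_100), 1))
```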
Neutrophils were then stimulated using three different experimental protocols: (1) priming for 30 min with 1 ng/ml GM-CSF, followed by stimulation with 100 ng/ml LPS (or PBS vehicle); (2) 100 ng/ml PMA (or 0.01% DMSO vehicle); or (3) 5 µg/well immobilized IC (or no IC as unstimulated control). 1 mM luminol was added to each well, and luminescence was measured every 5 min using a Fluoroskan Ascent FL Microplate Fluorometer and Luminometer (Thermo Scientific). As this assay had not previously been attempted in our lab, preliminary experiments were conducted prior to data collection to determine the optimal number of neutrophils for each stimulant. A standard curve was created by plotting neutrophil cell numbers versus raw luminescence values. From this curve, neutrophil quantities that fell on the linear portion of the curve were selected for analysis (data not shown). In accordance with these preliminary data, neutrophils were resuspended to achieve a final concentration of 3 × 10^5 cells/well (IC), 2 × 10^5 cells/well (LPS), or 1 × 10^5 cells/well (PMA) in HBSS supplemented with 10 µM Ca2+, 10 µM Mg2+, and 2% FBS. A series of kinetics studies was completed to determine the time of maximal significant ROS production in response to each stimulant. Based on these results, the time points selected were 35 min for LPS, 40 min for PMA, and 55 min for IC. The effect of misoprostol pretreatment on stimulated ROS production was then evaluated at those time points. ROS production in misoprostol-pretreated cells was expressed as a percentage of that in stimulated cells. Treatment conditions were performed in triplicate on each plate for both kinetics and experimental studies. Statistical Analysis Data were analyzed using SigmaPlot (Systat Software, San Jose, CA, USA). All data were normally distributed (Shapiro-Wilk test) and are presented as mean ± SEM. Significant differences between treatments were determined with One-Way Repeated Measures Analysis of Variance (One-Way RM ANOVA) with Holm-Sidak multiple comparisons post hoc testing, or a two-tailed t-test, where appropriate. ROS production from one horse was considered a significant outlier via the ESD (extreme studentized deviate) method and was excluded from analysis (α = 0.05). A p-value < 0.05 was considered statistically significant. Misoprostol Pretreatment Inhibits LTB4- but Not PMA- or IC-Induced Equine Neutrophil Adhesion We hypothesized that the cAMP-elevating agent misoprostol would decrease equine neutrophil adhesion, and that a cell-permeant cAMP analog (db-cAMP) would serve as a model for increased intracellular cAMP and thus as a positive control for inhibition of adhesion in our experiments. To examine the effects of misoprostol on varying degrees of β2 integrin binding, we stimulated equine neutrophil adhesion with the GPCR agonist LTB4, the FcγR agonist immune complexes (IC), or the direct PKC agonist phorbol 12-myristate 13-acetate (PMA). Immune complexes stimulated adhesion of 71.3% of equine neutrophils in our assay. Interestingly, pretreatment of cells with misoprostol or db-cAMP had no significant effect on IC-induced adhesion (Figure 1B). Wortmannin pretreatment significantly inhibited IC-induced adhesion to 12.1%.
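Two normalization steps from the ROS assay and statistics described above can be illustrated with a short sketch: checking that the chosen cell numbers sit on the linear part of a cell-number versus luminescence standard curve, and expressing signals from pretreated wells as a percentage of the stimulated control. All numbers below are invented for illustration; the paper's own analysis was done in SigmaPlot with RM ANOVA.

```python
# Sketch of two normalization steps described above, on made-up values:
# (1) linearity check of a cell-number vs. luminescence standard curve,
# (2) treated ROS signal expressed as a percentage of the stimulated control.
import numpy as np

cells = np.array([0.5e5, 1e5, 2e5, 3e5, 4e5])            # cells per well
rlu   = np.array([1.1e4, 2.3e4, 4.4e4, 6.8e4, 7.4e4])    # raw luminescence

slope, intercept = np.polyfit(cells, rlu, 1)
pred = slope * cells + intercept
r_squared = 1 - np.sum((rlu - pred) ** 2) / np.sum((rlu - rlu.mean()) ** 2)
print(f"standard curve: slope = {slope:.3e}, R^2 = {r_squared:.3f}")

stimulated_rlu = 5.2e4                          # vehicle-pretreated, stimulated wells
treated_rlu = np.array([4.6e4, 3.1e4, 2.2e4])   # e.g., 100, 200, 300 uM misoprostol
percent_of_control = 100.0 * treated_rlu / stimulated_rlu
print("ROS, % of stimulated control:", np.round(percent_of_control, 1))
```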
Figure 1: Neutrophils were stimulated with 10 nM LTB4 (or EtOH vehicle) for 75 s, or with 100 ng/ml PMA (or DMSO vehicle) or 5 µg immobilized IC (or 5% BSA control) for 30 min; preincubation with the known adhesion inhibitors wortmannin (for IC) or staurosporine (for LTB4 and PMA) served as positive controls for inhibition; initial fluorescence readings were taken before removal of non-adherent neutrophils by washing (two washes for LTB4, three for IC and PMA), and percent adhesion was calculated as the final versus the initial fluorescence reading in each well; data are mean % adhesion ± SEM; **p < 0.001 and *p < 0.05 indicate a significant difference from stimulated cells pretreated with misoprostol vehicle by One-Way RM ANOVA; n = 3. PMA stimulated approximately 67.5% of equine neutrophils to become adherent. Our results show that db-cAMP had no significant effect on PMA-mediated neutrophil adhesion; however, misoprostol pretreatment significantly enhanced PMA-induced adhesion to a maximum of 87.5%. Neutrophil pretreatment with the PKC inhibitor staurosporine was utilized as a positive control for inhibition in this assay and significantly inhibited PMA-induced equine neutrophil adhesion to 48.0% (Figure 1C). Misoprostol Pretreatment Inhibits Equine Neutrophil Migration toward LTB4, CXCL8, and PAF To our knowledge, there are no previous reports on the effect of the PGE1 analog misoprostol on the chemotaxis of neutrophils from any species. In this study, we stimulated equine neutrophils to migrate using CXCL8, LTB4, and PAF in order to investigate the effects of misoprostol pretreatment on equine neutrophil chemotaxis. Concentrations of chemoattractants used for migration experiments were chosen based on previous work in our lab (27,28,30). LTB4 and CXCL8 were the most potent chemoattractants and induced directed migration of 73.6 and 70.8% of equine neutrophils, respectively (Figures 2A,B). While slightly less potent than LTB4 and CXCL8, PAF also induced significant neutrophil chemotaxis (58.0%, Figure 2C). Misoprostol significantly inhibited LTB4- and CXCL8-stimulated neutrophil migration at 300 µM (Figures 2A,B). Misoprostol significantly inhibited PAF-stimulated neutrophil migration at 100 µM (Figure 2C). Interestingly, db-cAMP also significantly inhibited PAF-mediated neutrophil migration. Production of ROS Is Increased by PMA and Immune Complexes in Unprimed Cells, and by LPS in GM-CSF-Primed Cells We defined the magnitude and kinetics of equine neutrophil ROS production using luminol-enhanced chemiluminescence to establish a reliable assay for the detection of equine neutrophil ROS production. Equine neutrophils were stimulated to produce ROS in response to IC, PMA, and LPS. Equine neutrophils stimulated with 5 µg of immobilized IC produced a robust ROS response that peaked following 60 min of stimulation. This was followed by a decline in ROS production over the subsequent 60 min (Figure 3A). Because of high horse-to-horse variability, ROS production at this 60-min point was not considered significantly increased over unstimulated cells. However, the next highest point of ROS production, at 55 min of IC stimulation, was significantly different from controls (Figure 3A) and was therefore selected for subsequent experiments. PMA (100 ng/ml) stimulation resulted in significant ROS production by equine neutrophils at 10 min (Figure 3B). Equine neutrophils stimulated with 100 ng/ml LPS showed a small increase in ROS production that was not significant compared to controls.
Priming cells with 1 ng/ml GM-CSF for 30 min prior to LPS stimulation led to a significant increase in ROS production, which peaked following 35 min of LPS treatment. GM-CSF priming alone did not have a significant effect on ROS production (Figure 3C). It is worth noting that LPS treatment of primed equine neutrophils resulted in a less potent ROS response than PMA and IC. Peak luminescence values in these cells were one order of magnitude lower than those produced by IC and PMA treatment, indicating that LPS is a weaker stimulant of equine neutrophil ROS production, even following neutrophil priming (Figure 3). Misoprostol Pretreatment Significantly Inhibits IC- and LPS-Induced ROS Production by Equine Neutrophils We next utilized our validated luminol-enhanced chemiluminescence assay to evaluate the effect of misoprostol on ROS production by isolated equine peripheral blood neutrophils in response to our three stimulants, IC, PMA and LPS. Misoprostol pretreatment at 200 and 300 µM inhibited IC-stimulated ROS production in a concentration-dependent manner, to 49.4 and 42.9% of control, respectively (Figure 4A). db-cAMP pretreatment also inhibited IC-stimulated ROS production in a concentration-dependent manner at 750 µM and higher (Figure 4A). Most concentrations of misoprostol pretreatment actually enhanced PMA-stimulated ROS production by equine neutrophils in a concentration-dependent manner (Figure 4B). Unlike the other concentrations tested, 300 µM misoprostol pretreatment significantly inhibited PMA-mediated neutrophil ROS production (Figure 4B). db-cAMP had no effect on PMA-stimulated ROS production. Based on the db-cAMP treatment results, we suspect the inhibitory effect seen with 300 µM misoprostol might be attributed to cAMP-independent mechanisms. The PKC inhibitor staurosporine was used as a positive control for ROS inhibition (Figure 4B). Misoprostol pretreatment significantly inhibited LPS-stimulated ROS production in primed equine neutrophils. This effect was concentration dependent and was observed even at the lowest misoprostol concentration. Similarly, db-cAMP pretreatment inhibited ROS production in a concentration-dependent manner (Figure 4C). Discussion The aim of this study was to investigate the hypothesis that the cAMP-elevating agent misoprostol would have anti-inflammatory effects on equine neutrophil functions, including adhesion, chemotaxis, and ROS production. Consistent with our hypothesis, we show that misoprostol pretreatment inhibits LTB4-stimulated adhesion (Figure 1A), LTB4-, CXCL8-, and PAF-stimulated migration (Figure 2), and LPS-, IC-, and PMA-stimulated ROS generation (Figure 4) of isolated, primary equine neutrophils. Misoprostol pretreatment had no effect on IC-induced adhesion and actually increased adhesion of neutrophils treated with PMA (Figures 1B,C). This study brings to light important information about misoprostol, which is a clinically relevant therapeutic that has been widely used to treat horses with colonic ulceration and has more recently been reported to be superior to omeprazole and sucralfate for healing gastric glandular lesions in horses with clinical disease (31). To our knowledge, we are the first group to establish anti-inflammatory effects of misoprostol on equine neutrophil functions in vitro.
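The concentration dependence emphasized throughout these results can be summarized, for illustration only, by fitting a four-parameter logistic curve to percent-of-control values. This was not part of the authors' analysis, and the data points below are hypothetical; the sketch simply shows one common way such concentration-response data are condensed into a half-maximal concentration.

```python
# Illustrative concentration-response fit (not the authors' analysis).
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic (Hill) curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc_uM  = np.array([1, 10, 50, 100, 200, 300], dtype=float)   # misoprostol
response = np.array([98, 95, 80, 62, 49, 43], dtype=float)     # hypothetical % of control

params, _ = curve_fit(four_pl, conc_uM, response,
                      p0=[40, 100, 100, 1], maxfev=10000)
bottom, top, ic50, hill = params
print(f"estimated IC50 ~ {ic50:.0f} uM (hill = {hill:.2f})")
```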
Given the recent report that misoprostol is an excellent therapeutic for healing gastric glandular lesions in horses, our in vitro results offer relevant information regarding the anti-inflammatory effects of misoprostol. The presumed mechanism for misoprostol's anti-inflammatory effects on neutrophil functions is EP2 and EP4 receptor binding. Activation of these receptors is known to increase intracellular cAMP (15,19,21). cAMP is a ubiquitously produced second messenger molecule that is generated through GPCR signaling in neutrophils. Ligand binding to GPCRs leads to activation of intracellular AC, which catalyzes the cyclization of AMP to form cAMP. cAMP activates two different intracellular pathways that mediate neutrophil adhesion, protein kinase A (PKA) and exchange proteins directly activated by cAMP (Epac) (10,11). Interestingly, the two pathways activated by cAMP have been shown to induce opposing cell signaling cascades. While PKA signaling in neutrophils is primarily inhibitory, Epac signaling is associated with neutrophil activation (32). Recently, it has been suggested that PKA is the predominant cAMP pathway within neutrophils, and thus agents that elevate cAMP hold great promise as inhibitors of neutrophil function in inflammatory disease (33). Previous reports demonstrate a direct link between elevation of intracellular cAMP and inhibition of equine neutrophil functions. Similarities between the effects of misoprostol and db-cAMP in our system suggest that increased cAMP is the predominant mechanism through which misoprostol inhibits most of the evaluated equine neutrophil functions. However, as we did not measure cAMP levels in response to misoprostol pretreatment, additional studies are needed to determine if there is a direct link between misoprostol, intracellular cAMP, and neutrophil functions. To enter inflamed tissues, circulating peripheral blood neutrophils must first adhere to endothelial cells neighboring injured tissues. This requires transient chemoattractant-induced adhesion, followed by firmer adhesion of activated neutrophils. Both adhesion events are mediated by β2 integrins (34). Increased intracellular cAMP has been shown to inhibit β2 integrin-dependent adherence of equine neutrophils (10,11). In this study, pretreatment of equine neutrophils with misoprostol, a cAMP-elevating agent, inhibited LTB4-and IC-induced adhesion in a concentration-dependent manner. However, in contrast to previous studies in our lab (11), inhibition only achieved statistical significance in LTB4 but not IC-stimulated cells (Figures 1A,B). We hypothesize that divergent signaling pathways induced by these endogenous stimulants lead to differing effects of misoprostol on transient (LTB4), versus firm (IC), neutrophil adhesion. In human neutrophils, LTB4-mediated GPCR-stimulated adhesion is PI3K-independent, while IC-mediated FcγR-stimulated adhesion is PI3K dependent (35). These findings are supported in this study in equine neutrophils (Figures 1A,B). Differences in PI3K dependence, as well as additional downstream signaling molecules, could explain the weaker inhibitory effects of misoprostol on IC-versus LTB4-stimulated equine neutrophil adhesion. In contrast to LTB4 and IC, misoprostol pretreatment led to a dose-dependent increase in PMA-stimulated adhesion that is similar to previous reports from our lab (11). PMA is a synthetic mimic of diacylglycerol and permeates the cell to directly activate PKC. 
Because PMA bypasses cell surface receptor activation, it is a non-physiologic stimulus of neutrophil adhesion (1). From this experiment, we conclude that misoprostol inhibits equine neutrophil adhesion responses through a mechanism that is upstream of PKC activation. Following adhesion to the vascular endothelium, neutrophils must crawl along the endothelium and undergo directed interstitial tissue migration in response to chemoattractant gradients to reach sites of tissue injury or infection (36,37). Chemoattractants induce neutrophil migration by engaging GPCRs and activating many downstream signaling pathways, including mitogen-activated protein kinases, phospholipase C, and PI3K. While many of these mechanisms are shared, each unique GPCR initiates different chemotactic responses, intensities, and migration patterns (38). Lipid chemoattractants (such as LTB4 and PAF) initiate neutrophil chemotaxis into inflamed tissues, while chemokines such as CXCL8 act at later stages to amplify neutrophil chemotaxis (39). Because of these differences, we evaluated the effect of misoprostol on multiple types of chemoattractants, including the lipids LTB4 and PAF and the chemokine CXCL8. Human neutrophil migration toward fMLP is enhanced by 1 µM PGE1 pretreatment but is inhibited at higher PGE1 concentrations (40). In this study, misoprostol pretreatment inhibited equine neutrophil chemotaxis toward CXCL8, LTB4, and PAF (Figure 2). Misoprostol most potently inhibited chemotaxis toward PAF, which was the weakest chemoattractant evaluated (Figure 2C). With the more potent chemoattractants (LTB4 and CXCL8), a 300 µM concentration of misoprostol was significantly inhibitory. Interestingly, 1-10 µM misoprostol showed a trend toward enhancing LTB4- and CXCL8-directed migration, but this was not statistically significant (Figures 2A,B). These data are consistent with previous reports that low versus high levels of cAMP can stimulate or inhibit neutrophil migration, respectively (41). Previous studies have demonstrated that endogenous PGEs inhibit cell migration through an EP2 receptor-mediated increase in intracellular cAMP (12,15,21). Additionally, the effect of cAMP on neutrophil chemotaxis varies depending on the concentration of the chemoattractant. For example, increased intracellular cAMP in the presence of optimal concentrations of LTB4 has little effect on neutrophil migration, but the same levels of intracellular cAMP significantly inhibit neutrophil migration toward threshold concentrations of LTB4 (defined as the lowest concentrations of LTB4 that elicit a significant chemotactic response) (12). Taken together with our findings, it is likely that the effects of cAMP on equine neutrophil chemotaxis depend on the specific cAMP-elevating agent and the chemoattractant concentration used. Once within tissues, neutrophils produce ROS, such as superoxide (O2−) and hydrogen peroxide (H2O2), to kill bacterial pathogens. Neutrophils are capable of releasing ROS intracellularly within phagosomes containing engulfed microbes, as well as into the surrounding tissues to kill nearby pathogens. Release of ROS into surrounding tissues can significantly contribute to host tissue damage in many disease states; in patients with inflammatory, rather than infectious, diseases, overabundant ROS production can cause substantial tissue injury. Equine patients with inflammatory diseases would benefit from therapies that restrict neutrophil ROS production.
Misoprostol and other cAMP-elevating agents are known to inhibit human neutrophil ROS production (19,42). Therefore, we hypothesized that misoprostol would inhibit ROS production by LPS-stimulated equine neutrophils. To detect both intra-and extra-cellular neutrophil ROS production, both of which can contribute to host tissue damage, we utilized luminol-enhanced chemiluminescence methods (43). Because LPS alone induced minimal ROS production in equine neutrophils, we hypothesized that priming neutrophils with granulocyte-monocyte colony-stimulating factor (GM-CSF) would enhance LPS stimulation. This hypothesis was based on previous studies which showed that neutrophil priming with GM-CSF prior to stimulation led to more robust ROS generation (44). Consistent with this report, we show that GM-CSF-primed, LPS-stimulated equine neutrophils generate significantly higher levels of ROS compared to LPS-stimulated cells that have not been primed (Figure 3C). Misoprostol and db-cAMP inhibited IC and LPS-stimulated ROS production in a concentration-dependent manner in our study (Figures 4A,C). IC-and LPS-stimulated ROS production is dependent on PI3K activation and is inhibited by PKA (45,46). Together, these data suggest that misoprostol likely inhibits ROS production in IC and LPS-stimulated neutrophils through cAMP-activated PKA. In contrast, PMA stimulates ROS production through direct activation of PKC (47,48). Therefore, this pathway is generally thought to be insensitive to cAMPelevating agents (42,45). Interestingly, while db-cAMP had no effect on PMA-mediated ROS production, 100 µM misoprostol significantly increased ROS levels. Conversely, 300 µM misoprostol significantly decreased ROS production ( Figure 4B), which is consistent with previously published findings (49). Insensitivity of PMA-induced ROS generation to cAMP helps to explain these contradictory findings. Misoprostol is currently used to treat and prevent NSAIDinduced GI injury in equine patients suffering from inflammatory disease (23,24). While select non-steroidal anti-inflammatory drug have been shown to inhibit neutrophil adhesion (50), chemotaxis (51), and respiratory burst (52), previous reports suggest these effects are further augmented by misoprostol (53,54). Previous research has also shown that misoprostol restores mucosal barrier function following ischemia-reperfusion injury in equine small intestine, potentially through a cAMP-dependent mechanism (26,55). Therefore, the addition of misoprostol to NSAID therapy could lead to more complete inhibition of neutrophilic inflammation with fewer GI side effects compared to NSAID treatment alone. Based on these previous reports and our current data, we propose that combining misoprostol with NSAID therapy in horses could help maintain GI health and could potentially inhibit neutrophil inflammatory functions. This study demonstrates for the first time that misoprostol exerts anti-inflammatory effects on equine neutrophil effector functions in vitro. While it is true that relatively high concentrations of misoprostol were necessary for significant inhibition of neutrophil function in vitro, this does not preclude our data from being clinically relevant. We propose that orally administered misoprostol may produce local, cAMP-mediated, antiinflammatory effects on injured GI mucosa. 
While these data are promising, high doses of misoprostol have been associated with negative side effects in human, canine, and equine patients, including abdominal discomfort and diarrhea (56,57). Additional studies utilizing ex vivo and in vivo equine inflammatory models are currently underway to investigate this hypothesis and to assess the safety profile of this drug in relation to its potential anti-inflammatory effects. Additionally, these data provide proof of principle that misoprostol, a known EP receptor agonist, elicits anti-inflammatory effects on equine neutrophils. Based on this finding, we are currently conducting additional studies to evaluate the anti-inflammatory effects of specific EP2 and/or EP4 receptor agonists on equine leukocytes. Ethics Statement This study was approved by the Institutional Animal Care and Use Committee (IACUC) at North Carolina State University. Author Contributions EM was responsible for study design, experimental execution, and preparing the manuscript. RT performed all neutrophil adhesion experiments and critically reviewed the manuscript. MS critically reviewed the manuscript and aided in figure design and layout. SJ was responsible for overseeing all aspects of the study, including study design, and critically reviewed the manuscript. Funding Funding for this study was provided by the Morris Animal Foundation (Grant # D15EQ018).
119134890
s2orc/train
v2
2017-08-24T22:09:42.000Z
2016-05-31T00:00:00.000Z
Index and topology of minimal hypersurfaces in R^n In this paper, we consider immersed two-sided minimal hypersurfaces in $\mathbb{R}^n$ with finite total curvature. We prove that the sum of the Morse index and the nullity of the Jacobi operator is bounded from below by a linear function of the number of ends and the first Betti number of the hypersurface. When $n=4$, we are able to drop the nullity term by a careful study of the rigidity case. Our result is the first effective generalization of Li-Wang. Using our index estimates and ideas from the recent work of Chodosh-Ketover-Maximo, we prove compactness and finiteness results for minimal hypersurfaces in $\mathbb{R}^4$ with finite index. Introduction Minimal hypersurfaces of the Euclidean space R^n are critical points of the area functional. The Jacobi operator coming from the second variation of the area functional gives rise to the Morse index of the minimal hypersurface. In Euclidean space R^n, the second variation formula for a two-sided minimal hypersurface Σ is given by $Q(f, f) = \int_\Sigma |\nabla f|^2 - |A|^2 f^2$. It induces a second order elliptic operator $J = \Delta + |A|^2$, where $|A|^2$ is the sum of the squares of the principal curvatures, and f is a compactly supported smooth function representing the normal variation. The Morse index of a compact subset K ∩ Σ is defined to be the number of negative Dirichlet eigenvalues of J on K ∩ Σ. By the domain monotonicity of eigenvalues, when K_1 ⊂ K_2, index(K_1 ∩ Σ) ≤ index(K_2 ∩ Σ). Hence we may define the Morse index of Σ to be lim_{R→∞} index(B_R(0) ∩ Σ). This limit exists and may be infinite. The classical Bernstein theorem [Ber27] asserts that an entire solution to the minimal surface equation in R^2 must be affine. Later, it was proved by Fischer-Colbrie-Schoen [FCS80], do Carmo-Peng [dCP79] and Pogorelov [Pog81] that the plane is the only stable (index 0) minimal surface in R^3. If we allow positive Morse index, there are many examples of complete immersed minimal surfaces in R^3. In [Cos82], [HM85] and [HM90], the authors constructed embedded minimal surfaces of genus g for any g ≥ 1. The index of a genus g Costa-Hoffman-Meeks surface is 2g + 3, by [Nay92] and [MOR09]. Another example of an immersed minimal surface with finite topology is the Jorge-Meeks surface [JM83]: for any integer r ≥ 3, there is an immersed simply connected minimal surface with r catenoidal ends. The index of a Jorge-Meeks surface with r ends is 2r − 3 [MR91]. These examples indicate a good control of the topology of a minimal surface in R^3 by its Morse index. The relationship between the topology and the Morse index of a minimal surface has been studied by many authors. From the work of Fischer-Colbrie [Dor85], we know that if a minimal surface in a 3-dimensional manifold has finite Morse index, then it is stable outside a compact part. [Cho90], [Ros06] and [DM14] proved that the index of a minimal surface in R^3 is bounded from below by a linear function of the number of ends and the genus. [DM14] also summarized various known results connecting the index and topology of minimal surfaces in R^3 with finite total curvature. Such a 'small index implies simple topology' principle seems natural yet nontrivial when it comes to higher dimensions. In [CSZ97], Cao, Shen and Zhu proved that for all n ≥ 4, complete two-sided stable minimal hypersurfaces have at most one end. Later Shen and Zhu [SZ98] proved that any complete stable minimal hypersurface in R^n with finite total curvature must be a plane.
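The definition of the Morse index above (count the negative Dirichlet eigenvalues of the second variation on larger and larger balls) can be made concrete with a one-dimensional toy computation. The sketch below discretizes a quadratic form of the shape $\int f'^2 - V f^2$ on an interval and counts its negative eigenvalues; the potential $V$ is an arbitrary bump standing in for $|A|^2$, so this only illustrates the counting procedure, not the hypersurface operator itself.

```python
# Toy 1-D analogue of the Morse index: discretize Q(f, f) = ∫ f'^2 - V f^2 with
# Dirichlet boundary conditions and count the negative eigenvalues of the
# associated operator -d^2/dx^2 - V.
import numpy as np

L, N = 20.0, 1200
x = np.linspace(-L, L, N + 2)[1:-1]          # interior grid points (Dirichlet)
h = x[1] - x[0]

V = 2.0 / np.cosh(x) ** 2                    # sample potential playing the role of |A|^2

main = 2.0 / h**2 - V                        # finite-difference matrix of -d^2/dx^2 - V
off = -np.ones(N - 1) / h**2
Q_matrix = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

eigenvalues = np.linalg.eigvalsh(Q_matrix)
print("negative eigenvalues (toy Morse index):", int(np.sum(eigenvalues < 0)))
```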
For minimal hypersurfaces with positive Morse index, Tam and Zhou [TZ09] showed that the high dimensional catenoid has index 1. Schoen proved in [Sch83] that the catenoid is the only connected minimal hypersurface with two regular ends. Li and Wang [LW02] proved that finite index implies finitely many ends. However, their result did not give an explicit control of the number of ends by the index of the minimal hypersurface. It was pointed out in [SY76] that the existence of an L^2 harmonic 1-form violates stability. This was utilized by Cao-Shen-Zhu in [CSZ97] and by Li-Wang in [LW02]. Later Mei and Xu in [MX01] pointed out that if the minimal hypersurface has k ends, then there exists a (k − 1)-dimensional space of L^2 harmonic 1-forms. [Tan96] also investigated the connection between L^2 harmonic 2-forms and stability in low dimensions. In this paper, we combine an idea of Savo [Sav10] with the harmonic 1-form technique discussed above to get an effective estimate relating certain topological invariants and the index of minimal hypersurfaces in R^n. In fact, we can prove: Theorem 1.1. Let Σ^{n−1} be a complete connected two-sided minimal hypersurface in R^n, n ≥ 4. Suppose that Σ has finite total curvature, that is, $\int_\Sigma |A|^{n-1}$ is finite. Then we have $\mathrm{index}(\Sigma) + \mathrm{nullity}(\Sigma) \ge \frac{2}{n(n-1)}\bigl(\#\{\text{ends of }\Sigma\} + b_1(M) - 1\bigr)$, where nullity(Σ) is the dimension of the space of L^2 solutions of the Jacobi operator, and b_1(M) is the first Betti number of the compactification of Σ. When n = 4, we are able to get rid of the nullity term by a more precise rigidity analysis of the construction, and get Theorem 1.3. Suppose Σ^3 is a complete connected two-sided minimal hypersurface in R^4 with Euclidean volume growth. Then the same lower bound holds for the index alone; that is, the nullity term in Theorem 1.1 can be dropped. The assumptions of Euclidean volume growth or finite total curvature in the previous two theorems are natural. In [Tys89], Tysk proved that all minimal hypersurfaces with finite total curvature must be regular at infinity [Sch83]. That means, at each end the surface is a graph over some plane of some function decaying like C|x|^{−n+2}. This precise large scale behavior of each end enables us to perform a more precise analysis. Our theorem has some interesting applications in the study of minimal hypersurfaces in Euclidean space. For example, it is unknown whether the catenoid is the only index 1 minimal hypersurface in Euclidean space. By [Sch83], we know that if a minimal hypersurface has two regular ends, then it is a catenoid. Theorem 1.3 is not strong enough to conclude this. However, we do have the following properties of the space of index 1 minimal hypersurfaces in R^4. Theorem 1.4. The space of complete connected immersed two-sided index 1 minimal hypersurfaces Σ^3 ⊂ R^4 with Euclidean volume growth, normalized such that |A_Σ|(0) = max |A_Σ| = 1, is compact in the smooth topology. Theorem 1.5. There exists a constant R_0 such that the following holds: for any complete connected immersed two-sided minimal hypersurface Σ ⊂ R^4 with finite total curvature and index 1, normalized so that |A_Σ|(0) = max |A_Σ| = 1, Σ is a union of minimal graphs in R^4 − B_{R_0}(0). Example 1.6. Let C_0 be the genus 2 Costa-Hoffman-Meeks surface with 3 ends, one planar end and two catenoidal ends behaving like log |x| and − log |x| near infinity.
It is known that there is a family of deformed surfaces C_t with three catenoidal ends whose growth rates near infinity are approximately a_t log |x|, log |x|, b_t log |x|, with a_t > 0 > b_t and a_t + b_t + 1 = 0. The surface C_t qualitatively looks like two surfaces, one above the other, joined by three catenoidal necks. The curvature of the surface C_t is maximized at the three catenoidal necks. Now if we normalize each C_t to C'_t with |A_{C'_t}|(0) = max |A_{C'_t}| = 1, where 0 lies on one of the three necks, then the other necks of C'_t drift to infinity as t goes to infinity. In particular, for any R > 0, there is some C'_t which is not graphical outside B_R(0). However, the family C'_t has uniformly bounded index. The second application is the finiteness of diffeomorphism types of minimal hypersurfaces in R^4 with Euclidean volume growth and bounded index. Using Theorem 1.3 and ideas from the recent work of Chodosh-Ketover-Maximo [CKM15], we are able to get the following: Theorem 1.7. There exists N = N(I) such that there are at most N mutually nondiffeomorphic complete embedded minimal hypersurfaces Σ^3 in R^4 with Euclidean volume growth and index(Σ) ≤ I. It would be interesting to see in more generality how the index of a minimal hypersurface in R^n depends on its topological invariants. It is conjectured that a statement similar to Theorem 1.3 should hold for 4 ≤ n ≤ 7. Even in dimension 4, we believe that the inequality of Theorem 1.3 is not optimal. For example, it does not answer the question of whether the high dimensional catenoid is the only minimal hypersurface in Euclidean space of index 1. These are interesting questions to investigate in the future. The author would like to express his most sincere gratitude to his advisors, Rick Schoen and Brian White, for bringing this question to his attention, for several enlightening discussions, and for their encouragement. He also wants to thank Robert Bryant and Jesse Madnick for their helpful suggestions in the rigidity discussion, and David Hoffman for a careful description of the Costa-Hoffman-Meeks surfaces. Spectral properties of minimal hypersurfaces with finite total curvature We start by revisiting the following classical result of Fischer-Colbrie. Theorem 2.1. Let Σ be a complete two-sided minimal surface in a 3-manifold with finite Morse index k. Then there exist k L^2 orthonormal eigenfunctions f_1, . . . , f_k of the Jacobi operator J with negative eigenvalues, such that Q(f, f) ≥ 0 for any compactly supported smooth function f that is L^2 orthogonal to f_1, . . . , f_k. The proof of the theorem can be generalized to Σ^{n−1} in R^n without much difficulty. Now let us first recall the definition for a minimal hypersurface to be regular at infinity. Definition 2.2 ([Sch83]). Suppose n ≥ 3. A minimal hypersurface Σ^{n−1} ⊂ R^n is regular at infinity if, outside a compact set, each connected component of Σ is the graph of a function u over a hyperplane P such that, for x ∈ P, u(x) decays like C|x|^{2−n} as |x| → ∞, where C is some constant. In order to perform a more careful rigidity analysis, we use the extra condition that the minimal hypersurface Σ has finite total curvature. For our purposes, we use the fact that if Σ has finite total curvature, then |A| is bounded on Σ, and the induced metric on Σ tends to the Euclidean metric near infinity in the C^2 sense. Proposition 2.4. Let Σ^{n−1} ⊂ R^n be a complete minimal hypersurface with index k that is regular at infinity, and let f_1, . . . , f_k be k L^2 orthonormal eigenfunctions with negative eigenvalues given by Theorem 2.1. Then Q(f, f) ≥ 0 for any function f ∈ W^{1,2}(Σ) that is L^2 orthogonal to f_1, . . . , f_k; moreover, if Q(f, f) = 0, then f lies in the kernel of the Jacobi operator. Proof. We first observe that each f_j is in fact in W^{1,2}. Indeed, f_j is a solution of ∆f_j + |A|^2 f_j = λ_j f_j.
Since Σ is regular at infinity, the operator ∆_Σ is a uniformly elliptic operator, and |A|^2 is bounded. Therefore, by a covering argument and elliptic estimates, we have $\|\nabla f_j\|_{L^2(\Sigma)} \le C \|f_j\|_{L^2(\Sigma)} < \infty$. The first statement follows from a standard cutoff argument. Now let us assume Q(f, f) = 0. We will prove Q(f, g) = 0 for any g ∈ W^{1,2}(Σ). Let us first assume g is a compactly supported smooth function that is L^2 orthogonal to f_1, . . . , f_k. Take a large R > 0 so that supp(g) is contained in B_R(0) ∩ Σ. For every t ∈ R, the function f + tg is L^2 orthogonal to f_1, . . . , f_k, so the first statement gives Q(f + tg, f + tg) ≥ 0; expanding and using Q(f, f) = 0 yields Q(f, g) = 0. If g ∈ W^{1,2}(Σ) is merely L^2 orthogonal to f_1, . . . , f_k, then g can be approximated by compactly supported smooth functions that are L^2 orthogonal to f_1, . . . , f_k. This implies Q(f, g) = 0. Next we show Q(f, f_j) = 0. We use the fact that each f_j is in W^{1,2}, so it is a weak limit of a sequence of eigenfunctions of J on B_{R_i}(0) ∩ Σ. The statement now follows from a cutoff argument similar to the one before. Hence Q(f, g) = 0 for g in the span of f_1, . . . , f_k and in its L^2 orthogonal complement; therefore Q(f, g) = 0 for each g in W^{1,2}(Σ). The space of bounded harmonic functions on Σ The statement in this section can be found in [CSZ97] and [MX01]. We include the proof here because bounded harmonic functions are essential in the construction of test functions for the stability operator. Proposition 3.1 ([CSZ97, MX01]). Let n ≥ 3 and let Σ^{n−1} be a complete minimal hypersurface in R^n with k ends. Then there are k linearly independent bounded harmonic functions with finite Dirichlet energy. Proof. Denote the ends of Σ by E_1, . . . , E_k. For each i and each large R, let f_{i,R} be the harmonic function on B_R(0) ∩ Σ with boundary value 1 on ∂B_R(0) ∩ E_i and 0 on the boundary portions of the other ends. By the maximum principle, 0 < f_{i,R} < 1 in B_R(0) ∩ Σ. Using Schauder theory we get a uniform bound on |f_{i,R}|_{C^{2,α}(K)} for each compact K ⊂ B_R(0). Therefore we may use Arzela-Ascoli to get a subsequence {f_{i,R}}_R converging to f_i in C^{2,β} (β < α). For R_1 < R_2, the function f_{i,R_1} can be extended with constant values to a function on B_{R_2}(0) ∩ Σ. Since harmonic functions minimize the Dirichlet energy among functions with the same boundary values, the Dirichlet energies of the f_{i,R} are non-increasing in R and hence uniformly bounded. Therefore the function f_i is a bounded harmonic function with finite Dirichlet energy. Next we prove f_i is not a constant function. Suppose the contrary. One checks that f_i must then be identically 0 or 1. Without loss of generality we assume f_i ≡ 1 (otherwise consider 1 − f_i instead). Choose some l ≠ i. Now take any smooth function ϕ which is identically 1 on E_l and 0 on all other ends. Then f_{i,R}ϕ is compactly supported. By the Michael-Simon Sobolev inequality and the fact that ∇ϕ is compactly supported, one obtains a bound on the volume of E_l ∩ B_R(0) that is uniform in R, contradicting the fact that the l-th end E_l has infinite volume. By similar reasoning, the functions f_1, . . . , f_k are linearly independent. Otherwise we would have u = c_1 f_1 + . . . + c_k f_k = 0 for constants c_i that are not all zero. However, u is the C^{2,β} limit of harmonic functions on compact pieces taking c_1, . . . , c_k as boundary values on the respective ends. An argument similar to the one before shows that such a u cannot be constant.
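A toy version of the construction in this section: on a finite grid standing in for $B_R(0)\cap\Sigma$, the Dirichlet problem with boundary data 1 on one boundary portion (the chosen end) and 0 on the rest can be solved by simple relaxation, and the maximum principle forces the interior values into $(0,1)$, exactly as for $f_{i,R}$ above. This is only a discrete analogue of the finite-radius step; it says nothing about the limit $R\to\infty$ or about linear independence.

```python
# Discrete analogue of f_{i,R}: harmonic in the interior of a grid, boundary
# value 1 on one edge (the "i-th end") and 0 on the remaining boundary.
import numpy as np

n = 40
f = np.zeros((n, n))
f[:, -1] = 1.0                               # right edge plays the role of E_i

# Jacobi sweeps: interior values become the average of their 4 neighbours.
for _ in range(5000):
    f[1:-1, 1:-1] = 0.25 * (f[:-2, 1:-1] + f[2:, 1:-1] + f[1:-1, :-2] + f[1:-1, 2:])
    f[:, -1] = 1.0                           # re-impose boundary data
    f[0, :], f[-1, :], f[:, 0] = 0.0, 0.0, 0.0

interior = f[1:-1, 1:-1]
print("interior range:", float(interior.min()), "-", float(interior.max()))  # strictly inside (0, 1)
```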
We see df 1 + . . . + df k = 0. Suppose df 1 , . . . , df j are linearly dependent for some j < k. Then c 1 df 1 + . . . + c j df j = 0. Therefore c 1 f 1 + . . . + c j f j is a constant function on Σ. Combine f 1 + . . . + f k = 1, we get a nontrivial linear combination of f 1 , . . . , f k that equals 0, contradicting their linear independence of f 1 , . . . , f k . If b 1 (Σ) > 0, then we have b 1 (Σ) linearly independent closed non-exact harmonic 1-forms η 1 , . . . , η b1(Σ) . Then the set {df 1 , . . . , df k−1 , η 1 , . . . , η b1(Σ) } is a set of k + b 1 (Σ) − 1 linearly independent closed harmonic 1-forms on Σ. Now let us fix some notations. For any minimal hypersurface Σ n−1 in R n , let ∇ be the Euclidean connection on R n and ∇ be the Levi-Civita connection of the induced metric on Σ. Denote the Hodge Laplacian on p-forms by ∆ = −(dδ + δd). Suppose Σ is two-sided with a unit normal vector ν. Take two vector fields X, Y on Σ. Let S be the shape operator defined by S(X) = −∇ X ν, and let A be the second fundamental form defined by A(X, Y ) = S(X), Y . For two parallel vectorsW ,V in R n , let W, V be their projection on Σ. Let ω be a harmonic 1-form on Σ and ξ its dual vector field. With these notations, we have (1) ∇ X W = W , ν S(X), Proof. To prove (5), we see that We use (3) and (4) to simplify the first two terms. For the third term, we have where the first equality is true by (2), and the third equality by (1), the fourth equality by the fact that S is symmetric. Using the above equality and (3), (4) we get (5). Now we are ready to prove Theorem 1.1. Take x 1 , . . . , x n to be the standard coordinates of R n . The vector fieldsV 1 = ∂ ∂x1 , . . . ,V n = ∂ ∂xn are parallel vector fields in R n . Their projections onto Σ are denoted by V 1 , . . . , V n . Define the vector fields X ij = V i , ν V j − V j , ν V i . For a harmonic 1-form ω on Σ dual to a vector field ξ, define the functions f ω,ij = ω, X ij = V i , ν V j , ω − V j , ν V i , ω for 1 ≤ i < j ≤ n. It is clear that f ω,ij is in L 2 . By lemma 4.2, Note that if Σ has regular ends, then near infinity we have decay rates |A(x)| ≈ |x| −n . So each term of the right hand side of (4.1) is square integrable. Suppose index(Σ) = I is finite. By Proposition 2.4, there exist I W 1,2 smooth eigenfunctions ϕ 1 , . . . , ϕ I of the Jacobi operator ∆ + |A| 2 . Consider the linear system on ω Denote by l the dimension of the space of L 2 harmonic 1-forms on Σ. Now the linear system (4.2) has I · n(n−1) 2 equations. If l > I · n(n−1) 2 then there exists at least l − I · n(n−1) 2 linearly independent harmonic 1-forms for which (4.2) is satisfied by f ω,ij , for each pair of i, j with 1 ≤ i < j ≤ n. For each such ω, by proposition 2.4, Q(f ω,ij , f ω,ij ) ≥ 0 for each pair of 1 ≤ i < j ≤ n. On the other hand, For the first summand, Therefore each Q(f ω,ij , f ω,ij ) is equal to zero. By proposition 2.4, f ω,ij is in the kernel of Jacobi operator. To conclude the proof of Theorem 1.1, we prove the l − n(n−1) 2 I linearly independent harmonic 1-forms generate at least 2 n(n−1) l−I linearly independent functions f ω,ij . Then nullity(Σ) ≥ 2 n(n−1) l − I ≥ 2 n(n−1) (#ends + b 1 (M ) − 1) − index(Σ). In fact, we have: Proposition 4.3. Let H be an h dimensional subspace of L 2 harmonic 1-forms on Σ. Then the set {f ω,ij : ω ∈ H , 1 ≤ i < j ≤ n} has at least 2 n(n−1) h linearly independent L 2 smooth functions on Σ. Proof. Define a map F : We will prove that F is injective. Suppose ω is a L 2 harmonic 1-form such that SinceV 1 , . . . 
,V n is an orthonormal basis for R n , V, ω = c V , ω for each parallel vector fieldV in R n and its projection V on Σ. In particular, at a point p ∈ Σ, chooseV 1 = ν(p) and V 2 , . . . , V n be a basis for T p Σ, we get c = 0 and ω(p) = 0. Denote by p ij the projection of ⊕ For this particular (i, j), the space of functions spanned by {f ω,ij : ω ∈ H } are at least 2 n(n−1) h dimensional. Remark 4.4. Let us look closer at the equality case in the proof of Theorem 1.1. For any harmonic 1-form ω with f ω,ij , 1 ≤ i < j ≤ n, in the kernel of Jacobi operator, we have 0 = − 1 2 (∆ + |A 2 |)f ω,ij = ∇ S(Vi) ω, V j − ∇ S(Vj ) ω, V i . Locally, every ω can be written as dφ for some smooth harmonic function φ. Then ∇ S(Vi) ω, V j = ∇ S(Vj ) ω, V i is equivalent to Hess φ(S(V i ), V j ) = Hess φ(S(V j ), V i ). Since {V i } is a basis for T Σ, we conclude that Hess φ(S(X), Y ) = Hess φ(S(Y ), X) for every pair of tangent vectors X, Y . Now taking a local orthonormal frame of principal vectors on Σ, we see that the above condition is equivalent to Hess φ being diagonalized by principal vectors of Σ. We are able to bound the dimension of the space of such functions φ when n = 4. Rigidity case We prove that for Σ 3 in R 4 , the space of L 2 harmonic 1-forms on Σ satisfying (∆ + |A| 2 )f ω,ij = 0, for each pair of (i, j), is at most 6 dimensional. This is the result of two geometric properties of minimal submanifolds. Proof. Take an orthonormal frame in a small neighborhood of the point p consisting of principal vectors e 1 , . . . , e n−1 with corresponding principal curvatures λ 1 , . . . , λ n−1 (all distinct), respectively. Then for any function φ with Hess φ(S(X), Y ) = Hess φ(S(Y ), X), letting X = e i and Y = e j for i = j, we get Hess φ(e i , e j ) = 0. Also ∆φ = 0 implies i Hess φ(e i , e i ) = 0. Now Σ is an analytic manifold since it is a minimal hypersurface of an analytic manifold. By the unique extension theorem, any harmonic function is uniquely determined by all its derivatives at one point p. We prove that if a harmonic function φ satisfies the extra condition that Hess φ commutes with the shape operator S, all the covariant derivatives ∇ j φ(p) are uniquely determined by φ(p), ∇ e1 φ(p), . . . , ∇ en−1 φ(p), ∇ 2 e1,e1 φ(p), . . . , ∇ 2 en−2,en−2 φ(p), so the dimension of all such functions is at most 2(n − 1). Case 1 Not all of i 1 , . . . , i j 's are equal. Then after switching the order of taking derivatives finitely many times, we will get an expression of ∇ j Every time we switch two consecutive indices i α , i α+1 , the difference we get is a curvature term depending linearly on lower order derivatives of φ at p. By assumption all lower order derivatives of φ at p are zero. On the other hand, since Therefore, in this case, ∇ j ei 1 ,ei 2 ,...,ei j φ = 0. Case 2 i 1 = i 2 = . . . = i j are all equal. Without loss of generality we may assume ..,e1,ej ,ej φ. From case 1 we know ∇ j e1,...,e1 φ = 0. The next theorem shows the assumptions of the previous proposition holds for general minimal hypersurfaces in R 4 . Theorem 5.2. Suppose Σ 3 ⊂ R 4 is a connected complete minimal hypersurface, with the property that at each point there are two equal principal curvatures. Then Σ is either a hyperplane or a catenoid. Proof. If the principal curvature at every point is 0, then Σ is a hyperplane. We assume that there is an open subset U of Σ such that principal curvatures of Σ in U are given by λ, λ, −2λ for some nonzero λ. Denote∇ the connection in R 4 , and ∇ the connection on Σ. 
Choose an orthonormal frame {e 1 , e 2 , e 3 } locally in U , and let N be its unit normal vector in R 4 , such that∇ e1 N = λe 1 ,∇ e2 N = λe 2 ,∇ e3 N = −2λe 3 . We first prove that span{e 1 , e 2 } is an integrable distribution. For this, let's show [e 1 , e 2 ] is also a principal vector with curvature λ. Denote by Γ to be the integral submanifold of the distribution spanned by {e 1 , e 2 }. From the above we also see that λ is constant along Γ. We next prove that Γ is part of a sphere. To see this, we first note that∇ e1 e 3 has no component in e 3 and N , and∇ e3 e 1 has no component in e 1 and N . Therefore we may assumē ∇ e1 e 3 = ae 1 + be 2 (5.2)∇ e3 e 1 = ce 2 + de 3 . On the other hand, we havē We now have∇ ei N = λe i and∇ ei e 3 = αe i for some constant λ, α along Γ. Viewing Γ as a vector valued function X, we see that X − 1 λ N and X − 1 α e 3 are both constant vectors (when α = 0 the second conclusion is X lies on a plane). Hence X lies on the intersection of two 3-spheres (when α = 0), or the intersection of a 3-sphere and a hyperplane (when α = 0). In either case, Γ is a part of a 2-sphere. The above proves a foliation structure of Σ by spheres Γ. By a result of Jagy (corollary of section 4 in [Jag91]), a connected minimal hypersurface of R 4 with an open set foliated by spheres possesses SO(3) symmetry. Hence Σ is a 3-dimensional catenoid. Remark 5.3. The same proof directly gives the higher dimensional analogue of theorem 5.2. Namely, if Σ n−1 is a connected minimal hypersurface in R n with the property that at every point on Σ there is a principal curvature with multiplicity n − 2, then Σ is a higher dimensional catenoid. 6. The space of index 1 minimal hypersurfaces in R 4 In this section we consider the space of index 1 minimal hypersurfaces in R 4 with Euclidean volume growth. For minimal hypersurfaces with Euclidean volume growth in R n , 4 ≤ n ≤ 7, finite index is equivalent to finite total curvature. Moreover, theorem 1.3 implies a control of the volume growth rate in terms of index. Consider the set S = {Σ 3 ⊂ R 4 : Σ is a complete connected oriented minimal hypersurface with index 1 and Euclidean volume growth, |A Σ |(0) = max |A Σ | = 1.} Then the volume growth rate of every surface in S is uniformly bounded. That is, for every Σ ∈ S, R > 0, For example, we may take η = 15. Now prove the space S is compact in the smooth topology. Take a sequence Σ j in S. We first observe that up to a subsequence (which we also denote by Σ j ), there are two modes of convergence. The first is by the fact that the curvature of Σ j is uniformly bounded. Therefore by Arzela-Ascoli, there is a subsequence converging locally graphically in C 1,α to some Σ. From standard minimal surface theory, this also implies the convergence is locally smooth. The second mode of convergence is that, since we have a uniform density bound, the varifolds determined by Σ j have uniformly bounded local mass. By Allard's compactness theorem, a subsequence converges as varifolds to some Σ ′ . By the constancy theorem, Σ ′ is supported on Σ. As a result, we get that Σ j converges to Σ both locally smoothly and in the varifold sense. Now the varifold convergence implies that the second variation of Σ j converges to Σ. In particular, the index of Σ cannot be larger than 1 (otherwise for large j, there will be at least two negative eigenfunctions for the Jacobi operator on Σ j ). However, from smooth convergence we know |A Σ |(0) = |A Σj |(0) = 1, hence by [SZ98], Σ cannot be stable. 
Therefore we conclude that Σ has index 1. It remains to prove that Σ is connected. The argument we use here is similar to [CKM15]. The following observation of White asserts that rapid curvature decay implies simple topology, namely:

Proposition 6.1 ([Whi87]). Let Σ^{n−1} be a minimal hypersurface in Euclidean space. Assume that |A_Σ|(x) · |x| ≤ 1/4 for all x ∈ B_R(0)^c, and that Σ intersects ∂B_R(0) transversely in k connected components, each one diffeomorphic to S^{n−2}. Then each connected component of Σ − B_R(0) is diffeomorphic to S^{n−2} × [0, 1), so Σ − B_R(0) has exactly k such components.

We briefly mention the proof of this proposition. Under the curvature condition, |x|² is a Morse function with no critical point in Σ − B_R(0). Therefore, by Morse theory, each connected component of Σ − B_R(0) is diffeomorphic to S^{n−2} × [0, 1).

Now we go back to the proof of our compactness theorem. Each Σ_j and Σ have finite index and Euclidean volume growth, hence they are regular at infinity. So there are constants R_j such that Σ_j intersects ∂B_{R_j}(0) transversely and |A_{Σ_j}| · |x| ≤ 1/4 for x ∈ Σ_j − B_{R_j}(0). Assume also that each R_j is the least possible such choice. We claim that the radii R_j are uniformly bounded. Assuming the claim, the connectedness of Σ follows. Indeed, let R be a uniform bound, so that R_j < R for every j. Then for each j, by Proposition 6.1, Σ_j ∩ B_R(0) is connected. Now the varifold convergence Σ_j → Σ implies Hausdorff convergence on compact sets. Therefore Σ ∩ B_R(0) is connected. From this and Proposition 6.1 we see that Σ is connected.

Now we prove the claim. Suppose the contrary. Then, by taking a further subsequence (which we still denote by Σ_j), R_j → ∞. Consider the rescaled sequence Σ̄_j = (1/R_j) Σ_j. The sequence Σ̄_j has the same density at infinity as Σ_j, hence by Allard's compactness theorem a subsequence converges to some varifold Σ̄. By the choice of R_j we see that the curvature estimate |A_{Σ̄_j}| · |x| < 1/4 holds for x ∈ Σ̄_j − B_1(0). By Proposition 6.1, each Σ̄_j ∩ B_1(0) is connected. Therefore Σ̄ is connected. Now the curvature of Σ̄_j blows up at 0, so the convergence cannot be smooth at {0}. Since each Σ̄_j has index 1, the limit Σ̄ is regular everywhere, and the convergence can fail to be smooth at no more than one point. So {0} is the unique point where the convergence Σ̄_j → Σ̄ is not smooth. By Allard's theorem the convergence cannot be of multiplicity 1. A statement in [Sha15] then guarantees the existence of a nonzero Jacobi field on Σ̄, i.e. that Σ̄ is stable. Therefore Σ̄ is a plane through 0. By the choice of R_j there exists some x_j ∈ Σ̄_j ∩ ∂B_1(0) such that |A_{Σ̄_j}|(x_j) = 1/4. Taking a subsequence of x_j converging to some x ∈ Σ̄ ∩ ∂B_1(0), we get a contradiction, since |A_{Σ̄}|(x) = 0 and Σ̄_j → Σ̄ smoothly near x. The claim is proved.

Theorem 1.4 roughly says that an index 1 minimal hypersurface in R⁴ cannot have two necks that are far apart, as in Example 1.6. This, together with the following corollary, can be viewed as evidence that the 3-dimensional catenoid is the unique index 1 minimal hypersurface in R⁴.

Corollary 6.3. There exists a constant R such that for any minimal hypersurface Σ³ in R⁴ with index 1 and Euclidean volume growth, normalized so that |A_Σ|(0) = max |A_Σ| = 1, the set Σ − B_R(0) is a union of minimal graphs.

Finite diffeomorphism types of minimal hypersurfaces in R⁴ with Euclidean volume growth and bounded index

Recently, Chodosh-Ketover-Maximo proved a finiteness result for the diffeomorphism types of minimal hypersurfaces Σ^{n−1} ⊂ R^n, 4 ≤ n ≤ 7. Recall that a minimal hypersurface Σ^{n−1} ⊂ R^n with Euclidean volume growth and finite index must be regular at infinity.
By the monotonicity formula, the volume growth rate lim_{R→∞} Vol(Σ ∩ B_R(0))/(ω_{n−1} R^{n−1}) is equal to the number of ends. When n = 4, Theorem 1.3 provides an upper bound on the number of ends in terms of the index. In fact, without assuming a uniform volume growth rate, a minimal hypersurface Σ³ ⊂ R⁴ with Euclidean volume growth and index I must satisfy Vol(Σ ∩ B_R(0)) ≤ (6I + 7) ω_{n−1} R^{n−1} for all R > 0.
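For ease of reference, the quantitative bounds used in this part of the paper, all of them quoted earlier in the text, can be gathered in a single display; here l denotes the dimension of the space of L² harmonic 1-forms on Σ and I its index. Nothing new is derived in this restatement.

```latex
% Restatement of the bounds quoted above (no new result).
\[
  \operatorname{nullity}(\Sigma)
    \;\ge\; \frac{2}{n(n-1)}\,l \;-\; I
    \;\ge\; \frac{2}{n(n-1)}\bigl(\#\{\text{ends}\} + b_1(\Sigma) - 1\bigr) \;-\; I,
\]
\[
  \lim_{R\to\infty}\frac{\operatorname{Vol}(\Sigma\cap B_R(0))}{\omega_{n-1}R^{\,n-1}}
    = \#\{\text{ends}\},
  \qquad
  \operatorname{Vol}(\Sigma\cap B_R(0)) \le (6I+7)\,\omega_{n-1}R^{\,n-1}
  \quad\text{for } \Sigma^3\subset\mathbb{R}^4 .
\]
```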
54573630
s2orc/train
v2
2012-11-05T23:05:46.000Z
2012-11-05T00:00:00.000Z
Ultralow frequency acoustic resonances and its potential for mitigating tsunami wave formation Bubbles display astonishing acoustical properties since they are able to absorb and scatter large amounts of energy coming from waves whose wavelengths are two orders of magnitude larger than the bubble size. Thus, as the interaction distance between bubbles is much larger than the bubble size, clouds of bubbles exhibit collective oscillations which can scatter acoustic waves three orders of magnitude larger than the bubble size. Here we propose bubble-based systems which resonate at frequencies that match the time scale relevant for seismogenic tsunami wave generation and may mitigate the devastating effects of tsunami waves. Based on a linear approximation, our naïve proposal may open new research paths towards the mitigation of tsunami wave generation. Tsunami waves are among the most devastating natural events. Recent examples, triggered by submarine earthquakes of magnitude M_W > 9.0, have been covered by worldwide media, showing the outcomes of such natural disasters and the human and economic tragedy thereafter. Current measures and efforts are focused on forecasting, early warning systems, and occasionally also on coastal tsunami run-up mitigation [1]. Paradoxically, such destruction is generated by a very small fraction of the strain energy released by the faulting, roughly speaking less than 1% [2,3]. One may then ask: could it be possible to mitigate the generation of seismogenic tsunami waves? Any mechanism conceived for this purpose acquires enormous proportions. Mitigation of coastal waves would imply huge barriers over extensive zones that are not affordable in the case of large tsunamis. The deployment of arrays of solid periodic resonators has been proposed as a way to block water waves [4]. However, applied at tsunami scales this would require very large rigid scatterers that are hardly feasible in real situations. In a similar way to what happens in the sky, where kilometer-size clouds composed of water droplets can be seen thanks to light being scattered by the tiny droplets, we propose the use of air bubble metaclouds immersed in water to affect the generation of tsunamis (see Fig. 1). Thus, instead of dealing with a huge system composed of large parts, our proposal takes advantage of properly arranged small building blocks that could collectively resonate at the frequency scale a tsunami requires. Bubbles have unique properties due to the large mismatch between the mechanical compressibilities of air and water [5,6].
A bubble (Fig. 1, stage 0) having equilibrium radius R_0 displays a resonance [7] whose angular frequency is given by ω_0 = R_0^{-1} √(3γp_0/ρ), where ρ is the water density, γ is the specific heat ratio of air, and p_0 denotes the static pressure in water. Taking the sound speed in water as c = 1500 m/s and atmospheric pressure, one easily finds that λ_0/2R_0 ≈ 200, which means that the wavelength in water at the bubble resonance is 200 times larger than the bubble size. This deep subwavelength behavior makes bubbles a perfect building block for a metamaterial [8].

Figure 1. Diagram depicting the metacloud approach. The building block (stage 0) consists of small air (gas) bubbles of radius R_0 which, arranged together, form a spherical cloud (stage 1) of radius R_1 with a gas filling fraction α_1. A number N_2 of clouds are then arranged within a disk (stage 2) of radius R_2 and thickness 2R_1 with a cloud filling fraction α_2, forming a metacloud. From the resonance formulae, n, j = 1, 2, . . .

In order to figure out the length scale of a bubble resonating with a tsunami wave, we should estimate the order of magnitude of the time scale of the seismic movement responsible for the tsunami formation. Long gravitational surface waves (tsunami waves in the open ocean) in the linear regime propagate with a speed c_g = √(gh) over a water layer of depth h (g = 9.8 m/s² is the acceleration due to gravity). Choosing as a target the time interval that this wave requires to propagate a distance equal to h, we obtain frequencies on the order of ω_T/2π ≈ 0.075 Hz for h = 5 km. Using the resonance frequency of a hypothetical bubble tuned at ω_T we obtain a radius R_T = ω_T^{-1} √(3γp_0/ρ) ≈ 43 m. Although it has a deep subwavelength resonance, this enormous bubble suffers from several practical drawbacks such as buoyancy, which might be countered by using huge gas balloons attached to the sea bottom. However, the pressure gradient appearing at increasing sea depths should strongly affect the sphericity of such a gas balloon, modifying its scattering properties. Figure 1 shows the strategy we have followed to reach a resonance frequency of 0.075 Hz using a small bubble of a few millimeters as the building block (stage 0). In the next stage (stage 1) a bubble cloud is considered, and eventually the metacloud stage (stage 2) is reached. Bubble clouds can be found in nature and have been studied as a source of noise in underwater environments [9,10]. In this context, collective modes of bubbles in clouds (Fig. 1, stage 1) [6] have been observed, owing to the large acoustical scattering and absorption cross sections (σ_s and σ_a, respectively) of single bubbles, which allow long-range interactions across the cloud. Defining the acoustical interaction radius as √(σ/π), we can compare a spherical cloud with bubbles (or balloons) of different sizes as in Fig. 2. Following [11] and considering thermal effects and damping mechanisms, we calculate the scattering and absorption interaction radii in the linear regime for time-harmonic excitation, i.e. the bubble radius can be written as R = R_0(1 + ϕ exp(iωt)) when the complex oscillation amplitude |ϕ| << 1. Then the bubbly water of the cloud [12] can be treated as an effective homogeneous medium coupled to the surrounding liquid. A cloud of R_1 = 37 m formed by bubbles of R_0 = 3.7 mm with a gas filling fraction α_1 = 0.024 clearly shows a collective resonant mode at R_1/λ ≈ 0.01 (Fig. 2, see labels).
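The single-bubble numbers used so far are easy to reproduce. The minimal Python sketch below evaluates the resonance formula ω_0 = R_0^{-1}√(3γp_0/ρ), the wavelength-to-size ratio at resonance, the long-wave speed c_g = √(gh), and the radius R_T of a single bubble tuned to 0.075 Hz; the constants (γ = 1.4, p_0 = 101325 Pa, ρ = 1000 kg/m³, c = 1500 m/s) are assumed values consistent with the text, not quantities taken from the cited calculations.

```python
import math

# Assumed physical constants, consistent with the values quoted in the text.
GAMMA = 1.4          # specific heat ratio of air
P0 = 101325.0        # static pressure near the surface, Pa
RHO = 1000.0         # water density, kg/m^3
C_WATER = 1500.0     # sound speed in water, m/s
G = 9.8              # gravitational acceleration, m/s^2

def minnaert_omega(r0):
    """Angular resonance frequency of a bubble of equilibrium radius r0 (m)."""
    return math.sqrt(3.0 * GAMMA * P0 / RHO) / r0

# Resonance frequency of the 3.7 mm building-block bubble (stage 0).
r0 = 3.7e-3
print(f"f0(R0 = 3.7 mm) ~ {minnaert_omega(r0) / (2 * math.pi):.0f} Hz")

# Wavelength-to-diameter ratio at resonance, independent of R0:
# lambda0 / (2 R0) = pi * c / sqrt(3 gamma p0 / rho).
ratio = math.pi * C_WATER / math.sqrt(3.0 * GAMMA * P0 / RHO)
print(f"lambda0 / (2 R0) ~ {ratio:.0f}")

# Shallow-water (long-wave) speed over a 5 km deep ocean.
h = 5000.0
print(f"c_g = sqrt(g h) ~ {math.sqrt(G * h):.0f} m/s")

# Radius of a single bubble tuned to the tsunami-scale frequency 0.075 Hz.
f_T = 0.075
omega_T = 2.0 * math.pi * f_T
R_T = math.sqrt(3.0 * GAMMA * P0 / RHO) / omega_T
print(f"R_T ~ {R_T:.0f} m")
```

With these assumed constants the script gives a wavelength-to-diameter ratio of roughly 230 (the same order as the ≈ 200 quoted above), c_g ≈ 221 m/s, and R_T ≈ 44 m, in line with the estimates in the text.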
As can be expected, the cloud interaction radius curves are orders of magnitude away from those of their building block. Also, the maximum interaction radius of the cloud overcomes those provided by large bubbles having either the same radius R_1, or a radius R_g = 10.6 m, which contains the same volume of gas as the cloud. Although both large bubbles have lower resonant frequencies, the interaction radius is larger for the cloud, and it depends strongly on the radius R_1. Taking (√(σ/π))_max for different cloud radii R_1 while keeping R_1/R_0 = 10^4 and the same gas filling fraction for the cloud (α_1 = 0.024), we have plotted in Fig. 3 the maximum acoustical interaction radius of a cloud as a function of R_1. For the sake of comparison, and in the same manner as in Fig. 2, we also depict the cases of a single bubble (or balloon) of different sizes. It can be seen that above a critical radius (R_1 ≈ 10 m) the cloud has a higher interaction radius than the corresponding large bubbles. The acoustic results shown above allow us to treat the low-frequency modes of a cloud of bubbles as those of a single bubble, i.e. the lowest resonant mode of a cloud is similar to the fundamental breathing mode of a single bubble. Also, the collective mode takes place when the wavelength in water is at least 50 times larger than the cloud radius R_1 = 37 m [12], which corresponds to a length of around 1.85 km. This wavelength can be further enlarged to the tsunami length scale by increasing the size of the bubble cloud. Alternatively, we can move further up to larger scales using the spherical cloud as a building block for a two-dimensional (2D) metacloud (stage 2 in Fig. 1) and expect to see similar effects. In fact, using the multiple scattering method [13,14] we are able to observe the interaction between clouds within a metacloud, as depicted in Fig. 4. Expressing the pressure at the n-th cloud as p_n = p_0 + φ_n exp(iωt), Re{φ_n} gives us information concerning the pressure at t = 0, as depicted in Fig. 4(a), (b), (d), and (e). Fig. 4(c) and (f) show the time-averaged pressure at the n-th cloud, |φ_n|, as a function of its distance to the metacloud center normalized by the metacloud radius, r/R_2. These two modes of the metacloud are obtained by arranging N_1 = 1005 clouds to form either square or random two-dimensional arrays of radius R_2 = 1.7 km having a cloud filling fraction α_2 = 0.3. Both modes have the same shape regardless of the way the clouds are arranged (Fig. 4(a) and (b); (d) and (e)), which can be directly observed in Fig. 4(c) and (f). In addition, there is good qualitative agreement between the mode shape for spherical bubble clouds, for which φ(r) = sin(kr)/(kr) as derived in [15] (k is the wavenumber in the bubbly effective medium), and the present results for a circular metacloud, as shown in Fig. 4(c) and (f). Thus, our results predict the existence of metacloud collective modes built from large bubble clouds whose acoustical characteristics are comparable to, or even better than, those of their large-bubble counterparts. Consequently, a suitable spatial distribution of tiny bubbles of 3.7 mm in size would be able to collectively oscillate at the low frequencies (0.075 Hz) of tsunamis. In summary, within the linear approximation and using the acoustical approach, we predict the existence of collective oscillations of metaclouds. This phenomenon could be used to scale down the resonant frequency of a system so as to target time scales which are characteristic of the generation of seismogenic tsunamis.
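To make the radial mode shape quoted above concrete, the short sketch below simply tabulates φ(r) = sin(kr)/(kr) on a normalized radius grid; the value chosen for kR is purely illustrative and is not taken from the multiple-scattering calculation.

```python
import math

def mode_profile(r_over_R, kR):
    """Normalized breathing-mode profile sin(kr)/(kr), evaluated at r = (r/R) * R."""
    x = kR * r_over_R
    return 1.0 if x == 0 else math.sin(x) / x

# kR is an assumed illustrative value, not a fitted parameter from the paper.
kR = math.pi / 2
for i in range(6):
    r_over_R = i / 5.0
    print(f"r/R = {r_over_R:.1f}  phi = {mode_profile(r_over_R, kR):+.3f}")
```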
Our proposal concerns a naïve approach as it avoids important ingredients such as bubble cavitation and nonlinear effects in clouds. An extended discussion on the role of incompressibility in the formation of seismogenic tsunami waves and metaclouds, the nonlinear behavior of bubbles and bubble clouds as well as some notes on the feasibility of the metacloud approach are given in the Appendix. We hope this work could open a new path towards tsunami gener-ation mitigation, which, to the best of our knowledge, is not yet included in any tsunami-related agenda [1]. Furthermore, our study could stimulate further investigation towards experimental demonstration of the collective oscillations of bubble metaclouds and the development of more realistic and quantitatively accurate theoretical models that may lead to a feasible strategy to mitigate tsunami generation. We thank E. Economou, J. Garcia de Abajo, and R. Alvarez for the critical reading of the manuscript. This work has been supported by the Spanish MICINN MAT2010-16879, Consolider CSD2007-00046 and Generalitat Valenciana PROMETEO 2010/043. F.M. conceived the idea of bubbles influencing tsunami wave generation. H.E. performed the calculations and developed the metacloud concept. F.M. and H.E. analyzed the data and wrote the paper. Appendix Born approximation The effects of the bubble cloud on the acoustic wave propagation are difficult to deal with even within the linear regime. Born approximation, which is assumed in the multiple scattering formulation, limits the amplitude of the scattered wave to be negligible in comparison to the incident wave. In bubble clouds, however, this is not always the case and the model fails in giving quantitative predictions even at low filling fractions (α 1 ∼ 0.01) [16]. The power balance of the sum of the scattered and the absorbed power over the incident power Π T /Π i deviates from unity as the Born approximation fails. However, the position of the peaks where the power balance is not preserved corresponds to the frequencies where collective modes appear. We have tested this behavior for clouds and expect the same to hold for metaclouds (see Fig. 5). Thus, we obtain a qualitative estimation of metacloud collective modes depicted in Fig. 4 of the paper. More sophisticated methods have been developed for the one-dimensional case [17] and to study the collective modes in microbubble clouds [18] However, further work would be required to obtain quantitative predictions. Compressibility Tsunami waves generation by a moving sea-bottom is on its own a difficult problem. Considering the effect of compressibility in the generation of tsunami waves, the low resonant hydroacoustic mode of the water column at h = λ/4 can affect the transmission of the sea bottom displacement to the free water surface via nonlinear mechanism depending on the moving bottom velocity. In addition, the hydroacoustics mode would constitutes the only tsunami generation mechanism in the absence of sea bottom residual displacement [19,20]. A distribution of bubbles (cloud or metacloud) located at a significant depth [21] could affect this hydroacoustic modes. However, for slow sea-bottom velocities and large generation areas compared with the water depth, the water compressibility can be neglected and then the generation of tsunami waves can be understood within the incompressible water framework [22,23]. 
One may think that as we obtained our results from the acoustic approximation and compressibility is a fundamental condition for acoustic waves to exist, metacloud and bubbles in general have no chance to affect tsunami generation when it is governed by incompressible mechanisms. However, due to a) the large differences between the compressibility of air and water and b) due to the deep subwavelength nature of the resonances studied, the incompressibility of water is also present in our proposal. The well known Rayleigh-Plesset equation, which is at the core of any study on air bubbles Moreover, in the spherical cloud model developed by d'Agostino and Brennen [12] incompressibility is recalled when deriving the scattering and absorption cross sections. We rederived their results in the linear regime including water compressibility in the boundary conditions and the only difference between their derivation and ours lies on (i2πR 1 /λ + 1) factors. Provided that R 1 /λ << 1 (deep subwavelength regime) the incompressible approximation dominates. In simple words, as the wavelength is so large compared to the cloud (metacloud), the bubbles (clouds) only feel hydrostatic pressure across the whole ensemble. Thus, incompressible flow is not an issue that would turn off the collective metacloud resonance, although further research is certainly needed to know how the tsunami generation would be affected by these resonances. Nonlinearity It is well known that even for small amplitude, bubbles can display a rich nonlinear behavior [6]. Cavitating clouds have also being studied mainly in the context of hydrodynamic systems [5,6]. How nonlinearities would affect the collective oscillations of the metacloud and whether these nonlinearities would play or not a role in tsunami generation mitigation are challenging questions that should be addressed in the future. Feasibility There are several factors which come into play concerning the feasibility of the metacloud approach for tsunami generation mitigation. If small bubbles were to be chosen, polydispersity should not be an issue since the lowest order modes are only slightly affected [18]. The generation of monodisperse microbubbles has been successfully achieved in laboratory environments [24], however, another source of polydispersity will arise if the small bubbles are supposed to form clouds having tens of meters given by the static pressure difference along the cloud. Buoyancy and interaction of the clouds with ocean currents should also be considered. If instead of small bubbles, gas filled balloons ∼ 1 m were to be chosen as the building block, buoyancy could be countered by means of ballast. In this case, the sphericity of the balloon would be compromised and its effect together with the effect of the covering membrane must be taken into account. Whether the cloud/metacloud should lie near to the sea surface or at a certain depth must be considered at the light of its influence on the tsunami generation. Although our proposal considers a certain degree of randomness within the metacloud, provided that a certain global filling fraction α 2 is reached and the clouds are at a minimum distance of each other (smaller than the interaction radius), in a real deployment the appropriate geometry might be an stochastic fractal one.
231898280
s2orc/train
v2
2021-02-11T14:01:57.771Z
2021-01-29T00:00:00.000Z
Is Palm Kernel Cake a Suitable Alternative Feed Ingredient for Poultry? Simple Summary Supply of raw materials such as corn and soybean meal as livestock and poultry feeds may be limited and is a significant concern during the Covid-19 pandemic especially for the countries that depend on importation of raw materials. Consequently, the palm kernel cake has been proposed as an alternative raw material for animal feeds to reduce importation dependency. The chemical composition of palm kernel cake varies depending on the method of oil extraction. The crude fiber content of palm kernel cake is acceptable to most ruminants but is considered high for poultry. Biodegradation of palm kernel cake through solid-state fermentation can improve its nutritional quality, improving broiler health status and growth performance. Abstract Palm kernel cake (PKC), a by-product of oil extracted from palm nuts through expeller press or solvent extraction procedures is one of the highest quantities of locally available and potentially inexpensive agricultural product. PKC provides approximately 14–18% of crude protein (CP), 12–20% crude fiber (CF), 3–9% ether extract (EE), and different amounts of various minerals that feasible to be used as a partial substitute of soybean meal (SBM) and corn in poultry nutrition. Poultry’s digestibility is reported to be compromised due to the indigestion of the high fiber content, making PKC potentially low for poultry feeding. Nevertheless, solid-state fermentation (SSF) can be applied to improve the nutritional quality of PKC by improving the CP and reducing CF content. PKC also contains β-mannan polysaccharide, which works as a prebiotic. However, there is a wide variation for the inclusion level of PKC in the broiler diet. These variations may be due to the quality of PKC, its sources, processing methods and value-added treatment. It has been documented that 10–15% of treated PKC could be included in the broiler’s diets. The inclusion levels will not contribute to a negative impact on the growth performances and carcass yield. Furthermore, it will not compromise intestinal microflora, morphology, nutrient digestibility, and immune system. PKC with a proper SSF process (FPKC) can be offered up to 10–15% in the diets without affecting broilers’ production performance. Introduction The livestock and poultry industries are vital to global industries that recorded consistent growth over the last 30 years. The poultry industry has relative advantages of being simpler in management, higher productivity, and faster return on investment than other There are two methods of palm oil extraction: expeller or screw press, and solvent extraction. PKC is the result of the expeller oil extraction procedure, while the solvent extraction technique yields PKM. Extraction with solvent generally produces less residual oil than the expeller process, whereas crude protein and crude fiber are higher in solventextracted PKM [21,22]. Therefore, the nutritional values of PKC and PKM differ depending on their method of extraction [10]. More than 75% of PKC are made from cell-wall components, which made up of 35.2% mannose, 2.6% xylose, 1.1% arabinose, 1.9% galactose, 15.1% lignin, and 5.0% ash [23]. β-mannan is the main component of palm kernel by-products NSPs which is regarded as a prebiotic and is known to enhance birds' immune system and reduce pathogenic bacteria in the small intestines [10]. The nutritional profile of PKC is shown in Table 1. 
It has a low nutritional value; however, processing and conversion through SSF could significantly increase its nutritional values and make it useful for poultry [10,[22][23][24][25][26]. [27]. 1b Untreated palm kernel cake [28]. 1c Untreated palm kernel cake [29]. 2a Untreated palm kernel meal [30]. 2b Untreated palm kernel meal [25]. 2c Untreated palm kernel meal [31]. 3 Ensiled PKM; PKM was ground, sprinkled with water until wet (not dripping) then ensiled for 7 days [25]. 4 Degraded PKC; PKC was sprayed by extracts from Aspergillus niger and bags sealed for 7 days [32]. 5 Fermented PKC by Paenibacillus polymyxa ATTCC 842, for 9 days incubation period [28]. 6 Fermented PKC by Paenibacillus curdlanolyticus DSMZ 10248, for 9 days incubation period [28]. 7 Fermented PKC by Trichoderma koningii for 21 days [27]. Table 2 illustrates the amino acid contents of PKC. Digestible amino acids are essential for determining acceptable sources of protein and dietary supplements. PKC can be utilized as a source of protein as well as an energy source [6]. The CP values of PKC range between 14 and 21% and this variation in CP can be attributed to the different processing methods [33]. This level is low for starter diets for young chicks, but it is adequate for older birds that require lower protein diets. Furthermore, due to the poor amino acid content, especially essential amino acids such as lysine and methionine, the nutritional value of PKC is considered very low [26,32]. The Crude Fiber Content of PKC The fiber inclusion response depends on the source and amount of dietary fiber and dietary characteristics, likewise for the bird's physiological and health status [34]. PKC is considered as a high fiber co-product [26]. The CF content of PKC, ranging from 16-18%, is acceptable to most ruminants, but it may not be suitable if included at the high levels in poultry or pig diets. Insoluble and soluble fibers present in PKC are the main reasons for a lower nutrient digestibility in monogastric animals [35]. However, the CF content of PKC can be significantly reduced through fermentation [28,36]. Cellulose is the most significant structural component of the plant cell wall [37]. Cellulose, hemicellulose, and lignin represent approximately 20-50%, 15-35%, and 10-30% of plant cell walls respectively on a dry weight basis [38]. Different species of cellulolytic bacteria and fungi can hydrolyze lignocellulose in plant cell walls [39], as a significant fraction of lignocellulose is composed of carbohydrates. Hence it can be used as a source of renewable energy [40]. For instance, mannan degrading enzymes can be added in the broiler diet to breakdown the main polysaccharide component. This will directly improve feed digestibility and feed efficiency [10]. Besides, a combination of various fibro-lytic enzymes can also enhance the saccharification of NSPs [41]. A study has observed that the effect of PKM supplemented with enzyme contributed greater nutrient digestibility compared to enzyme-free diet, resulting in a higher body weight gains and better feed conversion ratio (FCR) among birds fed with enzyme supplemented diets (40). Hydrolysis tests showed that the yield of monosaccharides obtained represented nearly 75% of the total polysaccharides content in PKC [23]. Common monosaccharides composition of PKC includes glucose, fructose, and mannose at 154, 218, and 22.1 mg/100 g, respectively [15]. Table 3 demonstrates the mineral content of PKC. 
The ratio of calcium to phosphorus and sodium to potassium is low in diets based on PKC. It is necessary to supplement those minerals to meet most animals' nutrient requirements [42]. PKC is also a better source of Ca, Mn, Zn, and Na than groundnut cake, whereas groundnut cake is a good source for K, Mg, Fe, and P [43]. Energy of PKC The growth rate of broiler chickens requires an energy-intensive diet to sustain their growth. PKC provides 6.5 to 7.5 MJ/kg metabolizable energy for poultry ( Table 4). The total carbohydrate content of PKC is 47.71%, which is higher than that of groundnut cake (28.3%) and cocoa cake (42.1%) [6,43]. Mannan is the main component of PKC NSPs. It was found that 78% of NSPs in PKC are mannan with low galactose substitution, 12% cellulose, 3% glucuronoxylan, and 3% arabinoxylan [10]. Because of its useful properties, mannan is a biodegradable and bioactive polysaccharide that has been of interest to different sectors. Mannan can be further categorized into glucomannan, galactomannan, and galactoglucomannan based on the sugar unit types in mannan chains [44]. However, PKC is an excellent source of raw material for mannose and mannan oligosaccharide production [10,45]. Solid-State Fermentation (SSF) of PKC SSF is a biotechnological process in which microorganisms grow in solid substrates in the absence of free water. The goal of SSF is to place cultured microorganisms in direct contact with the insoluble substrate to obtain the concentration of the maximum nutrients for fermentation from the substrate [47]. SSF appears to be a possible technology for the production of microbial products. It improves the nutritional value of agriculture by-products produced by agricultural industries as a residue [48]. As a result, SSF is used widely because of its economical and practical advantages over submerged fermentation such as using of wide variety raw materials with an extensive variation of substrate composition and size, low energy expenditure, less expensive, lesser fermentation space, easier control of contamination and high reproducibility [48,49]. Biochemical Aspects of SSF One important application of SSF is the production of various enzymes such as cellulases, hemicellulases, ligninase, protease, lipase, pectinase, phytase, amylase, and xylanase which are essential enzymes required for biotransformation of PKC [19,47,50]. Besides various physicochemical factors, numerous environmental factors, for instance, temperature, moisture, pH, inoculum type, substrate, particle size, agitation and aeration, oxygen, and carbon dioxide could influence the growth and activities of microorganisms in SSF [47]. Modification in Fermented PKC (FPKC) Due to SSF Dietary fibers are heterogeneous dietary components that are not hydrolyzed by the digestive enzymes of non-ruminant animals [51]. For proper functioning of the digestive organs, poultry requires a low amount of complex fiber in their diet. The SSF of PKC produces a product that contains low hemicellulose and cellulose concentration but high protein content [19]. Microbial fermentation using bacteria or fungi has been documented to improve agricultural by-products' nutritional values by altering its composition. Findings from numerous studies suggest that both bacterial and fungal fermentation increase the total protein and decrease fiber contents of PKC [10,16,17,26,50,51]. Different bacteria, such as Bacillus amyloliquefaciens, Paenibacillus curdlanolyticus, P. polymyxa, lactobacillus, and B. 
megaterium able to degrade cellulose, hemicellulose, xylans, and mannans molecules, thus significantly improve the natural quality of PKC [12]. Lactiplantibacillus plantarum strains (especially; L. plantarum RG11, L. plantarum RI11, and L. plantarum RG14 (based on their total score of extracellular hydrolytic enzymes activates) can grow on PKC biomass. It performs synergistic secretions of various extracellular proteolytic, cellulolytic, and hemicellulolytic enzymes essential for the effective biodegradation of PKC [19]. The latest findings showed the effects of L. plantarum RI11 on different renewable natural polymers, describing the L. plantarum RI11 can be a potential candidate as lignocellulosic biomass degrader. It can produce functional extracellular cellulolytic and hemicellulolytic enzymes in rice straw, molasses, PKC, and soybean pulp [52]. On the other hand, the fermentation of PKC by Trichoderma longibrachiatum significantly increased CP from 18.76 to 32.79% and decreased cellulose levels from 28.31 to 12.11% [53]. Fermentation of PKC by Aspergillus oryzae decreased the hemicelluloses levels from 37.03 to 19.01% [54]. Lateef et al. [36] reported that fermentation of agro-waste by-products by fungal strain Rhizopus stolonifera LAU 07 under SSF, increased crude protein level from 19.7 to 26.3% and decreased crude fiber level from 22.5 to 12.5%. Yadi et al. [55] reported that fermented substrate by Trichoderma viride, containing 80% PKC and 20% rice bran, increased CP from 13.38 to 17.34% and decreased crude fiber from 30.55 to 23.67%. Nevertheless, the primary concern for fungal fermentation is the production of various mycotoxins in the substrate. Deoxynivalenol, nivalenol, zearalenone, fumonisin, vomitoxin, patulin, aflatoxin, and ochratoxin are few examples of mycotoxins which can depress the growth of animals and could be hazardous for both human and animals. The mycotoxin problem can be prevented by substituting fungi with various cellulolytic bacteria in SSF [12,18,19,56]. Briefly, microorganisms will utilize agricultural biomass as raw materials for their growth via the fermentation processes [57]. Hence, the desirable effect of microbial activity in fermented feed is caused by its biochemical activity [58]. Those microbial enzymes will break down carbohydrates, lipids, proteins, and other feed components during the fermentation of PKC, which ultimately improves the overall PKC nutritional quality [12]. Utilization of PKC as Livestock Feed Palm fibers are safe as they are pure, non-carcinogenic, free of pesticides, and have soft parenchyma cells that can be processed and produced as animal feeds [59]. PKC is one of the highest quantities of locally available and potentially inexpensive feedstuffs in many tropical countries [27] (Table 5). Limitation to Using PKC in Non-Ruminant Nutrition There is a limitation to using PKC in monogastric animal diets because of the high CF, coarse texture, and gritty appearance. Traditionally, PKC has not been used widely in pig and poultry diets. This mainly because of its unpalatability and high fiber content (150 g/kg DM). As a result, this reduces its digestibility for these animals [17]. The CF content of PKC, ranging between 16% and 18%, is considered high for non-ruminants. It may not be suitable if included at high levels in poultry or pig diets [6]. The presence of high content of NSPs in PKC prevents it from being widely used in poultry diets. Thus, SSF is employed to reduce NSPs [17,35]. 
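The before/after values cited in the preceding paragraphs can be compared at a glance. The sketch below only tabulates the figures reported in those studies and computes the relative change for each entry; it is bookkeeping over numbers already quoted in the text, not new data.

```python
# Reported changes in PKC composition after solid-state fermentation
# (values quoted in the text: organism, component, before %, after %).
REPORTED = [
    ("Trichoderma longibrachiatum", "crude protein", 18.76, 32.79),
    ("Trichoderma longibrachiatum", "cellulose", 28.31, 12.11),
    ("Aspergillus oryzae", "hemicellulose", 37.03, 19.01),
    ("Rhizopus stolonifera LAU 07", "crude protein", 19.7, 26.3),
    ("Rhizopus stolonifera LAU 07", "crude fiber", 22.5, 12.5),
    ("Trichoderma viride (80% PKC + 20% rice bran)", "crude protein", 13.38, 17.34),
    ("Trichoderma viride (80% PKC + 20% rice bran)", "crude fiber", 30.55, 23.67),
]

def relative_change(before, after):
    """Relative change, expressed as percent of the initial value."""
    return 100.0 * (after - before) / before

for organism, component, before, after in REPORTED:
    print(f"{organism}: {component} {before:.2f}% -> {after:.2f}% "
          f"({relative_change(before, after):+.0f}%)")
```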
Furthermore, PKC has different anti-nutritional factors like 0.40% tannic acid, 6.62 mg/g phytin phosphorus, 23.49 mg/g phytic acid, and 5.13 mg/g oxalate which has adverse effects on the nutritional quality of PKC [43]. PKC in Poultry Nutrition Malaysia is one of the world's largest palm oil producers with abundant PKC available throughout the year. There is a need to efficiently utilize this by-product as an alternative feed for the local poultry industry [17]. The importation cost of corn and SBM dramatically influences the price of animal feedstuff in the country, making PKC an alternative feed ingredient. To poultry farmers, the primary factor in utilizing PKC is its relatively low price to be used as one of the ingredients in poultry diets [62]. The feed cost per/kg decreases with increasing levels of PKC [1,34,63,64]. Nonetheless, the challenge of using agro-byproducts as feed ingredients for poultry is the presence of fiber components in these materials. Since poultry has a simple digestive system, the inclusion of PKC in their feeding diet is limited because of the absence of fiber digestive enzyme activities in their gastrointestinal tract (GIT) [51]. Additionally, some essential nutrients such as amino acids and energy content in the PKC may influence the feed cost. Few researchers have reported variations of optimum inclusion level of PKC in poultry rations. The use of PKC in poultry depends on the type, age, and sex of the chickens, as well as the sources and variations of oil and shell content of the PKC [6,46]. Edwards et al. [62] suggested that PKC in poultry diets should be limited to 20%. The same finding by Anaeto et al. [1] showed that broiler birds could utilize PKC based diet up to 20% without adverse effects on their production performance. Furthermore, Ugwu et al. [65] also recommended that the 20% PKC can effectively replace maize for the finisher phase of broilers resulting in a better performance. Furthermore, PKC inclusion in broiler chickens' diets improved the relative weights of immune organs and enhanced humoral immunity [66]. On the contrary, results obtained by Alshelmani et al. [17,35] showed that the inclusion of more than 10% untreated PKC in broiler diet might have adverse effects on birds' performance. These contradictory results may be contributed to the oil extraction methods from palm fruits, which led to the differences in its composition. The findings obtained by Zanu et al. [60] showed that layers could utilize PKC based diet better (up to 5 and 10% inclusion) without any adverse effects on their production. In contrast, the egg production was adversely affected consequent to 15% PKC supplementation. Effects of PKC on Broiler Growth Performance In broiler chickens, growth performance is the most important economic factor in their production. It was reported that broiler chickens could tolerate up to 40% of PKM inclusion without adverse effects when those diets were formulated based on digestible amino acids and metabolizable energy [10,67]. Furthermore, another study indicated that the inclusion of 8 and 16% PKM increased weight gain compared to 0% PKC diet, whereas weight gain was severely reduced by feeding 24% PKM diets. Meanwhile, 0 and 16% PKM diets had similar feed intake, whereas feed intake of 8% PKM diet was higher among the groups [68]. Results obtained by Alshelmani et al. [17] determined that 15% PKC in broiler feeding diet led to a significant decrease in body weight gain compared to 0 and 5% PKC. 
The finisher phase of broilers fed with 10 and 15% PKC had lower growth performance than birds fed with 0 and 5% PKC. Bodyweight gains of birds fed with 10 and 15% PKC were significantly lower than birds fed with the same levels of FPKC and control groups. The FCR was higher (2.07 and 2.16 g:g) for 10 and 15% PKC compared with the same levels of FPKC (1.83 and 1.93 g:g) and control groups (1.91 g:g), respectively. While the body weight gain was significantly lower for chickens fed with 10 and 15% PKC compared to the same levels of FPKC and control groups. Moreover, Rahim et al. [69] using diets containing 25% PKC observed the growth performance of broilers fed with untreated and treated PKC groups was significantly (p < 0.05) lower than the broilers fed the control (untreated) diet. These discrepancies of PKM and PKC findings may be due to the differences in oil extraction methods, which are solvent and expeller press that led variation in nutrient composition. Anaeto et al. [1] have reported the effect of feeding broiler during the finisher phase with 0, 10, 20, and 30% PKC. There was a significant difference in weight gain among the birds, with birds fed 20 and 30% PKC diets had higher weight gain, while feed intake and FCR were not significantly different. In contrast to another study, PKC inclusion at 5 and 10% in broilers' diet did not adversely affect the body weight, daily body weight gains, and feed intake. In contrast, the inclusion of PKC at 15 and 20% significantly reduced birds performances [66]. Zanu et al. [60] referred that PKC inclusion at 15% level in layers' diets did not affect the feed intake while significantly reduced body weight gain. A study on Muscovy ducks showed that 35% of PKC's inclusion significantly increased feed intake and reduced FCR [42]. A research conducted by Pushpakumara et al. [70] showed that weight gain of birds fed with diet containing 20% PKC, was significantly lower compared with birds fed with 10% and 15% PKC. The feed intake of birds fed with 15% PKC was significantly higher than birds fed with 0 and 5% PKC. Meanwhile, the FCR of birds fed with 20% PKC was significantly higher than all other treatments. In contrast, the FCR of birds was not significantly affected by the inclusion of PKC at 5%, 10%, and 15%. As mentioned before, the inclusion of PKC at a higher level in the poultry diet decreases nutrient digestibility due to the higher fiber content of PKC [17,64]. Kalmendal et al. [71] reported that the presence of high levels of fiber in poultry diets negatively affect the surface, width, and height of intestinal villi, and subsequently affecting nutrient utilization negatively. This reduction in nutrient digestibility may be accompanied by an increase in feed intake [42]. The birds' poor performance was observed to be a result of high concentrations of neutral detergent fiber (NDF), acid detergent fiber (ADF), CF, and NSPs in the mentioned components [17,72]. Additionally, increased fibers in poultry diets may also increase viscosity and passage of ingesta in the small intestines. Therefore, the utilization of nutrients such as CP, amino acids, and energy would be adversely affected [17]. High levels of fiber in broiler feeding diets could also negatively affect intestinal villi morphology [71]. Hence, short villi resulted in an impaired absorption due to loss of intestinal surface area and, consequently, the overall growth performance [2,51]. 
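Since the performance comparisons throughout this section are expressed as body-weight gain and feed conversion ratio (FCR), a minimal sketch of how these two quantities are computed is given below; the intake and weight values used are hypothetical and are not taken from any of the cited trials (the FCR values of 1.83-2.16 g:g quoted above come from the studies themselves).

```python
def feed_conversion_ratio(feed_intake_g, weight_gain_g):
    """FCR = feed consumed / body weight gained over the same period (lower is better)."""
    return feed_intake_g / weight_gain_g

def body_weight_gain(final_weight_g, initial_weight_g):
    """Body-weight gain over the feeding period."""
    return final_weight_g - initial_weight_g

# Hypothetical example values for one broiler (not from the cited trials).
initial_w, final_w = 450.0, 2450.0   # g
feed_intake = 3800.0                 # g consumed over the same period

gain = body_weight_gain(final_w, initial_w)
print(f"Weight gain: {gain:.0f} g")
print(f"FCR: {feed_conversion_ratio(feed_intake, gain):.2f} g feed per g gain")
```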
The increase in feed consumption by broilers feeding on higher PKC levels could be due to energy dilution in PKC that encourages chickens to consume adequate feed to meet their energy requirements [63,73,74]. An optimum level of PKC or PKM in broiler chickens' diets to enhance growth performance can be achieved through solid-state biodegradation. It has been proposed that the biomass of PKC or PKM can be treated with microbes. These microbes can produce extracellular proteolytic, cellulolytic, and hemicellulolytic enzymes to improve nutritional values [11,12,15,19,32,33,36,[75][76][77]. Additionally, as the fermented feeds improved gut health through proper microflora population and balance of metabolism [78], it may increase the utilization of nutrients like CP, amino acids, and energy. Hence, chicken growth performance could be improved [79]. Results of numerous studies [10,11,41] indicated that the body weight and feed intake of birds fed with 15% PKC were significantly lower compared to the same level of PKC + enzyme. It maybe due to the fact that the mannanase, α-galactosidase, cellulase, and various other fibrolytic enzymes in broilers' diets increase the degradation of diet and eventually increase the growth performance of birds [41,76]. Briefly, the variations in the effects of PKC on broiler growth performance was noted to be due to the different nutrient composition of PKC arising from differences in methods of oil extraction [21,22]. Effects of PKC on Carcass Yield and Internal Organs It has been known that the birds' dietary protein sources can influence the relative weights of breast, drumstick, abdominal fat, as well as their internal organs (liver, heart, spleen, gizzard and bursa). Findings of Okupe et al. [80] showed that breast weight of birds fed 0% PKC diet was significantly higher compared to those fed with 10, 20 and 30% levels of PKC, whereas drumstick weight of birds fed with 30% PKC diets was significantly higher compared to birds fed with 0, 10 and 20% PKC diet. The result obtained by Okupe et al. [80] showed that the abdominal fat weight of birds fed with 0% PKC diet was significantly higher than those fed with different levels of PKC. Chinajariyawong and Muangkeow [72] reported that the abdominal fat of birds fed on diets containing PKM or FPKM was significantly lower than the control. Alshelmani [17] showed that abdominal fat of chickens fed with 10 and 15% PKC included was higher than 5% PKC and FPKC included groups. Dietary composition and lipid metabolism can greatly influence abdominal fat [81]. The significant increase in abdominal fat for birds fed high levels PKC or FPKC attributed to the inclusion of palm oil with higher ratios in their diets than the control or low levels PKC or FPKC treatments [17]. β-mannan is the main component of palm kernel biomass NSPs [10]. It is reported that 0.5 gr/kg and 1 gr/kg mannan oligosaccharide in broilers feeding diet reduced percentage of abdominal fat in the carcass and did not significantly affect dressing percentage and liver, heart, gizzard and bursa weights of birds [82]. On the other hand, PKC at 0, 5, 10, 20, and 30% inclusion levels in broiler diet did not significantly affect birds' carcass characteristics [66,83]. Numerous other studies [17,55,77] determined that the inclusion of PKC and FPKC at 5, 10, and 15% in broiler diet also did not significantly affect their carcass characteristics. Similar findings were recorded by Bello et al. [63], Mardhati et al. [84], and Pushpakumara et al. 
[70], that PKC did not significantly affect the dressing percentage of birds. These variations of PKC effects on birds dressing percentage may be due to different ration types, nutrient content, birds breed, environmental conditions, processing, and management conditions. It was reported [80] that the liver weight of birds fed with 0% PKC diet was significantly higher than those of birds fed with 10% PKC diet, whereas the weight of heart of chickens fed 0% PKC was significantly lower than birds fed with 10, 20, and 30% PKC diet. Besides, the proventriculus weight of birds fed with 30% PKC diet was also significantly higher compared to birds fed with 10% or 20% PKC diets. Pushpakumara et al. [70] determined that liver weights of birds fed with 0, 5, 10, 15, and 20% levels of PKC were not significantly different among birds fed with varying treatment diets. Besides, Soltan [66] showed that gizzard size and relative spleen weight did not significantly increase at 5% inclusion level of PKC but increased substantially with 10, 15 and 20% inclusion levels in broiler diet compared with 0% PKC group. Zanu et al. [60] showed the same findings that the gizzard of laying chickens increased with 15% inclusion level of PKC. Okeudo et al. [83] reported that gizzard weight was significantly higher in birds fed with 10, 20, and 30% PKC diet compared with the 0% PKC group. The increase in gizzard size could be due to the high fiber content of PKC [85]. An increase in spleen's relative weight following the feeding of fungal on fermented PKC or PKM could be due to the influence of fungal toxin during SSF [72]. Commonly, the suggested inclusion levels of PKC in broiler diets do not have adverse effects on overall carcass traits. Gut Morphology The intestinal crypt is the dilation of the epithelium around the villi. The base of the crypt is continuously dividing to maintain the structure of the villi. Thus an increase in depth of the crypt creates more developed villi [2]. The height of villi and surface area is important to determine nutrient absorption potential [86]. The density and size of small intestinal villi and micro-villi are directly related to the birds' ability to absorb the nutrient [87,88]. Diet has an important influence on gut health, including effects on the proliferation of pathogenic bacteria, and it can provide either beneficial or harmful effects [51]. High fiber levels in poultry feeding diets negatively affect the surface, width, and intestinal villi height [5,71]. A fermented feed with low pH, high amount of lactic acid and acetic acid, and an increased number of lactic acid bacteria (LAB) can effectively improve gut health through intestinal microflora balance and development of intestine [89]. Alshelmani et al. [35] reported no significant difference in birds' morphology of small intestine among the different diets (5, 10, and 15% PKC and FPKC) groups. Utilization of FPKC until at level 9% with no significant effects on height and density of villi in all parts of the small intestine, whereas at 18% level, only jejunum villi were significantly lower [5]. Results obtained by Yaophakdee et al. [90] showed that chickens fed PKM at 15% in diet did not affect ileum morphology. Meanwhile, Zulkifli et al. [30] showed that the 25% inclusion of PKM significantly increased the villi height and width. Findings by Sabour et al. [87] indicated that fibrous supplementation increased villus height of broilers chickens. 
Hence the improvements of gut morphology may be due to the high fiber content of PKC [30,87]. Further investigations are still needed on this topic. The chickens' response with dietary fibers is based on the source of fiber content, diet characteristics, and the bird's physiologic and health status [34] and duration of the dietary fiber in the diet, animal species, and age of the animal [51]. The performance and gut health of broiler chickens could be improved due to improved nutrient digestibility when SSF was used to reduce the anti-nutritional factors in plant protein sources and increase the bio-availability of nutrients and inhibit pathogenic bacteria in the gut [88]. Microflora Count Gut microflora benefits the host by helping in digestion, absorption, and storage of nutrients. Furthermore, gut microflora control and improve epithelial immune responses and function [2]. The inclusion of palm kernel expeller in chickens' diet improved nonpathogenic bacteria count in the intestines [10]. For instance, the inclusion of 15% FPKC in broiler feeding diet significantly increased the LAB counts compared to the negative control and different PKC groups' levels. Simultaneously, no significant difference was observed between the dietary treatments in Enterobacteriaceae (ENT) counts [35]. Zulkifli et al. [30] showed that broiler chickens fed with 25% PKM significantly increased counts of Lactobacillus sp. and Streptococcus sp. in both caecum and ileum. Loh et al. [91] showed that the addition of 6 and 9% fermented products in laying hens feeding diets reduced the ENT population in feces and increased the fecal LAB population. Dietary treatments have shown to influence the composition of gut microbiota. LAB is the most common bacteria used as probiotics and inhibits the intestinal ENT population [2]. Zulkifli et al. [30] reported that feeding a higher fiber diet may increase gut microflora population. Moreover, fermented products increase intestinal LAB, decrease intestinal pH, and increase lactic acid concentration. It is well established that the dominance of beneficial bacteria in the host's gut can improve nutrient intake and nutrient absorption [58]. Effects of PKC on Nutrient Digestibility Generally, poultry feed formulation is based on digestibility and absorption in the diet. It is known that most dietary fibers in PKC are in the form of mannan, which is not hydrolyzed by digestive enzymes of those non-ruminant animals. Therefore, nutrient digestibility decreases with increasing PKC in birds' feeding diet [10,41,85,92]. It was described that 10 and 15% PKC in broiler diet in both starter and finisher phases significantly decreased digestibility of DM, CP, EE, and nitrogen-free extract (NFE) compared to the negative control. However, there was no significant difference observed in DM, CP, and EE digestibility for 10 and 15% FPKC. Additionally, no significant difference was observed in ash's digestibility for 10 and 15% FPKC than the negative control. In contrast, ash's digestibility was lower for 15% PKC than the negative control group [35]. Aya et al. [64] reported that CP, ash, and NFE digestibility values in the control group (0% PKM) were significantly higher than other PKM diets with or without enzyme supplementation. Another study conducted by Fadil et al. [42] determined that the digestibility of DM, gross energy, and CP for 35% PKC included diet in ducks was more impoverished than 0 and 15% PKC diet. Dietary protein sources are known to affect broiler nutrient digestibility [93]. 
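The quantity compared across diets in the next paragraphs, apparent nutrient digestibility, is conventionally computed from nutrient intake and nutrient recovered in the excreta. The sketch below shows this standard calculation with hypothetical values; it is not a reconstruction of any specific cited trial.

```python
def apparent_digestibility(nutrient_intake_g, nutrient_excreted_g):
    """Apparent digestibility coefficient (%) = (intake - excreted) / intake * 100."""
    return 100.0 * (nutrient_intake_g - nutrient_excreted_g) / nutrient_intake_g

# Hypothetical crude protein balance for one bird over a collection period.
cp_intake = 25.0     # g crude protein consumed
cp_excreted = 7.5    # g crude protein recovered in excreta
print(f"Apparent CP digestibility: {apparent_digestibility(cp_intake, cp_excreted):.1f}%")
```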
It was claimed that CP, CF, and EE's digestibility decreased with PKC inclusion in broiler diet [80]. The decrease in nutrient digestibility of chickens fed with PKC could be attributed mainly to the lack of any mannan-degrading enzymes in poultry's digestive system [10]. Moreover, indigestible fiber molecules could increase the passage rate of ingesta and could decrease nutrient absorption. Consequently, high levels of dietary fiber led to a reduction in the digestibility of energy, starch, protein, and lipids in monogastric [51]. Furthermore, insoluble materials in diet could increase the viscosity of intestinal digesta, leading to reduced di-gestibility and absorption of nutrients [94]. Moreover, Hakim et al. [95], show that bacterial fermentation, enzymatic fermentation and thermal extrusion have the potential to improve the apparent metabolizable energy (AME) of PKC. Both bacterial and enzymatic fermentation enhanced the CP digestibility. However, Lawal et al. [33] reported that feeding treated PKC with enzymes contributed to better nutrient digestibility of broiler chickens. Effects of PKC on the Immune System Mannan oligosaccharide content of PKC plays various biological functions, particularly in minimizing gut pathogens and enhancing poultry's immune responses [96]. It was reported that mannose and manno-oligosaccharides could act as prebiotics by improving the chicken immune system, reducing the gut's harmful bacteria and increasing non-pathogenic bacteria population [10]. The result obtained by Shashidhara and Devegowda [97] showed that antibody responses against infectious bursal disease virus (IBDV) were higher in birds fed with manno-oligosaccharides (MOS) supplemented diet. Moreover, the maternal antibody titers in chicks were also significantly influenced by MOS supplementation. Soltan [66] showed that the inclusion of 18% PKC in broiler feeding diet significantly improved relative spleen weight, whereas 5% of PKC non-significantly increased relative spleen weight. Nonetheless, the different inclusion levels of PKC showed no significant improvement in the relative weights of the bursa and thymus gland compared to control. However, feed supplementation of mannanase for broiler chickens improves gut morphology and plasma immunological status [84]. Supplementing OligoPKE at 5 and 1% in broiler diet had no effect on plasma immunoglobulin M (IgM) of birds. However, feeding with OligoPKE supplemented diets had higher immunoglobulin A (IgA) concentrations than control [98]. Van der Wielen et al. [99] showed that dietary fibers were fermented in the caecum of birds by faecal microbes which produced end products such as volatile fatty acids (VFAs), an essential compound in decreasing of Salmonella spp and other pathogenic bacteria. MOS is the main component of PKC and is considered to contain prebiotic properties that can reduce pathogenic bacteria and improve birds' immunity [10,96,100]. FPKC in the broiler feeding diet may increase intestinal LAB and lactic acid concentration [58]; hence, the interactions between the LAB and the host immune system have been suggested to lead to some immunomodulatory activities [101]. Organic acids, mainly lactic acid in fermented feeds, may increase beneficial bacteria leading to higher production of short-chain fatty acids and ultimately acidify and lower the pH throughout the GIT [89]. 
Additionally, organic acids can decrease pathogenic bacteria directly, by penetrating the bacterial cell wall and releasing H+ ions that interfere with bacterial enzyme activities, or indirectly, by changing the intestinal pH. The bacterial cell must then spend more energy to maintain its internal pH, and this energy cannot be used for other metabolic processes, which may result in lower numbers of pathogenic bacteria [91,102]. Moreover, lowering the pH improves the antimicrobial activity of organic acids against pathogens [103]. Conclusions In conclusion, PKC is well accepted in ruminant diets but may not be suitable for inclusion at high levels in poultry diets owing to its high CF content. In regular farm practice, no more than 6-8% PKC is included in broiler diets without affecting growth performance, carcass yield, intestinal microflora and morphology, nutrient digestibility, and the immune system of chickens. However, this agricultural biomass can be treated with various microorganisms through SSF, allowing the inclusion level to be raised (10-15% FPKC) without affecting broiler production performance. SSF of PKC helps to decrease CF levels by degrading the complex carbohydrate fractions. Furthermore, microbial activities during SSF contribute to increased levels of CP, amino acids, and energy in FPKC. Additionally, the inclusion of biodegraded PKC in broiler diets could ultimately improve gut health, increase nutrient digestibility, and improve the overall growth performance of chickens.
Quorum-Sensing Signals from Epibiont Mediate the Induction of Novel Microviridins in the Mat-Forming Cyanobacterial Genus Nostoc ABSTRACT The regulation of the production of oligopeptides is essential in understanding their ecological role in complex microbial communities, including harmful cyanobacterial blooms. The role of chemical communication between the cyanobacterium and the microbial community harbored as epibionts within its phycosphere is at an initial stage of research, and little is understood about its specificity. Here, we present insight into the role of a bacterial epibiont in regulating the production of novel microviridins isolated from Nostoc, an ecologically important cyanobacterial genus. Microviridins are well-known elastase inhibitors with presumed antigrazing effects. Heterologous expression and identification of specific signal molecules from the epibiont suggest the role of a quorum-sensing-based interaction. Furthermore, physiological experiments show an increase in microviridin production without affecting cyanobacterial growth and photosynthetic activity. Simultaneously, oligopeptides presenting a selective inhibition pattern provide support for their specific function in response to the presence of cohabitant epibionts. Thus, the chemical interaction revealed in our study provides an example of an interspecies signaling pathway monitoring the bacterial flora around the cyanobacterial filaments and the induction of intrinsic species-specific metabolic responses. IMPORTANCE The regulation of the production of cyanopeptides beyond microcystin is essential to understand their ecological role in complex microbial communities, e.g., harmful cyanobacterial blooms. The role of chemical communication between the cyanobacterium and the epibionts within its phycosphere is at an initial stage of research, and little is understood about its specificity. The frequency of cyanopeptide occurrence also demonstrates the need to understand the contribution of cyanobacterial peptides to the overall biological impact of cyanopeptides on aquatic organisms and vertebrates, including humans. Our results shed light on the epibiont control of microviridin production via quorum-sensing mechanisms, and we posit that such mechanisms may be widespread in natural cyanobacterial bloom community regulation.
KEYWORDS cyanobacteria, cyanopeptides, homoserine lactones, microviridin, quorum sensing Despite the rise in cyanobacterial bloom occurrence and the detection of cyanopeptides (CNPs) beyond microcystin (1, 2) across the world, why and how these metabolites are regulated remain poorly understood (3). It has mostly been argued that the net peptide production rates are linearly correlated with the growth rate of the cyanobacterial cells, while a direct impact of environmental factors on peptide production is of relatively minor importance (4). However, the role of bacterial-cyanobacterial interactions in the physiological control of CNP production has never been evaluated. The majority of CNP producers frequently form biofilms, and their associated epibionts might underlie secondary metabolite production as well as biofilm development (5). Microbial communities associated with cyanobacterial biofilms are attracted and held together by cohesive exopolysaccharide envelopes that can harbor numerous coexisting microbial species belonging to diverse lineages (6,7). A recent study provided new insights into the role of a quorum-sensing (QS) signal molecule, N-octanoyl homoserine lactone (C8-HSL), in the aggregation processes and biofilm development of a cyanobacterium, Gloeothece sp. strain PCC 6909 (8). Genes responsible for the synthesis/regulation of QS signal molecules such as HSLs could not be confidently identified in cyanobacterial genomes, which led us to speculate that cyanobacteria may have evolved a different mechanism for the regulation of QS autoinducers and might rely on epibionts to produce them. Single-filament picking of Nostoc sp. strain TH1SO1 followed by de novo genome sequencing and metagenomic binning allowed the recovery of one high-quality Nostoc metagenome-assembled genome (MAG), together with a total of five medium- to high-quality epibiont bins assigned to the phyla Proteobacteria (n = 3) and Bacteroidota (n = 2) (see Table S1 in the supplemental material) as well as three low-quality bins derived from Proteobacteria (bins 5, 6, and 9) (https://figshare.com/s/817256304aa3f038bd85) (Text S1). The draft genome of strain TH1SO1 was retrieved in 247 genomic contigs amounting to 7,653,454 bp (99.56% estimated completeness and 0.3% contamination) (Table S1).
Three complete putative biosynthetic gene clusters (BGCs) for microviridin (MDN), a ribosomally synthesized and posttranslationally modified peptide (RiPP) containing five unique functional precursor peptides (MdnA), were found in the genome of strain TH1SO1 (Fig. 1A and B). Predicted microviridin products were detected in mass spectral data (Table S2) acquired using high-performance liquid chromatography-high-resolution tandem mass spectrometry (HPLC-HRMS/MS). The predicted products from gene clusters b and c were not detected, with a possible explanation for this being that these BGCs were silent under the current culture conditions. It has been postulated that factors such as buoyancy regulation or interaction with grazers or pathogens promote differentiation among cyanobacterial chemotypes (9). This prompted us to investigate the BGCs of the recovered epibiont bins. Four autoinducer synthase gene clusters were detected, and from these, one autoinducer synthase, SGBI (630 bp), belonging to the genus Sphingobium (contig 2686), was heterologously expressed in Escherichia coli BL21(DE3)/pET28a (Fig. 2A). Monitoring of the characteristic product ion at m/z 102.0550 (10) for the most widely studied autoinducers led to the detection of six HSLs possessing hydroxylated fatty acyl side chains of different lengths (3-hydroxy-C7-HSL, 3-hydroxy-C8-HSL, 3-hydroxy-C9-HSL, 3-hydroxy-C10-HSL, 3-hydroxy-C12-HSL, and 3-hydroxy-C14-HSL) (Fig. 2B and C; Fig. S1 and Text S1). The presence of HSLs within cyanobacterial blooms has been previously reported, with their concentrations reaching up to 10 mg/liter (11-13). Analogously, an increase in QS-dependent physiological regulation (luxI and luxR gene expression) within Ruegeria pomeroyi was observed when it was cocultured with the microalga Alexandrium tamarense (14), suggesting a physiological interplay between them. To assess the possible role of QS-dependent regulation in MDN production, we mimicked the bacterial load in the culture of strain TH1SO1 with two major variants of HSLs (3-hydroxy-C8-HSL and 3-hydroxy-C10-HSL) at a 2.5 mM final concentration (Text S1). A significant increase of up to 2-fold in the production of MDN-1688 (once normalized to the dry biomass) (Fig. 2D) was observed, without any difference in photosynthetic activity (Fig. S2 and Text S1) between the control and fed-batch cultures. In a similar experimental setup, MDN production was reduced or unchanged when cultures were fed with 3-hydroxy-C8-HSL/3-hydroxy-C10-HSL together with a known QS-inhibitory molecule, penicillic acid (Fig. 2E), suggesting that these results are not an artifact of an unexplained mechanism like inhibition or chemotype specificity. The responses elicited by QS signals can directly influence symbiotic relationships, which in turn determine the community structure, and can also trigger downstream changes in gene regulation to modulate specific biological functions such as biofilm formation or the production of metabolites for chemical defense (15). For example, MDN-J, isolated previously, was shown to inhibit the molting process of Daphnia, providing an advantage in the maintenance and survival of the dense community during bloom formation (16). These results encouraged us to study the role of induced MDN in biofilm formation within the phycosphere. Biofilm formation involves the employment of a QS-based regulatory network to provide sustained colonization by specific taxa. We assessed the specificity of the chemical interaction mediated by MDN by evaluating its QS-inhibitory activity using two bioreporter strains,
E. coli/pSB1075 and E. coli/pSB401, containing the lasI-lasR and luxI-luxR receptor genes, respectively. These receptors specifically control the expression of the luxCDABE operon, inducing luminescence in response to their cognate HSLs (17). Our results showed that MDNs inhibited (32 to 55%) the bioluminescence of the QS bioreporter strain E. coli/pSB1075 in a dose-dependent manner (Fig. 2F). In contrast, no inhibition of the bioluminescence of E. coli/pSB401 was observed for MDNs. The inhibition of bioluminescence against one of the reporter strains at a noninhibitory concentration suggests its specificity in inhibiting the lasR system, further implying its possible role in monitoring the selection of a specific epibiont colonizing the biofilm using the luxR-based QS system. Our results have demonstrated the potential role of HSL-mediated QS in the regulation of MDN production and show that these processes might play a key role in epibiont-cyanobacterium interactions. They further highlight the value of culture-based experimentation and the importance of developing a model organism for studying complex ecological interactions. This knowledge could in turn permit a bottom-up reconstruction of multipartite interactions, mediated by the exchange of secondary metabolites (18,19). Key to this exchange are the regulatory circuits that control the induction of secondary metabolites (19). The question of when and how a bacterium "chooses" to induce a given BGC is a fascinating and unresolved one, and future detailed studies are likely to illuminate the mechanisms, perhaps new ones, by which exogenous signals govern this process. [Fig. 2 legend, panels D to F: (D) Induction of microviridin-1688 production after feeding with 3-hydroxy-C8-HSL/3-hydroxy-C10-HSL (at a 2.5 mM final concentration). The liquid chromatography-mass spectrometry (LC-MS) peak area was normalized to the dry biomass. (E) Inhibition of microviridin-1688 production in the presence of a quorum-sensing inhibitor, penicillic acid (PA), in combination with 3-hydroxy-C8-HSL/3-hydroxy-C10-HSL (at a 2.5 mM final concentration). (F) Dose-dependent inhibition activity of microviridin-1688, microviridin-1739, microviridin-1748, and penicillic acid on the QS-dependent bioluminescence of the lasR-based bioreporter strain E. coli/pSB1075 induced by its cognate molecule 3-oxo-C10-HSL at a noninhibitory concentration. The average bioluminescence observed for the negative control is used to calculate the relative inhibition percentage. Data are expressed as standard deviations (SD) of the means (n = 3). *, P < 0.001 versus the control by analysis of variance (ANOVA) followed by a Bonferroni posttest.] Data availability. Sequence data generated during this study have been deposited at the NCBI database under BioProject accession no. PRJNA718890. Genome bins assembled in this study have been deposited at the DDBJ/ENA/GenBank database under accession no. JAGKSW000000000 to JAGKTB000000000. The version described in this paper is accession no. JAGKSW010000000. The derived data that support the findings of this paper, including the assembled metagenomic contig collection, are available in Figshare (https://figshare.com/s/817256304aa3f038bd85). SUPPLEMENTAL MATERIAL Supplemental material is available online only. TEXT S1, DOC file, 0.1 MB. FIG S1, TIF
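The relative inhibition values plotted in Fig. 2F follow directly from the normalization described in the legend above: treated bioluminescence is expressed against the mean of the negative control. A minimal R sketch of that calculation is shown below; the luminescence readings are invented for illustration and are not data from this study.

```r
# Percent inhibition of QS-dependent bioluminescence relative to the negative control.
# Readings are arbitrary luminescence units from triplicate wells (illustrative values).
neg_control <- c(10500, 9800, 10200)   # reporter + cognate HSL only
treated     <- c(6400, 6100, 6700)     # reporter + cognate HSL + microviridin

relative_inhibition <- function(treated, control) {
  100 * (1 - treated / mean(control))
}

inh <- relative_inhibition(treated, neg_control)
mean(inh); sd(inh)   # mean percent inhibition and its standard deviation
```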
A homing suppression gene drive with multiplexed gRNAs maintains high drive conversion efficiency and avoids functional resistance alleles Abstract Gene drives are engineered alleles that can bias inheritance in their favor, allowing them to spread throughout a population. They could potentially be used to modify or suppress pest populations, such as mosquitoes that spread diseases. CRISPR/Cas9 homing drives, which copy themselves by homology-directed repair in drive/wild-type heterozygotes, are a powerful form of gene drive, but they are vulnerable to resistance alleles that preserve the function of their target gene. Such resistance alleles can prevent successful population suppression. Here, we constructed a homing suppression drive in Drosophila melanogaster that utilized multiplexed gRNAs to inhibit the formation of functional resistance alleles in its female fertility target gene. The selected gRNA target sites were close together, preventing reduction in drive conversion efficiency. The construct reached a moderate equilibrium frequency in cage populations without apparent formation of resistance alleles. However, a moderate fitness cost prevented elimination of the cage population, showing the importance of using highly efficient drives in a suppression strategy, even if resistance can be addressed. Nevertheless, our results experimentally demonstrate the viability of the multiplexed gRNAs strategy in homing suppression gene drives. Introduction At the frontier of pest and disease vector control, gene drives hold the potential to influence large, wild populations. These engineered genetic elements have the ability to spread quickly by biasing inheritance in their favor, allowing for the manipulation of population sizes or traits such as disease transmission (Alphey 2014;Burt 2014;Esvelt et al. 2014;Champer et al. 2016;Hay et al. 2021). Gene drives can act through many mechanisms and include both engineered and naturally occurring forms . For engineered homing drives, the CRISPR/Cas9 system has been widely used to create gene drive constructs in many organisms, including yeast (DiCarlo et al. 2015;Basgall et al. 2018;Roggenkamp et al. 2018;Shapiro et al. 2018), flies Champer et al. 2017Champer et al. , 2018Champer et al. , 2019aChamper et al. , 2019bChamper et al. , 2020dCarrami et al. 2018;Oberhofer et al. 2018;Guichard et al. 2019;Chae et al. 2020;Kandul et al. 2020;Ló pez Del Amo et al. 2020aXu et al. 2020), mosquitoes Hammond et al. 2016Hammond et al. , 2017Hammond et al. , 2021aHammond et al. , 2021bKyrou et al. 2018;Pham et al. 2019;Adolfi et al. 2020;Carballar-Lejarazú et al. 2020;Li et al. 2020b;Simoni et al. 2020;Fuchs et al. 2021;Taxiarchi et al. 2021), and mice (Grunwald et al. 2019). The homing mechanism converts an organism heterozygous of the drive into a homozygote in the germline, and the drive is thus transmitted to offspring at a rate above 50%. These drives contain a Cas9 endonuclease, which cleaves a target sequence, and at least one guide RNA (gRNA), which directs Cas9 to the cleavage location. The resulting DNA break can be repaired by homology-directed repair (HDR) using the drive allele as a template, thereby copying the drive into the wild-type chromosome. However, a major obstacle that impedes drive efficiency is the alternative DNA repair method of end-joining, which does not use a homologous template and often alters the target sequence, preventing further recognition by the gRNA/Cas9 system. 
Such gRNA target site mutations, whether formed by drive cleavage or preexisting in the population, are therefore considered resistance alleles and can form at high rates in the germline as well as in the embryo due to cleavage activity from maternally deposited Cas9 and gRNA Hammond et al. 2016Hammond et al. , 2021aChamper et al. 2018Champer et al. , 2019aChamper et al. , 2019bChamper et al. , 2020dOberhofer et al. 2018;Adolfi et al. 2020;Ló pez Del Amo et al. 2020a). Resistance alleles that disrupt the function of the target gene by causing frameshifts or otherwise sufficiently changing the amino acid sequence tend to be more common in almost all gene drive designs, and we call them "r2" alleles. By contrast, "r1" alleles preserve gene function and are therefore particularly detrimental to gene drives. If the drive allele imposes a greater fitness cost than the resistance allele, which is usually the case for functional alleles in most drives that target native genes, then the resistance alleles will outcompete the drive and thwart its potential to modify or suppress the population (Hammond et al. 2017;Noble et al. 2017;Unckless et al. 2017;Champer et al. 2018Champer et al. , 2020dLi et al. 2020a). While modification drives aim to genetically alter a population, for instance by spreading a specific gene variant or genetic cargo, the goal of suppression drives is to ultimately reduce and potentially even eliminate a population, usually by disrupting an essential but haplosufficient gene target, leading to a negative fitness impact in drive homozygotes. For example, such a drive could cleave and be copied into a gene with a recessive knockout phenotype that affects viability or fecundity. As the drive increases in frequency in the population [via heterozygotes, which remain fertile and viable (Burt 2003)], the proportion of sterile or nonviable individuals will increase, thereby reducing population size. Even if the drive forms some nonfunctional resistance alleles, they would show the same phenotype as drive alleles, thus only somewhat slowing the spread of the gene drive and likely still allowing successful suppression (Beaghton et al. 2019). Functional resistance alleles, on the other hand, would be expected to have a drastic effect on this type of drive, quickly halting and reversing population suppression and outcompeting the drive (Deredec et al. 2011;Eckhoff et al. 2017;Hammond et al. 2017Hammond et al. , 2021aChamper et al. 2021). Therefore, the success of a suppression drive hinges on its ability to reduce the functional allele formation rate to a sufficiently low level while also avoiding gRNA targets where functional alleles are already present in the population. The formation of such functional resistance alleles was successfully prevented in one Anopheles study targeting a highly conserved sequence of a female fertility gene, since end-joining repair of such a target would be unlikely to result in a functional mutation (Kyrou et al. 2018). However, the population size in this experimental study was necessarily limited to several hundred individuals (Kyrou et al. 2018;Simoni et al. 2020;Hammond et al. 2021b), so it remains unclear if any functional resistance alleles could still form against this drive in much larger and more variable natural populations. Additional measures may thus be needed in a large-scale release to prevent the formation of functional resistance alleles. 
Furthermore, such highly conserved sequences in possible target genes for suppression drives may not be available in other species, and even high conservation of the target site alone is sometimes insufficient to prevent formation of functional resistance alleles, as shown by another recent study in Anopheles (Fuchs et al. 2021). Multiplexing gRNAs has been proposed as a mechanism that could reduce the rate of functional allele formation by recruiting Cas9 to cleave at multiple sites within the target gene. If one gRNA target is repaired by end-joining in a way that leaves the gene functional, additional sites could still be cleaved, resulting in additional opportunities for drive conversion or creation of nonfunctional mutations. Simultaneous cleavage at multiple sites and repair by end-joining could also result in large deletions, which would usually render the target gene nonfunctional (Champer et al. 2018). Several models indicate that multiplexed gRNAs would likely be effective at reducing functional resistance alleles (Marshall et al. 2017;Prowse et al. 2017;Champer et al. 2020d), and a handful of experimental studies have supported this notion (Champer et al. 2018(Champer et al. , 2020dOberhofer et al. 2018). Furthermore, multiplexing of gRNAs is capable of increasing drive conversion efficiency, as has been demonstrated in a modification homing drive with two gRNAs (Champer et al. 2018). However, one study using 4 gRNAs for a homing suppression drive reported very low drive efficiency (Oberhofer et al. 2018), which would likely prevent effective population suppression (Deredec et al. 2011;Champer et al. 2020d), particularly in larger, spatially structured populations (North et al. 2020;Champer et al. 2021). This reduction in efficiency was in part caused by repetitive elements in the drive, which resulted in removal of large portions of the drive by recombination during HDR (Oberhofer et al. 2018). However, widely spaced gRNAs also likely played an important role, since failure to cleave the outermost gRNAs would require end resection of large DNA tracts before an area of homology would be reached with the drive allele (Champer et al. 2020d). These findings suggest that an effective suppression drive could consist of multiple gRNAs targeting closely spaced sequences. The best target would likely be a female-specific haplosufficient but essential fertility gene (Champer et al. 2021). Although such a drive would impose a high fitness cost to homozygous females, it could still spread at a high rate through germline conversion in heterozygous females and males, and any nonfunctional resistance alleles would eventually be removed from the population rather than outcompeting the drive. Females with any combination of drive and nonfunctional resistance alleles would be infertile. Here, we construct such a drive in Drosophila melanogaster with 4 multiplexed gRNAs targeting yellow-g. The homing suppression drive demonstrated in these experiments showed elevated inheritance rates and successfully persisted in cage populations that averaged over 4,000 flies per generation without apparent formation of functional resistance alleles. However, the drive also imposed an unintended fitness cost of unknown type. This, together with the low drive conversion rate and high embryo resistance allele formation rate compared to Anopheles drives, ultimately prevented suppression of the experimental populations. Plasmid construction The starting plasmids TTTgRNAtRNAi (Champer et al. 
2020d), TTTgRNAt (Champer et al. 2020d), and BHDcN1 (Champer et al. 2018) were constructed previously. For plasmid cloning, reagents for restriction digest, PCR, and Gibson assembly were obtained from New England Biolabs; oligos and gBlocks from Integrated DNA Technologies; 5-a competent Escherichia coli from New England Biolabs; and the ZymoPure Midiprep kit from Zymo Research. Plasmid construction was confirmed by Sanger sequencing. A list of DNA fragments, plasmids, primers, and restriction enzymes used for cloning of each construct can be found in the Supplementary material section. We provide annotated sequences of the final drive insertion plasmid and target gene genomic region in ApE format at github.com/ MesserLab/HomingSuppressionDrive (for the free ApE reader, see biologylabs.utah.edu/jorgensen/wayned/ape). Genotypes and phenotypes Flies were anesthetized with CO 2 and screened for fluorescence using the NIGHTSEA adapter SFA-GR for DsRed and SFA-RB-GO for EGFP. Fluorescent proteins were driven by the 3ÂP3 promoter for expression and easy visualization in the white eyes of w 1118 flies. DsRed was used as a marker to indicate the presence of the split drive allele, and EGFP was used to indicate the presence of the supporting nanos-Cas9 allele (Champer et al. 2019a). Cage study For the cage study, flies were housed in 30 Â 30 Â 30 cm (Bugdorm, BD43030D) enclosures. The ancestral founder line that was heterozygous for the split drive allele and homozygous for the supporting nanos-Cas9 allele was generated by crossing successful transformants with the Cas9 line (Champer et al. 2019a) for several generations, selecting flies with brighter green fluorescence (which were likely to be Cas9 homozygotes) and eventually confirming that the line was homozygous for Cas9 via PCR. These flies (heterozygous for the split drive and homozygous for Cas9), together with nanos-Cas9 (Champer et al. 2019a) homozygotes of the same age, were separately allowed to lay eggs in 8 food bottles for a single day. Bottles were then placed in cages, and 11 days later, they were replaced in the cage with fresh food. Bottles were removed from the cages the following day, the flies were frozen for later phenotyping, and the egg-containing bottles returned to the cage. This 12-day cycle was repeated for each generation. Artificial selection small cage study For the small cage population experiment designed to detection functional resistance alleles, flies heterozygous for the split drive and homozygous for Cas9 were crossed to each other. We then crossed 3 batches of 50 drive heterozygous males to 50 drive heterozygous females, which were allowed to lay eggs for 2 days. Their progeny were collected 3 times each day at 6-h intervals after they started eclosing. Nonfluorescent flies (indicating absence of the drive allele) were discarded. Over 90% of females were clearly virgins by visual phenotype using this collection scheme. Thirteen days after the first egg laying day, the original vial was discarded, and 14 days afterward, the progeny were allowed to lay eggs for 2 days after being split randomly in two separate vials. The cycle was then repeated for each generation. With this method, wild-type alleles are removed from the population at an increased rate each generation, compensating for the drive's intermediate drive conversion rate and fitness cost. 
This increases the genetic load (suppressive power) of the drive, raising the chance that the population is eliminated instead of reaching an equilibrium frequency, as would be predicted if drive conversion is low (particularly in the presence of high fitness costs and embryo resistance allele formation). If functional resistance alleles form, however, they would usually have high viability, preventing suppression. Thus, lack of population elimination in this experiment would likely indicate that functional resistance alleles were present. Phenotype data analysis Data were pooled into two groups of crosses (drive heterozygous females with w 1118 males and drive heterozygous males with w 1118 females) in order to calculate drive inheritance, drive conversion, and embryo resistance. However, this pooling approach does not take potential batch effects (offspring were raised in different "batches"-vials with different parents) into account, which could bias rate and error estimates. To account for such batch effects, we conducted an alternate analysis as in previous studies (Champer et al. 2020c(Champer et al. , 2020d. Briefly, we fit a generalized linear mixed-effects model with a binomial distribution (by maximum likelihood, adaptive Gauss-Hermite quadrature, nAGQ ¼ 25). This model allows for variance between batches, usually resulting in slightly different parameter estimates and increased standard error estimates. Offspring from a single vial were considered as a separate batch. This analysis was performed with the R statistical computing environment (3.6.1) including packages lme4 (1.1-21, https://cran.r-project.org/web/ packages/lme4/index.html) and emmeans (1.4.2, https://cran.rproject.org/web/packages/emmeans/index.html). The script is available on Github (https://github.com/MesserLab/Binomial-Analysis). The resulting rate estimates and errors were similar to the pooled analysis (Supplementary Data Sets 1-3). Genotyping For genotyping, flies were frozen, and DNA was extracted by grinding single flies in 30 ml of 10 mM Tris-HCl pH 8, 1 mM EDTA, 25 mM NaCl, and 200 mg/ml recombinant proteinase K (ThermoScientific), followed by incubation at 37 C for 30 min and then 95 C for 5 min. The DNA was used as a template for PCR using Q5 Hot Start DNA Polymerase from New England Biolabs with the manufacturer's protocol. The region of interest containing gRNA target sites was amplified using DNA oligo primers YGLeft_S_F and YGRight_S_R. This would allow amplification of wild-type sequences and sequences with resistance alleles but would not amplify full drive alleles with a 30-s PCR extension time. After DNA fragments were isolated by gel electrophoresis, sequences were obtained by Sanger sequencing and analyzed with ApE software (http://biologylabs.utah.edu/jorgensen/ wayned/ape). Deep sequencing and analysis was performed by Azenta Life Sciences on a pool of approximately 100 newly eclosed flies after DNA purification and amplification with the same primers as described above. PCR products were treated with enzymes for 5 0 Phosphorylation and dA-tailing, and T-A ligation was performed to add adaptors, and products were ligated to beads. PCR was conducted using primers on the adaptors, and the final library was purified and qualified by beads. Qualified libraries were pairend sequenced for 150 nucleotides using Illumina Hiseq Xten/ Miseq/Novaseq/MGI2000. Data were analyzed with cutadapt (1.9.1), flash (v1.2.11), bwa (0.7.12-r1039), and Samtools (1.6). 
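For the batch-aware analysis described under "Phenotype data analysis" above, a minimal sketch of the intercept-only binomial GLMM in R/lme4 is shown below. The per-vial offspring counts are invented for illustration; the actual scripts are in the Binomial-Analysis repository cited in the text.

```r
library(lme4)

# Hypothetical offspring counts per vial (batch): DsRed (drive) vs non-DsRed
vials <- data.frame(
  vial     = factor(1:6),
  drive    = c(52, 47, 61, 38, 55, 43),
  no_drive = c(8, 7, 10, 6, 9, 7)
)

# Intercept-only binomial GLMM with a random intercept per vial,
# fit by adaptive Gauss-Hermite quadrature (nAGQ = 25) as described above
fit <- glmer(cbind(drive, no_drive) ~ 1 + (1 | vial),
             data = vials, family = binomial, nAGQ = 25)

# Back-transform the fixed intercept to an inheritance-rate estimate with a 95% CI
b  <- unname(fixef(fit)[1])
se <- sqrt(vcov(fit)[1, 1])
plogis(c(estimate = b, lower = b - 1.96 * se, upper = b + 1.96 * se))
```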
Fitness cost inference framework To quantify drive fitness costs, we modified a previously developed maximum-likelihood inference framework (Liu et al. 2019;Champer et al. 2020e). Similar to a previous study (Langmü ller et al. 2021), we extended the model to two unlinked loci (drive site and a site representing undesired mutations from off-target cleavage that impose a fitness cost). The Maximum Likelihood inference framework is implemented in R (v. 4.0.3) (R Core Team 2018) and is available on GitHub (https://github.com/MesserLab/ HomingSuppressionDrive). In this model, we make the simplifying assumption of a single genetic loci and a single gRNA at the gene drive allele site. Each female randomly selects a mate. The number of offspring generated per female can be reduced in certain genotypes if they have a fecundity fitness cost, and the chance of a male being selected as a mate can be reduced if they have a mating success fitness cost. In the germline, wild-type alleles in drive/wild-type heterozygotes can potentially be converted to either drive or resistance alleles, which are then inherited by offspring. At this stage, wildtype alleles at the off-target site are also cleaved, becoming disrupted alleles that may impose a fitness cost. The genotypes of offspring can be adjusted if they have a drive-carrying mother. If they have any wild-type alleles, then these are converted to resistance alleles at the embryo stage with a probability equal to the embryo resistance allele formation rate. This final genotype is used to determine if the offspring survives based on viability fitness. We set the germline drive conversion rate and the embryo resistance allele formation rate to the experimental inferred estimates (76.7% for drive conversion using the average of male and female rates and embryo cut rate of 52.2%, see Results sectionnote that we did not include in this average the data from females in the drier vials as described in the Results section since their progeny had lower viability, which would make assessment of drive conversion unreliable if based only on drive inheritance). Based on previous observations (Champer et al. 2017(Champer et al. , 2018(Champer et al. , 2019a(Champer et al. , 2020d, we set the germline nonfunctional formation rate to 22.2% so that nearly all wild-type alleles would either be converted to a drive allele or a resistance allele. Functional resistance alleles were not initially modeled since they are expected to be extremely rare in the 4-gRNA design (but see below). Note that in this framework, drive conversion and germline resistance allele formation take place at the same temporal stage in the germline. We set the germline cut rate at the off-target locus to 1 and did not model additional off-target cuts in embryos with drivecarrying mothers. This represents the simplest model of mostly distant off-target sites that are mostly cut in the germline when Cas9 cleavage rates are highest [actual off-target cleavage would likely be at many sites at much lower rates, with some linked to the drive alleles, which would not be possible to easily model with our maximum-likelihood method (Langmü ller et al. 2021)]. We assumed that in drive carriers at the beginning of the experiment, 50% of the off-target sites are cut because the drive carrier flies all came from male drive heterozygotes. All drive carriers were initially drive heterozygotes. 
In future generations, we used the relative rate of drive heterozygotes and homozygotes (among drive carriers with DsRed) as well as relative rates of other genotypes with a wild-type (non-DsRed) phenotype as predicted in the maximum likelihood model. In one model, we assumed the fitness costs would occur only in female drive/wild-type heterozygotes due to somatic Cas9 expression and cleavage. In the remaining scenarios, we assumed that drive fitness costs would either reduce viability or reduce female fecundity (separately from the sterility of female drive homozygotes) and male mating success. These fitness costs either stemmed directly from the presence of the drive or from cleavage at a single off-target site (representing multiple possible off-target sites that were unlinked to the drive). Our fitness parameters represent the fitness of drive homozygotes (or simply the net fitness of drive heterozygotes for the somatic Cas9 cleavage fitness model). Heterozygous individuals were assigned a fitness equal to the square root of homozygotes, assuming multiplicative fitness costs between loci and alleles. The model incorporates the sterility of females not carrying any wild-type allele of yellow-g, and thus, any inferred fitness parameters <1 represent additional fitness costs of the drive system. To estimate the rate at which resistance alleles might be functional types, we took the best model for each cage and introduced a new "relative r1 rate" parameter, representing the fraction of resistance alleles that become functional alleles instead of nonfunctional alleles. Our germline rate of 22.2% then became the total resistance allele formation rate, while the experimental measured embryo nonfunctional resistance allele formation rate remained fixed at 52.2%. This relative r1 rate parameter was then inferred as above to obtain an estimate and confidence interval. Drive construct design In this study, we aim to develop a population suppression homing drive in D. melanogaster that utilizes multiple gRNAs to improve drive efficiency and reduce the rate of functional resistance allele formation. Our drive construct targets yellow-g, which has previously been used as a female fertility homing suppression drive target in flies (Oberhofer et al. 2018) and mosquitoes (Hammond et al. 2016). Located on chromosome 3, it is highly conserved across Drosophila species (Supplementary Table 1). The yellow gene family is closely related to the major royal jelly family in Apis mellifera and has been shown to play a critical role in the membrane proteins of the embryo during egg development in Drosophila (Claycomb et al. 2004). Null mutations of yellow-g usually result in sterile females when homozygous but show no effects on males or on females when one wild-type copy is present (Fig. 1). Both integration of the drive or formation of nonfunctional resistance alleles that disrupt gene function will result in such null alleles. Conversion of wild-type alleles to drive alleles in the germline of drive heterozygotes allows the drive to increase in frequency in the population (Fig. 1). This will lead to an increasing number of sterile individuals that can eventually induce population suppression. The drive is inserted between the leftmost and rightmost gRNA target sites of yellow-g, providing the template for HDR (Fig. 2). The drive construct contains a DsRed fluorescent marker driven by the 3xP3 promoter for expression in the eyes to indicate the presence of a drive allele. 
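The design rationale above, that a resistance allele is only functional if the target gene survives end-joining repair at every cut site, can be illustrated with a simple independence approximation. The per-site probabilities below are assumptions for illustration, not parameters estimated in this study.

```r
# Rough independence approximation for functional (r1) resistance allele formation
# with multiplexed gRNAs. p is an assumed per-site probability that end-joining
# repair leaves the target gene functional; values are illustrative, not measured here.
p_functional_site <- c(0.2, 0.1, 0.05)
n_gRNAs <- 1:4

outer(p_functional_site, n_gRNAs, FUN = "^")
# Rows: per-site p; columns: number of multiplexed gRNAs.
# e.g. p = 0.1 with 4 gRNAs gives ~1e-4, versus 0.1 with a single gRNA.
# Deletions spanning several cut sites (observed in this study) push the real
# probability lower still, since large deletions usually disrupt the gene.
```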
It also contains 4 gRNAs (confirmed to be active by target sequencing) within tRNA scaffolding that target the second exon of yellow-g. By eliminating the need for multiple gRNA cassettes, the construct is more compact and avoids repetitive gRNA promoter elements. All 4 gRNA target sites are located within the second exon of yellow-g, a site chosen to allow for closely spaced target sites in a moderately conserved area (Supplementary Table 1) while still being sufficiently far from the end of the gene to ensure that frameshift mutations would always disrupt the gene's function. This design should both increase the drive's homing rate as well as the probability that when a resistance allele is formed, it is an nonfunctional allele (disrupting the target gene's function) rather than an functional allele. gRNA sites were also chosen to avoid strong offtarget sites (Supplementary Table 2). The Cas9 element, required for drive activity, is placed on chromosome 2R and provided through a separate line that carries Cas9 driven by the nanos germline promoter and EGFP with the 3xP3 promoter. In this split-Cas9 system, the drive will only be active in individuals where the Cas9 allele is also present (Champer et al. 2019a). Drive inheritance Successful transformants were used to establish fly lines with the construct, which were maintained by removing wild-type females in each generation. We first crossed the drive line to a line that was homozygous for nanos-Cas9. The offspring of this cross that had DsRed are expected to carry 1 copy each of the drive allele and the Cas9 allele, and these flies were then crossed with w 1118 flies for drive conversion assessment. The offspring of this second cross were phenotyped for red fluorescence, indicating the presence of the drive allele (Fig. 3) and in a subset of the vials, also for green fluorescence, indicating presence of the Cas9 allele. The drive was inherited at a rate of 86.4% in the progeny of female drive heterozygotes (Supplementary Data Set 1), substantially higher than the Mendelian inheritance rate of 50% (Fisher's exact test, P < 0.00001) and thus indicative of strong drive activity. For the progeny of male drive heterozygotes, the inheritance rate was 90.4% (Supplementary Data Set 2), which was also substantially higher than the Mendelian expectation (Fisher's exact test, P < 0.00001). Because we do not expect this drive to reduce the viability of any eggs (except those laid by sterile females) as confirmed for this set of crosses (though see viability section below for an additional data set), we can calculate the rate at which wild-type alleles were converted to drive alleles based on the drive inheritance rate. This drive conversion rate was 72.7% for Homing suppression drive schematic. The drive is placed inside the yellow-g gene at the gRNA target sites to allow for HDR. A DsRed fluorescence marker is driven by the 3ÂP3 promoter. Four gRNAs (multiplexed in tRNA scaffolding and driven by the U6:3 promoter) target regions of the second exon of yellow-g. This is a split drive system, so Cas9 (driven by the nanos promoter) was provided at an unlinked site in the genome for drive experiments. The inheritance rate of the Cas9 allele (which should have unbiased inheritance) was 46.7% for females and 43.8% for males. These rates were not significantly different from the Mendelian expectation of 50% (P ¼ 0.3 for females and P ¼ 0.1 for males, Binomial test), consistent with little to no fitness costs for the Cas9 cassette. 
Resistance alleles and fertility To determine the rate of resistance allele formation in the embryo due to maternally deposited Cas9 and gRNAs, DsRed female offspring were assessed for fertility. These individuals were daughters of drive heterozygous mothers (heterozygous for both drive and Cas9 alleles and crossed to w 1118 males as described above) and could thus have developed embryo resistance. This would convert these flies from fertile drive/wild-type heterozygotes into drive/nonfunctional resistance allele heterozygotes, which are expected to be sterile. Several of these daughters of drive heterozygous mothers were each crossed to two w 1118 males. Their vials were observed 1 week later and compared to control crosses with w 1118 females. Vials with no offspring were considered to have sterile females due to embryo resistance allele formation or other factors. Twelve out of 22 (54.5%) assessed females were sterile, which is significantly higher than the 5% sterility rate (in 1 out of 20 individuals tested) of female drive-carrier offspring from male drive and Cas9 heterozygotes crossed to w 1118 females (P < 0.001, Fisher's exact test). Assuming that this 5% sterility rate represents a baseline for our laboratory flies under the given experimental conditions, we can calculate that an embryo resistance allele formation rate of 52.2% will account for the increased sterility rate in the progeny of drive females. This should provide an estimate for the rate at which the paternal wild-type alleles of yellow-g were cleaved at one or more gRNA target sites in embryos with drive mothers. To analyze resistance alleles from the perspective of individual sequences, progeny from drive heterozygous females or males were Sanger sequenced around the target site (Supplementary Table 3). To examine embryo resistance, 27 drive-carrying progeny of female drive heterozygotes were analyzed. These individuals received a wild-type allele from their father, which could then undergo cleavage due to maternally deposited Cas9 and gRNAs. We found that 10 of these progeny harbored resistance alleles, but only 1 with a large deletion did not have any gRNA target sites that remained wild-type. Six progeny were fully wild-type, while another 9 were mosaic to various degrees. Only some of these mosaics were likely sterile, so these results are approximate agreement with our estimate of 52% sterile progeny, assuming resistance alleles were all nonfunctional (40% with a full resistance sequence, meaning that about 1/3 of the mosaics are likely sterile to reach 52%). Ten nondrive progeny of either female or male drive heterozygotes were also assessed, and half of them were found to be carrying resistance sequences, only one of which did not have any wild-type sites available for cleavage (Supplementary Table 3). Looking at all sequenced individuals, it is clear that the first and last gRNA cut sites experienced relatively high rates of cleavage, while the middle 2 gRNA sites experienced low rates of cleavage. A similar 4-gRNA drive with a different target site had the highest activity in the first and third gRNA (Champer et al. 2020d), and the gRNAs in this drive were placed in the same relative order in the gRNA gene (with the outermost target sites as the first two gRNAs in the gene). Thus, the observed variance in gRNA activity in both studies is likely due to differences in individual gRNA activity levels rather than position in the target site or in the gRNA gene. Fig. 3. Drive inheritance rates. 
Drive inheritance as measured by the percentage of offspring with DsRed fluorescence from crosses between drive individuals (heterozygous for the drive and for a Cas9 allele) and wild-type flies. Each dot represents offspring from one drive parent, and the size of the dots is proportional to the total number of offspring from that parent. The rate and standard error of the mean are displayed for the overall inheritance rate for all flies pooled together. An alternate analysis that accounts for potential batch effects yielded overall similar rates with slightly increased error estimates (Supplementary Data Sets 1 and 2). To better understand resistance allele formation in the germline, several female and male drive heterozygotes were crossed to each other, and approximately 100 progeny were generated in 2 vials. All of these were pooled and deep sequenced around the gRNA target sites. Each wild-type allele therefore experienced potential germline cleavage in either males or females, and then experienced further cleavage in the embryo due to maternally deposited Cas9. In contrast to our Sanger sequences of alleles that only underwent potential cleavage in the embryo, we saw substantial cut rates at all 4 gRNA target sites, including the 2 middle ones that had very low embryo cut rates (Supplementary Fig. 1). This is generally consistent with the notion that germline cut rates are quite high, with most, though not all, wild-type alleles being converted to resistance alleles or undergoing successful drive conversion by HDR. Examining individual resistance allele sequences, we found many instances of deletions between cut sites (Supplementary Fig. 2), indicative of cleavage at multiple gRNA sites before end-joining repair and subsequent loss of DNA between the cut sites. Such deletions are potentially disadvantageous in that they reduce the future ability to perform drive conversion. However, they have the benefit of further reducing the chance of functional resistance allele formation because larger deletions are more likely to disrupt the function of the target gene, even if they are in frame. Fecundity and viability Drive homozygous females (as confirmed by sequencing) were found to be sterile, as expected. One important issue with population suppression gene drives is leaky somatic expression that can convert drive/wild-type heterozygotes partially or completely into drive/resistance allele heterozygotes (or perhaps even drive/drive homozygotes) in somatic cells, which was responsible for substantially reducing the fertility of mosquitoes carrying homing suppression drives in previous studies (Hammond et al. 2017, 2021a; Kyrou et al. 2018). To determine if drive heterozygotes had altered fertility, 3-day-old female virgins that were heterozygous for the drive and Cas9 alleles were crossed with w 1118 males and then allowed to lay eggs for 3 consecutive days in different vials, with the eggs counted each day. They laid an average ± standard deviation of 33 ± 4 apparently normal eggs per day (Supplementary Data Set 1), which was significantly higher than the 20 ± 2 eggs per day laid by w 1118 females crossed to drive and Cas9 heterozygous males (Supplementary Data Set 2, P = 0.008, t-test) or the 23 ± 2 eggs per day laid by w 1118 females crossed with w 1118 males (Supplementary Data Set 3, P = 0.017, t-test). This greater number of eggs per day was likely a batch effect from perhaps slightly older or healthier drive females compared to the w 1118 females used.
Indeed, if the first day of egg laying is discounted, the new average of 25 6 3 eggs per day for drive heterozygous females is statistically indistinguishable from the other groups, regardless of whether the first day of egg-laying is retained in these groups (P > 0.1 for all comparisons, t-test). This indicates that any drive cleavage from leaky somatic expression is sufficiently low, such that it does not substantially reduce female fertility (though we cannot rule out small reductions). These results are consistent with the notion that the nanos-Cas9 allele has little to no leaky somatic expression, as shown in previous Drosophila studies (Champer et al. 2017(Champer et al. , 2018(Champer et al. , 2019a. The offspring of these crosses (females heterozygous for the drive and Cas9 crossed with w 1118 males, males heterozygous for the drive and Cas9 crossed with w 1118 females, and w 1118 fly crosses) did not exhibit any apparent developmental fitness costs. In particular, there were no differences in egg or pupae viability between these 3 groups of offspring ( Supplementary Data Sets 1-3). To increase our sample size of individual crosses to better detect small fitness effects, similar crosses were performed with drive heterozygous females and males together with w 1118 individuals, as well as control crosses with only w 1118 flies (Supplementary Data Sets 1-3). However, these crosses took place at a different laboratory with different food preparation technique. While the food was still able to support flies, it was notably drier. Perhaps because of this, the results of these crosses were notably different for female drive heterozygotes (Supplementary Data Set 1). Specifically, the drive inheritance of adult progeny was lower (79% in the new batch vs 87% in earlier crosses), and the viability of eggs was also lower (77% in the new batch vs 83% earlier). This reduction in egg viability reached statistical significance compared to a new batch of control females bearing only one Cas9 allele that were tested in identical conditions (P ¼ 0.0001, Fisher's exact test), which themselves had the same egg viability as earlier w 1118 controls with the original food (Supplementary Data Set 3). In other measures, such as total fecundity, drive inheritance, and viability among the progeny of male drive carriers, no significant differences were found compared to the first batch of crosses. The reduction in the drive inheritance rate among adults suggests that drive-carrying eggs suffered from lower viability than eggs that failed to inherit the drive, indicating a possible viability cost in dry conditions. This could be a direct cost, or it could potentially have occurred by disruption of yellow-g in germline cells by drive conversion before they provided sufficient protein for high quality eggs. The latter would potentially explain why there was no noticeable reduction in viability in earlier experiments with more moist food or in the progeny of male drive carriers. Cage study To assess the ability of our homing suppression drive to spread over the course of several generations, we conducted a cage study with a large population size averaging 4,000 individuals per generation ( Supplementary Fig. 3). Flies heterozygous for the drive and homozygous for Cas9 were introduced into 2 cages at frequencies of 41% and 8.8% and were allowed to lay eggs in food bottles inside the cage for 1 day. Flies homozygous for the Cas9 allele were similarly allowed to lay eggs in separate bottles. 
We then removed the flies and placed the bottles together in each population cage. The cages were followed for several generations, with all individuals in each discrete generation phenotyped for DsRed to measure the drive carrier frequency, which includes drive homozygotes and heterozygotes. In both cages, the drive carrier frequency increased to approximately 63% ( Fig. 4 and Supplementary Data Set 4). This possibly represents an equilibrium frequency, though the cages may have increased to a somewhat higher equilibrium frequency with additional generations [models of this drive type predict an asymptotic approach to the equilibrium frequency (Champer et al. 2020d), making it difficult to estimate from limited cage data]. Such an equilibrium result is expected for suppression drives with imperfect drive conversion efficiency. Only when this equilibrium level is high enough will the population actually be eliminated. However, given the average drive conversion rate in heterozygotes of 76.7%, the equilibrium value seen in our cages is substantially lower than the expected drive carrier equilibrium frequency of approximately 90% for a simple model of homing suppression drives with one gRNA (Deredec et al. 2011;Beaghton et al. 2019;Champer et al. 2020d) [a more advanced model with different drive conversion rates between sexes and multiple gRNAs would predict a marginally higher equilibrium frequency (Champer et al. 2020d)]. In these models, such a reduction in equilibrium frequency could be explained by a fitness cost of approximately 20% in drive homozygotes (with multiplicative fitness costs and assuming the drive would slowly increase to an equilibrium carrier frequency of 70%) (Champer et al. 2020d). While the cage experiment food started out moist as in our first set of individual crosses, it was exposed to the air throughout the experiments, potentially resulting in similar or even higher fitness costs to female drive egg viability than those seen in the individual crosses with drier food. Nevertheless, the drive frequency did not decrease systematically over the course of the experiment after the initial increase of the drive, suggesting that functional resistance alleles did not form at a high rate. This is in contrast to 4 recent studies [3 of homing suppression drives (Hammond et al. 2017(Hammond et al. , 2021aFuchs et al. 2021) and another of a modification drive with a costly target site (Pham et al. 2019)] where functional resistance alleles outcompeted the drive alleles despite higher drive efficiency and lower overall resistance allele formation rates. A substantial reduction in the population size for this homing suppression drive was not observed (Supplementary Fig. 3). This is likely due to the modest genetic load of the drive, which we use as a measure of the reduction of reproductive capacity of a population (0 ¼ no loss of reproductive capacity compared to wildtype, 1 ¼ the population can no longer reproduce). The load of our drive is closely related to the proportion of sterile females in the population, which increases with drive frequency. In our cages, drive frequency only reached a moderate level that was likely fairly close to its equilibrium level, thus imposing only a moderate genetic load. 
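To give a sense of how high the genetic load must be before a cage population of this kind actually shrinks, the back-of-the-envelope argument developed in the next paragraph (more than 20 eggs per female and roughly 80% egg-to-adult survival at low larval density) can be written out explicitly. The R sketch below uses those reported figures and is only an approximation that ignores density-dependent compensation.

```r
# Approximate genetic load required to reduce a robust cage population,
# using the fecundity and survival figures reported for these experiments.
eggs_per_female      <- 20    # conservative lower bound from the individual crosses
egg_to_adult         <- 0.80  # approximate survival at low larval density
daughters_per_female <- eggs_per_female * egg_to_adult / 2   # ~8 surviving daughters

# The population can replace itself as long as each fertile female leaves >= 1 fertile
# daughter, i.e. as long as at least 1 / daughters_per_female of females remain fertile.
min_fertile_fraction <- 1 / daughters_per_female   # ~0.125
required_load        <- 1 - min_fertile_fraction   # ~0.875, as stated in the text
required_load
```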
This moderate genetic load was lower than expected because the drive appeared to carry a fitness cost of unknown type (aside from the intended fitness effects causing sterility in females lacking a functional copy of yellow-g), which would directly reduce the drive's equilibrium frequency and genetic load (Deredec et al. 2008). Additionally, the drive would likely require a particularly high genetic load to reduce the population at all due to the robustness of the cage population (Dhole et al. 2020). Specifically, the flies likely laid an average of over 20 eggs per female (Supplementary Data Sets 1-3), and reduced larval densities usually lead to healthier adults (which could perhaps mature faster and lay even more eggs due to the greater size obtained as larvae). Indeed, with the low competition found in our vials (Supplementary Data Sets 1-3), egg-to-adult survival was approximately 80%, potentially enabling a cage population to remain high even if just 1 out of 8 females remains fertile. Reducing female fertility by this amount would require a genetic load of 0.875, whereas the predicted genetic load of our drive in the last several generations of our cages was perhaps slightly higher than 0.5 (Champer et al. 2020d). While the drive may still have caused a small population reduction in our cages, this was not detectable given the level of fluctuation in population sizes between generations, which could have been caused by variation in larval density and other random factors. To further investigate the nature of the drive's fitness costs, a new cage was established with flies both homozygous and heterozygous for the drive allele but lacking the Cas9 allele required for homing. Such flies were placed in a cage at an initial drive carrier frequency of 76%. Over 10 generations, the drive carrier frequency declined to 29% (Supplementary Fig. 4). This observation is consistent with the drive allele being a recessive female-sterile allele (as expected from its disruption of yellow-g) and having no additional fitness costs beyond female homozygote sterility in the absence of a genomic source of Cas9. The previous experiments indicated low somatic expression and similar fecundity of drive flies compared to w1118 flies. Thus, all or most of the unknown fitness cost apparently requires the drive allele to be combined with Cas9.
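The expected behaviour of such a no-Cas9 control cage can be approximated by reusing the next_gen() recursion from the sketch above with conversion, germline resistance, and embryo cutting all set to zero, so that the drive allele acts as a plain recessive female-sterile allele. The 50:50 split between heterozygous and homozygous carriers at release is an assumption made purely for illustration.

```r
# No-Cas9 control: the drive allele behaves as a recessive female-sterile allele.
# Reuses next_gen() and the genotype bookkeeping from the earlier sketch.
no_cas9 <- list(conversion = 0, germline_r2 = 0, embryo_cut = 0)
freq <- c(WW = 0.24, DW = 0.38, DR = 0, DD = 0.38, RW = 0, RR = 0)  # 76% carriers, assumed split
for (gen in 1:10) {
  freq <- next_gen(freq, no_cas9)
  cat(sprintf("gen %2d  drive carrier frequency %.3f\n",
              gen, sum(freq[grepl("D", names(freq))])))
}
```

Under these assumptions the carrier frequency declines steadily over the 10 generations, the qualitative pattern reported for the control cage.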
Maximum-likelihood analysis of fitness from cage data
To computationally assess drive performance, we adapted a previously developed maximum-likelihood method and applied it to the drive carrier frequency trajectories in our cages (flies carrying 1 copy of the drive allele and 2 copies of Cas9 were introduced at initial frequencies of 8.8% in cage 1 and 41.3% in cage 2 into a population that was wild-type at the drive site and homozygous for the Cas9 allele; the cage populations were followed for several nonoverlapping generations, each lasting 12 days and including 1 day of egg laying, with all individuals from each generation phenotyped for DsRed, drive carriers having either 1 or 2 drive alleles, and all drive carriers in the initial generation being drive/wild-type heterozygotes). We used a simplified model that included only a single gRNA and initially neglected possible formation of functional resistance alleles, assuming that all resistance alleles were nonfunctional (Supplementary Fig. 5). Note that this simplifying assumption of one gRNA for a drive with 4 gRNAs slightly underestimates drive performance compared to a more complex model (Champer et al. 2020d) with the same parameters for drive/wild-type heterozygotes. Since drive carrier individuals in the initial generation of all 3 cages apparently had substantially lower fitness than in other generations (most likely due to differences in health in the populations of the initial generation, though assortative mating could also partially explain the observed effect), likelihood values for the transition from the initial generation were excluded from the analysis. We reason that the first cage is more reliable for parameter estimation due to the greater number of generations and lower starting frequency, allowing more generations in which the drive can increase toward its possible equilibrium frequency. This equilibrium is predicted by models of homing suppression drives that match the design of our drive (Deredec et al. 2011; Beaghton et al. 2019; Champer et al. 2020d). In this cage, a model of viability-based fitness costs had the best fit to the data based on the Akaike information criterion corrected for small sample size (Liu et al. 2019), with drive homozygotes having a viability of 80% (95% confidence interval: 72-88%) compared to wild-type individuals (Supplementary Table 4a). We did not observe this reduced viability in our assays based on individual crosses (which only showed a modest reduction in viability in offspring of females with the drive when food was dry), but these had limited power to detect such a reduction in drive heterozygotes. More importantly, individually assayed flies probably did not experience the same intense competition that might be found in the cage populations, which also were open to the air and perhaps even drier than our second set of individual cross experiments, so modest fitness effects in these cages are quite plausible. A model that included reduction of female fecundity and male mating success matched the data nearly as well as the viability model. In the second cage, a model with fitness costs from somatic Cas9 cleavage of yellow-g in female drive/wild-type heterozygotes was the best match to the data (Supplementary Table 4b), with such females having a 57% reduction in fecundity. However, this result is not consistent with our direct measurements of fecundity for drive heterozygous females, which could likely have detected such a large fecundity reduction (Supplementary Data Sets 1-3). A model with off-target viability fitness costs due to Cas9 cleavage of distant sites was the next best match, though this model did not perform well in the first cage. Combining data from the two cages (Supplementary Table 4c), the best model remained one based on somatic Cas9 cleavage and fitness costs. However, models with direct viability and fecundity/mating costs were nearly as good a match to the data. In a control cage lacking Cas9, the drive declined as expected for an allele that caused recessive sterility in females, with no additional fitness costs (Supplementary Table 4d). This indicates that any fitness costs are likely mostly due to the drive itself together with Cas9, rather than strong haploinsufficiency of yellow-g or an effect in males. Overall, while our analysis provides strong evidence for a fitness cost compared to the neutral model (in agreement with our finding of a lower than expected equilibrium frequency), we were unable to determine the exact nature of the cost. Despite our high sample sizes in each generation, this result is within expectations based on our previous exploration of fitness cost inference in maximum-likelihood models, given the autosomal genomic loci and lack of particularly strong fitness effects in this drive system (Liu et al. 2019).
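A compact sketch of the maximum-likelihood idea is given below; it is not the published implementation available at github.com/MesserLab/Binomial-Analysis. The recursion from the earlier sketch (next_gen() and pars) is extended with a single free parameter, the viability of drive homozygotes, to predict the carrier frequency trajectory, and the binomial likelihood of the observed DsRed counts is maximized over that parameter. The observed counts in the data frame are hypothetical placeholders, not the study's data, and the multiplicative viability model is only one of the cost models compared in the analysis.

```r
# Hypothetical per-generation counts (not the study's data): total flies phenotyped
# and how many were DsRed-positive drive carriers.
obs <- data.frame(total   = c(4000, 3900, 4100, 3950, 4020),
                  carrier = c( 720, 1650, 2200, 2450, 2520))

viability <- function(geno, v_hom) {          # multiplicative costs: het = sqrt(hom)
  n_drive <- (substr(geno, 1, 1) == "D") + (substr(geno, 2, 2) == "D")
  v_hom ^ (n_drive / 2)
}

predict_carriers <- function(v_hom, start_carrier, n_gen, p) {
  freq <- c(WW = 1 - start_carrier, DW = start_carrier, DR = 0, DD = 0, RW = 0, RR = 0)
  out  <- numeric(n_gen)
  for (i in seq_len(n_gen)) {
    freq   <- next_gen(freq, p)                               # reproduction (sketch above)
    v      <- sapply(names(freq), viability, v_hom = v_hom)   # viability selection
    freq   <- freq * v / sum(freq * v)
    out[i] <- sum(freq[grepl("D", names(freq))])
  }
  out
}

neg_log_lik <- function(v_hom) {
  pred <- predict_carriers(v_hom, start_carrier = obs$carrier[1] / obs$total[1],
                           n_gen = nrow(obs) - 1, p = pars)
  -sum(dbinom(obs$carrier[-1], obs$total[-1], pred, log = TRUE))
}

fit <- optimize(neg_log_lik, interval = c(0.5, 1))
cat("ML estimate of drive homozygote viability:", round(fit$minimum, 3), "\n")
# Model comparison in the study used AICc; for this one-parameter model that amounts to
# 2 * fit$objective + 2, plus the small-sample correction, compared across cost models.
```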
Assessment of functional resistance allele formation
We did not observe qualitative evidence of functional resistance alleles in our cages, which, if present at sufficient frequency, would have resulted in a systematic decline in drive frequency toward the end of the experiments. Furthermore, our maximum-likelihood method allows us to estimate an upper bound on the rate at which functional resistance alleles may have formed in our population cages. To accomplish this, we allowed the "relative r1 rate" to vary in addition to fitness cost for the two best fitness models, assuming the functional resistance allele formation rate to be proportional to the nonfunctional resistance allele formation rate. The most likely estimate for functional resistance allele formation was 0%, so adding the new r1 parameter produced a substantially worse match by the Akaike information criterion (Supplementary Table 4e). The 2 cages together had a 95% confidence upper bound of 0.3% for the relative r1 formation rate (the fraction of total resistance alleles that were functional), which corresponds to germline and embryo functional resistance allele formation rates of 0.067% and 0.16%, respectively. Note that computational models predict even lower rates of functional resistance allele formation in 4-gRNA drives, likely below 0.01% of the nonfunctional resistance allele formation rate, and perhaps lower by several orders of magnitude if the rate of functional sequence repair at individual sites is well below 10% (Champer et al. 2020d). Because functional resistance alleles are expected to be rare, normal cage studies have limited power to detect them if they remain at low frequencies. In our population cages, the highest genetic load was about 0.5, meaning that functional resistance alleles at low frequencies would have perhaps twice the fitness of the other alleles at equilibrium, preventing them from substantially influencing drive carrier trajectories for the first few generations after they form and thus limiting the power to detect them without continuing the cage for several more generations. To assess functional resistance alleles for single-sex fertility systems with low drive conversion efficiency, we modified a protocol developed previously for detecting functional resistance alleles in cage studies (Fuchs et al. 2021) by artificially increasing the genetic load (see Methods for details). Because non-drive-carrying flies are removed in each generation in this method, the equilibrium genetic load on the populations was close to 98% according to our previous model (Champer et al. 2020d), based on the same parameters we used in the maximum-likelihood analysis of the cages. Functional resistance alleles would retain high fitness if they formed, even though those not inherited together with a drive allele are removed. Equilibrium genetic load is expected to be reached by the second generation; in the first generation, the predicted genetic load would still be 96%.
If any functional resistance alleles were present, they would quickly reach high frequencies, resulting in many flies quickly becoming drive/functional resistance allele heterozygotes or drive homozygotes, thereby allowing the population to avoid suppression despite the high genetic load. Sequencing a few of these flies would allow for characterization of the functional resistance alleles. However, while population sizes in the first generation were large in 3 replicates (Supplementary Table 5) (245, 250, and 271, of which a handful did not carry a drive allele), the population sizes in the next generation were smaller (84, 93, and 43). In the following generation, the populations were 21 and 16 in the first 2 groups and zero in the third group, and there were no offspring from the remaining 2 replicates after that. This indicates that either no functional resistance alleles formed during the study or that any that did form were quickly lost without contributing to the next generation. Modeling indicates that a relative r1 rate (the fraction of total resistance alleles that preserve the function of the target gene) of 0.1% would have resulted in formation of a functional resistance allele by the second generation in about half of the cages (Champer et al. 2020d), likely saving the population from suppression. Thus, the actual relative r1 rate is most likely below 0.1%, consistent with our maximum-likelihood results.
Discussion
This study experimentally demonstrated the utility of gRNA multiplexing as a means for improving the ability of a homing suppression drive to spread through a population without significant formation of functional resistance alleles. The drive displayed a higher drive conversion rate than most single-gRNA Drosophila drive systems (Champer et al. 2017, 2018, 2019a), as well as a previous homing suppression drive with 4 gRNAs (Oberhofer et al. 2018). However, it had a moderate embryo resistance rate that presumably reduced its rate of spread through the cage. Furthermore, our analyses suggest that the drive also carried a fitness cost of unknown origin. Because drive conversion efficiency was imperfect, these factors together reduced the genetic load of the drive on the population, ultimately preventing elimination of the large cage populations. Indeed, our cages were the first in which a suppression drive failed due to inadequate genetic load rather than functional resistance alleles, underscoring the need for suppression drives to be highly efficient. Nevertheless, this study still demonstrated an additional strategy against functional resistance allele formation in suppression drives that is complementary to the targeting of highly conserved sites, which was previously demonstrated as an effective approach (Kyrou et al. 2018). Indeed, all other drives targeting essential or highly important genes without rescue suffered from functional resistance allele formation (Hammond et al. 2017, 2021a; Pham et al. 2019; Fuchs et al. 2021), an outcome our drive avoided despite the large population size in our cages. Combined, these two strategies for reducing functional resistance alleles would likely be even more effective while still maintaining high drive conversion efficiency (Champer et al. 2020d). Since genetic load (which determines the suppressive power of a drive) is mostly determined by drive conversion efficiency and fitness costs, this represents a hurdle for any suppression strategy based on a homing drive. As the frequency of the drive increases, so does the rate of drive removal.
With 100% homing efficiency, the relative frequency of the drive allele would continue to increase as the population numbers decline, and complete suppression would occur as the drive reaches fixation. However, with a lower efficiency, wild-type alleles remain, and the antagonistically acting forces of drive conversion and drive allele removal result in an equilibrium frequency. Fitness costs from the drive would, in this case, further reduce the equilibrium drive frequency and the resulting genetic load. Homing drives in Anopheles (Hammond et al. 2016, 2021a; Kyrou et al. 2018), even with similar design and promoters, have demonstrated consistently higher drive conversion rates compared to drives in fruit flies. In mosquito suppression drives with the zpg promoter, the higher somatic fitness costs were more than compensated for by a higher drive conversion efficiency, resulting in a superior genetic load. Engineering sufficiently high drive conversion efficiency could therefore be a challenge when designing drives intended for field suppression of drosophilids such as the fruit pest Drosophila suzukii. Additionally, the reduced equilibrium frequency in flies compared to Anopheles mosquitoes represents a limitation of our study for detecting functional resistance alleles. At a lower equilibrium frequency, functional resistance alleles have a reduced fitness advantage compared to drive and wild-type alleles, reducing our power to distinguish them using our maximum-likelihood method, which analyzes drive carrier frequency trajectories. Our analysis of functional resistance alleles also somewhat depended on the fit of our model with drive efficiency (measured from individual crosses, though the actual efficiency may be slightly higher due to the multiplexed gRNA design) and fitness parameters, the latter of which are inferred by the model and may be particularly difficult to assess accurately. Nevertheless, our results showcase the utility of the maximum-likelihood analysis for predicting functional resistance based on cage phenotype frequency trajectories without additional sequencing. However, such sequencing of each cage generation would still likely have allowed for greater power in detecting functional resistance alleles. Our use of the small cage study with artificial selection under a modified protocol represents a potential way to detect functional resistance alleles more directly, with far higher power for an identical experimental effort. This method was based on a previous Anopheles study (Fuchs et al. 2021), but modified to retain high power to detect functional resistance even when drive conversion is lower and when the phenotype involves single-sex sterility instead of both-sex reduced viability. It remains unclear exactly what caused the fitness costs associated with our homing suppression drive. Though we could estimate the magnitude of such costs, there is considerable uncertainty in their exact value. There may be additional factors that we did not model due to lack of evidence in this or other studies, such as potentially reduced inheritance of cleaved chromosomes, which would tend to inflate the calculated drive conversion rate and thereby reduce apparent fitness costs relative to actual costs. The nature of the fitness costs also could not be determined based on our data. It is possible that a combination of several different types of fitness costs was at play, including fitness components that we did not include in our model.
For example, perhaps the target gene was slightly haploinsufficient, causing females with only one wild-type allele to have slightly reduced fertility compared to wild-type females. Another possibility is that yellow-g is partly required by germline cells that were undergoing drive conversion (thus eliminating their remaining wild-type allele), which could explain the reduced egg viability seen in some of our individual crosses with drier food (where reduced levels of yellow-g below that provided by a single, stable allele could weaken the egg casing, making the eggs more vulnerable to dry conditions). These issues could potentially be addressed by changing the target gene to one of many other possible female-fertility genes that does not have a germline-related maternal effect. Another possibility is off-target cleavage effects as seen previously (Langmüller et al. 2021), which would likely be exacerbated by multiplexed gRNAs. However, such an issue could be addressed relatively easily by using high-fidelity Cas9 nucleases that show little to no off-target cleavage (Kleinstiver et al. 2016; Slaymaker et al. 2016; Casini et al. 2018; Lee et al. 2019; Tan et al. 2019; Chatterjee et al. 2020; Xie et al. 2020), which have been shown to have similar drive performance (Langmüller et al. 2021). By contrast, if fitness costs are caused by the expression of the drive components themselves, they may be more difficult to address directly. In this case, increasing drive conversion efficiency, for example by using a different Cas9 promoter, may be the best route to developing successful drives, while potentially also minimizing fitness costs from direct component expression by further limiting expression only to cells and time windows where drive conversion takes place. Indeed, modeling indicates that high drive efficiency and fitness may play an even more important role in ensuring success in complex natural populations with spatial structure (Bull et al. 2019; Rode et al. 2019; Dhole et al. 2020; North et al. 2020; Champer et al. 2021). If the drive conversion rate cannot be sufficiently increased in Drosophila or other species given the set of available genetic tools, then a nonhoming TADE-type suppression drive (Champer et al. 2020a, 2020b) may still be able to provide a high genetic load. Such a drive only requires high efficiency for germline cleavage (regardless of whether it results in HDR or end-joining) rather than for the drive conversion process (which requires HDR). Though engineering such a drive targeting haplolethal genes may be challenging, working with such genes is possible at least for homing drives (Champer et al. 2020e). However, TADE drives are frequency-dependent and thus weaker than homing drives, requiring higher release sizes for success (Champer et al. 2020a, 2020b). In some cases, this feature may be desirable if the drive should be strictly confined to a target population. Another way to achieve confinement that could still involve a homing suppression drive would be to use a tethered system in which the split homing element is linked to a confined modification drive system (Dhole et al. 2019; Metzloff et al. 2021). Such a method would also allow split homing suppression drive elements, similar to the one we tested, to potentially be release candidates if their performance is sufficient.
Overall, we have demonstrated that even avoiding functional resistance alleles is often insufficient to ensure a high enough genetic load to suppress populations, underscoring the need to develop highly efficient drives. We also showed that gRNA multiplexing is a promising technique for reducing functional resistance alleles in a homing suppression drive while maintaining relatively high drive conversion efficiency. Since multiplexing gRNAs is a fairly straightforward and flexible process using tRNAs, ribozymes, or separate promoters for each gRNA, we believe this approach has the potential to be applied to a wide variety of suppression gene drive designs, providing similar benefits in many species.
Data availability
The data underlying this article are available in the article and in its online supplementary material. All supporting code is available on GitHub (https://github.com/MesserLab/HomingSuppressionDrive and https://github.com/MesserLab/Binomial-Analysis). Supplemental material is available at G3 online.
Funding
This study was supported by National Institutes of Health award R21AI130635 to JC, AGC, and PWM, award F32AI138476 to JC, and award R01GM127418 to PWM.
Genetic dissection of sorghum grain quality traits using diverse and segregating populations
Key message
Coordinated association and linkage mapping identified 25 grain quality QTLs in multiple environments, and fine mapping of the Wx locus supports the use of high-density genetic markers in linkage mapping.
Abstract
There is a wide range of end-use products made from cereal grains, and these products often demand different grain characteristics. Fortunately, cereal crop species including sorghum [Sorghum bicolor (L.) Moench] contain high phenotypic variation for traits influencing grain quality. Identifying genetic variants underlying this phenotypic variation allows plant breeders to develop genotypes with grain attributes optimized for their intended usage. Multiple sorghum mapping populations were rigorously phenotyped across two environments (SC Coastal Plain and Central TX) in 2 years for five major grain quality traits: amylose, starch, crude protein, crude fat, and gross energy. Coordinated association and linkage mapping revealed several robust QTLs that make prime targets to improve grain quality for food, feed, and fuel products. Although the amylose QTL interval spanned many megabases, the marker with greatest significance was located just 12 kb from waxy (Wx), the primary gene regulating amylose production in cereal grains. This suggests higher resolution mapping in recombinant inbred line (RIL) populations can be obtained when genotyped at a high marker density. The major QTL for crude fat content, identified in both a RIL population and the grain sorghum diversity panel, encompassed the DGAT1 locus, a critical gene involved in maize lipid biosynthesis. Another QTL on chromosome 1 was consistently mapped in both RIL populations for multiple grain quality traits including starch, crude protein, and gross energy. Collectively, these genetic regions offer excellent opportunities to manipulate grain composition and set up future studies for gene validation.
Electronic supplementary material: The online version of this article (doi:10.1007/s00122-016-2844-6) contains supplementary material, which is available to authorized users.
Introduction
Sorghum grain is used in a wide range of end-use products, including foods, animal feeds, pet foods, and packaging materials (Fang and Hanna 2000; Udachan et al. 2012; Zhu 2014). These various products can require different grain characteristics and thus can alter crop ideotype. Identifying genes influencing sorghum grain composition would help manipulate grain texture and quality to accommodate existing end-use markets and promote new product development. In addition, understanding the chemical and genetic components underlying the gross energy content of sorghum would enable breeders to increase overall feed efficiency through selective breeding and trait introgression when the grain is grown for livestock feed. The traditional selection schema implemented by plant breeders throughout history has justifiably focused primarily on yield and stress resistance (Morris and Sands 2006). Priority must first be given to developing hybrids and cultivars capable of producing for the farmer. However, a global malnutrition crisis has shifted emphasis toward improving the grain quality of staple cereal crops and ensuring food security (Rosegrant and Cline 2003), and significant investments have been made to address these concerns (Varmus et al. 2003).
Additionally, the livestock industry is continuously searching for agricultural products that accelerate growth and enhance the nutritional quality of its animals (Cowieson 2005; Kriegshauser et al. 2006; Smith et al. 2015). Animals consume approximately one-third of worldwide grain production (Pimentel et al. 1997), substantiating the need for improved grain products for this end-use. Preferred grain composition varies depending on end-use, and unlocking the network of genes regulating grain quality traits will help plant breeders manipulate macronutrient content and digestibility. The first critical component is identifying genes or gene regions useful for sorghum biofortification that do not hinder its agronomic yield or productivity. Results in wheat showed that improvement of certain grain quality parameters does not lead to a decrease in grain yield (Anderson et al. 1997). Additionally, Jampala et al. (2012) evaluated sorghum recombinant inbred lines (RILs) segregating for multiple grain quality traits and found several high-quality grain lines to be among the top yielding. Cereal crops, including sorghum, produce grains rich in carbohydrates, primarily starch. Cereals also store considerable concentrations of protein and fat in the caryopsis to support embryogenesis (Hubbard et al. 1950); these reserves are energy-rich and provide essential nutrients needed for adequate growth and development of both humans and animals. However, digestibility of these macronutrients, particularly starch and protein, can vary widely among different grain sorghums (Rooney and Pflugfelder 1986; Axtell et al. 1981; Sang et al. 2008). Therefore, gross energy, the amount of heat generated during combustion, was evaluated in this study as an estimate of sorghum digestibility because a strong positive correlation between gross energy and digestible energy was previously identified in cereals (Bhatty and Wu 1974). Additionally, specific sorghum genotypes contain high levels of polyphenols that have antioxidant properties and other potential health benefits (Rhodes et al. 2014). Sorghum was shown to have extensive variation for several grain quality traits, including the three primary macronutrients (starch, protein, and fat), across diverse germplasm (Shewayrga et al. 2012; Sukumaran et al. 2012; Rhodes et al. 2016), which gives promise to advance sorghum biofortification and breeding for specific end-use products. There have been numerous genetic studies on grain quality traits across the major cereal crops, including maize (Séne et al. 2000; Wilson et al. 2004; Cook et al. 2012), rice (He et al. 1999; Aluko et al. 2004; Li et al. 2004), sorghum (Ibrahim et al. 1985; Rami et al. 1998; Sukumaran et al. 2012; Rhodes et al. 2016), and wheat (Huang et al. 2006; McCartney et al. 2006). The starch biosynthesis pathway in maize has been well described (Séne et al. 2000; Whitt et al. 2002), and the major genes involved in maize starch biosynthesis are highly conserved in sorghum, allowing valuable comparative analyses (Table S1). It is in the final steps of this pathway where starch synthases and starch branching and debranching enzymes work collectively to determine the ratio of amylose to amylopectin, the two major components of starch. Low-amylose sorghums, better known as waxy sorghums, have been shown to have increased feed and ethanol conversion efficiency compared to normal, non-waxy genotypes (Sherrod et al. 1969; Yan et al. 2011).
Recent analysis of whole-genome resequencing data across sorghum accessions indicated that many of the starch biosynthesis genes are under selection in historic cultivars (Campbell et al. 2016). Grain storage protein and fat biosynthesis pathways are less characterized in cereals, but genome-wide association studies (GWAS) in maize have revealed multiple candidate genes for each macronutrient (Cook et al. 2012; Li et al. 2013). Generally, the majority of amino acids within sorghum grains are stored in protein bodies, called kafirins, which are similar to zeins in maize (Saito et al. 2012). Kafirins are located in the endosperm and can form a tight matrix with starch granules that reduces both protein and starch digestibility, which lowers feed efficiency (Duodu et al. 2003). Loss-of-function mutants of floury-2 and opaque-2, major genes regulating kafirin levels and protein digestibility, were identified by Singh and Axtell (1973), and these mutants contain high lysine levels compared to normal genotypes. Although crude fat, or ether-extracted lipids, has the lowest concentration (2-4% dry matter) of the macronutrients in sorghum grain, its high calorie density makes the trait a valid target to increase sorghum nutritional value for the animal feed industry (Kriegshauser et al. 2006). Of the identified major-effect genes compiled by Mace and Jordan (2010), none were connected to fat biosynthesis. This reinforces the value of genetic dissection of this trait. In an effort to identify rare and major-effect loci through multiple mapping strategies, this research included a diverse association panel as well as two RIL populations. This multi-population approach was designed to combine the statistical resolution of a diversity panel with the statistical power (high allele frequency) of segregating RILs. Combining populations also enables detection of both additive and dominance effects. Parental lines used to develop the RILs had contrasting protein and starch digestibility, among other grain quality traits (Miller et al. 1992; Weaver et al. 1998; Harris et al. 2007; Jampala et al. 2012). Transgressive segregation for additional traits such as starch and protein content resulted in large variation within each population, although parental lines had similar trait values. In addition, the association panel contained greater variation than the biparental families for most quality traits, suggesting that favorable alleles that have yet to be utilized for grain quality improvement exist in diverse germplasm. Loci containing these favorable alleles were mapped to different locations across the genome to understand the genetic basis of macronutrient content as well as the gross energy value generated from the ratio and interactions of these macronutrients. Identifying these grain quality loci could allow breeding efforts to develop elite grain sorghums that contain enhanced properties for food, feed, and biofuel end-uses without compromising productivity.
Grain sorghum diversity panel
A total of 390 diverse grain accessions were planted in 2013 and 2014 in Florence, South Carolina. The majority (n = 332) of the 390 accessions within the grain sorghum diversity panel (GSDP) were in the original U.S. sorghum association panel (Casa et al. 2008), and the additional accessions were included because they have historical relevance, diverse origin, or a distinctive phenotype. Experimental field design parameters are fully described in Boyles et al. (2016).
Briefly, the experiment was planted 15 May 2013 and 7 May 2014 in a 2× replicated randomized complete block design with plot dimensions of two rows, 6.1 m length, and a row spacing of 0.762 m. Plant density was approximately 130,000 plants ha−1, and plots were irrigated as needed to characterize sorghum grain quality traits in favorable environments. No fungicides were applied in the GSDP to control grain mold (Fusarium spp.) and anthracnose (Colletotrichum sublineolum) populations.
Recombinant inbred line populations
Two biparental RIL populations segregating for grain quality traits were also studied to compare linkage mapping with association mapping in diverse sorghum germplasm. Both populations share a common parent, BTxARG-1, which has a white pericarp color, waxy endosperm (low amylose), and additional qualities that make it an attractive parental line for food-grade hybrid development (Miller et al. 1992). The other parents were P850029, a highly digestible protein breeding line with high lysine content (Weaver et al. 1998; Jampala et al. 2012), and BTx642, a yellow pericarp sorghum with post-flowering drought resistance (Rosenow et al. 2002; Harris et al. 2007). Both populations were phenotyped in the F4:5 generation, and DNA from tissue of F5 plants was genotyped. The populations, BTx642/BTxARG-1 and BTxARG-1/P850029, are hereafter referenced according to their unique parents, BTx642 and P850029. BTx642 contained 191 individuals, and P850029 consisted of 279 lines with quality genotyping data. Populations were planted with an Almaco cone planter in a 2× replicated randomized complete block design across two years (2014 and 2015) in Blackville, SC and College Station, TX. These experiments are hereafter referred to as SC14, SC15, TX14, and TX15. The SC experiments were planted between 4 and 15 May, depending on year and population, with replicates always planted on the same day. Individual plots in SC14 and SC15 consisted of 6.1 m single-row plots with a row spacing of 0.965 m, with the exception of the BTx642 population in SC14, which had plot lengths of 3.05 m as a result of seed limitations. Agronomic practices for SC14 and SC15 were similar to those of the GSDP except that only 60 units of lay-by N was applied prior to anthesis because a legume crop (SC14: peanut, SC15: soybean) was planted the prior year to provide an additional N source. In the SC14 experiment, the field was cultivated 44 days after planting (DAP) to reduce grass weed pressure. To control headworms, 0.36 L ha−1 of Endigo ZC (pyrethroid) in SC14 and 1.2 L ha−1 of Prevathon (chlorantraniliprole) in SC15 were applied during the grain filling stage. In SC15, two applications of Transform WG (sulfoxaflor) at 0.1 L ha−1 were administered 3 weeks apart to reduce plant stress from sugarcane aphids (Melanaphis sacchari). Additionally, a fungicide treatment (1 L ha−1 Quilt Xcel) was applied approximately 100 DAP in both SC14 and SC15 to reduce the confounding effect of biotic damage on grain quality. The TX14 and TX15 experiments were planted on 21 April and 8 April, respectively. Plots were planted using John Deere Max-Emerge II units. The TX14 experiment was planted following a soybean crop in the previous growing season, while the TX15 experiment followed cotton. The plots in the TX environment were 5.5 m long with a row spacing of 0.762 m. In 2014, each plot consisted of one row; in 2015, the experiment was grown as two-row plots. Both TX environments had supplemental irrigation available.
Due to above-average summer rainfall, irrigation was not applied during the TX14 growing season. The TX15 experiment was flood irrigated once on 10 July. Both TX environments were grown with ridge-till cultivation. Rolling cultivation was employed twice in the 2015 season to control heavy weed pressure. Plots in TX14 were fertilized with 15 units N and 52 units P prior to planting. Slightly higher pre-plant fertilizer rates of 22 units N and 66 units P were required in TX15. Approximately 80 and 60 units of lay-by N (UAN) were administered in 2014 and 2015, respectively. Pre- and post-emergent herbicide applications in both years were similar to those of the SC experiments.
Agronomic traits and grain processing
Number of days to anthesis was recorded for each plot when approximately 50% of plants reached mid-bloom. Plant height was measured from the ground to the apex of the main panicle from a representative plant in each plot. For the GSDP and RIL experiments in SC, three random panicles were harvested per plot at physiological maturity. The first and last plants in each row were not harvested to eliminate confounding results caused by border effects. For the RIL experiment in the TX location, ten representative panicles were harvested at random from each plot at post-physiological maturity (mid-August). Harvested panicles from both environments were dried to constant moisture and subsequently hand-threshed. Threshed grain in SC was run through an air aspirator (AT Ferrell Company, Inc.) and then through a wheat dehuller (Precision Machine Co., Inc.) to remove glumes and other plant debris. Grain from the TX location was cleaned using a Wintersteiger LD18 (Wintersteiger Ag). A 25 g homogenized subsample of grain was ground to 1-mm particle size with a CT 193 Cyclotec Sample Mill (FOSS North America) prior to compositional analysis. Generally, near-infrared spectroscopy (NIRS) performs better on ground versus whole grain samples in sorghum (de Alencar Figueirido et al. 2010), as pericarp thickness has been shown to affect whole-grain NIRS predictability (Guindo et al. 2016).
Grain quality phenotypes
Near-infrared spectroscopy was used to analyze grain composition traits simultaneously. Cereal grain quality traits, including the five evaluated in this study, have previously been measured with high predictability using NIRS (Kays and Barton 2002; de Alencar Figueirido et al. 2010). Ground grain was evenly placed in a 43 mL Teflon dish that gradually rotated during NIRS analysis to improve sampling accuracy. All traits were measured with a DA7250 NIR analyzer (Perten Instruments). For initial calibration of the Perten DA7250, wet chemistry was performed on a subset of 100 samples within the GSDP. The wet chemistry was performed at Dairyland Laboratories, Inc. (Arcadia, WI) and the Quality Assurance Laboratory at Murphy-Brown, LLC (Warsaw, NC). The Quality Assurance Laboratory also estimated gross energy using bomb calorimetry. Existing calibrations of amylose, crude fat, and starch content were improved with quantification of 35 (15 unique and ten blind duplicates) P850029 samples from SC15 performed at Dairyland Laboratories, Inc. The samples were chosen based on extreme values in SC15 that fell outside the existing calibration curve. Final calibration equations resulted in r² values of 0.691 for amylose (% starch), 0.893 for starch (% dry basis), 0.964 for crude protein (% dry basis), 0.554 for crude fat (% dry basis), and 0.883 for gross energy (kcal kg−1).
Statistical analysis
Trait variability, correlations, and mapping were calculated using replicate means within each year and location. The cor() and cor.test() functions in R software (R Core Development Team 2015) were used to generate Pearson pairwise correlation coefficients and determine significance. Broad-sense heritability (H²) estimates were calculated from variance components generated with the "lme4" R package as described in Boyles et al. (2016). All components were treated as random effects. For the RIL populations, replicates were nested within the location-by-year interaction, where G is genotype, L is location, R is replication, and Y is year; multiplication symbols indicate interactions between variables, while nesting is denoted by "%in%". Broad-sense heritability was then calculated across populations from these variance components, where G is genotype, L is location, R is replication, Y is year, and E is error. Pairwise linkage disequilibrium (LD) between SNPs was generated using the R package "genetics" (Warnes et al. 2012). Resulting LD values were grouped based on the physical distance between SNP pairs into 100 bp windows, and the average LD for each window was plotted to determine the genome-wide pattern of LD decay for each population. Chromosome-wise recombination fractions for the two RIL populations were produced using the "qtl" (R/qtl) package in R (Broman et al. 2003). Phenotypic variance explained (PVE) for each associated SNP from the grain quality GWAS was estimated with the Genome Association and Prediction Integrated Tool (GAPIT) package in R. In the RIL populations, the genetic variance explained by an individual quantitative trait locus (QTL) was calculated from the maximum LOD score within the QTL interval as 1 − 10^(−2·LOD/n), where LOD is the LOD score produced by the R/qtl scanone() function and n is the number of RILs included in the QTL analysis (Broman et al. 2003). To determine the PVE of each QTL, the genetic variance explained was multiplied by the overall broad-sense heritability (Broman et al. 2003): PVE = H² × (1 − 10^(−2·LOD/n)).
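As a small worked example of the two formulas above, the R snippet below converts a QTL's peak LOD score into the genetic variance explained and then scales it by broad-sense heritability to obtain PVE. The LOD score, population size, and heritability used here are illustrative values, not results from the study.

```r
# Genetic variance explained by a QTL and its phenotypic variance explained (PVE),
# following the formulas quoted above. Inputs are illustrative placeholders.
pve_from_lod <- function(lod, n_ril, H2) {
  gve <- 1 - 10^(-2 * lod / n_ril)   # genetic variance explained by the QTL
  c(genetic_variance_explained = gve, PVE = H2 * gve)
}
pve_from_lod(lod = 10, n_ril = 190, H2 = 0.75)
```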
Genotype-by-sequencing (GBS)
Genotyping for the GSDP has been thoroughly described in Morris et al. (2013) and Boyles et al. (2016). To allow for comparison to associations found in the RIL populations, the raw data reads of the GSDP were realigned to the new Sorghum bicolor v3.1 reference genome (https://phytozome.jgi.doe.gov/), and SNPs were re-called using the previously described pipelines (Morris et al. 2013; Boyles et al. 2016). A total of 268,896 SNPs that passed the minor allele frequency (MAF) cutoff of 0.05 were used in GWAS. Of the 390 accessions, a total of 368 and 378 contained quality genetic and phenotypic data for GWAS in 2013 and 2014, respectively. For the RIL populations, single-plant leaf tissue from each individual RIL and each parent (BTxARG-1, BTx642, and P850029) was harvested from 2-week-old seedlings. Plant tissue was lyophilized and sent to the Cornell University Genomic Diversity Facility for genotyping. Individual DNAs were extracted using the CTAB protocol (Mace et al. 2003) and digested with the restriction enzyme ApeKI. Digested DNA fragments of 96 individuals were ligated to a unique barcode adaptor and subsequently pooled for sequencing. Five 96-plex GBS libraries were single-end sequenced using an Illumina HiSeq 2500 to obtain 64-bp reads (excluding adaptor sequences). Reads were aligned, and SNPs were both called and imputed, with the TASSEL 5.0 GBS pipeline. Reads were aligned to the Sorghum bicolor v3.1 reference genome (https://phytozome.jgi.doe.gov/). The TASSEL plugin FSFHap (Swarts et al. 2014), specific for biparental populations, was used for imputing missing genotypes. Imputation was performed independently on each population and chromosome. The "cluster" algorithm was used to infer haplotypes, and sites were filtered when the correlation (r) with neighboring sites was <0.4 or the missing genotype frequency was >0.9 across individuals. All other parameters were maintained at their default values. Following imputation, individual sites with MAF <0.05 were removed. This resulted in 71,856 and 49,617 genome-wide SNPs for the BTx642 and P850029 populations, respectively. The full SNP data sets were used to characterize and compare genomic properties including heterozygosity and LD.
Recombination bin and genetic map construction
For genetic map construction and subsequent linkage analysis in R/qtl (Broman et al. 2003), RIL genotypes were first converted to ABH allele format, where allele "A" is from Parent A, "B" is from Parent B, and "H" is heterozygous. To reduce computational burden and accommodate maximum marker thresholds in the software, SNPs for each RIL population were placed into recombination bins using a method designed by Huang et al. (2009). A 15-SNP sliding window was used to track the Parent1:Parent2 genotype ratio, where Parent1 was the female from the original F1 cross. Recombination breakpoints were categorized as either homozygous/homozygous (hom/hom) or homozygous/heterozygous (hom/het). As fully described in Huang et al. (2009), a hom/hom breakpoint was determined when the Parent1:Parent2 ratio of a RIL(s) passed the 8:7 to 7:8 (or vice versa) threshold only once before transitioning into the alternate parent genotype. In contrast, a hom/het breakpoint was defined when the Parent1:Parent2 ratio passed the same boundary multiple times prior to transitioning into the alternate parent genotype. In cases where sites contained different RILs with a hom/hom and a hom/het breakpoint, the site was classified as hom/hom. There were 3178 hom/hom and 1423 hom/het breakpoints for a total of 4601 recombination breakpoints in BTx642. A total of 4154 (3337 hom/hom and 777 hom/het) recombination breakpoints was found in the P850029 population. Marker sites located between recombination breakpoints were combined into bins. Bin windows were treated as individual markers to construct a genetic map for each population. Genetic distances were estimated from the physical positions of each recombination breakpoint using the Kosambi mapping function (Kosambi 1943), with a maximum iteration number of 1000 and the error probability set to 1 × 10^−4. In R/qtl, the genetic maps for BTx642 and P850029 were converted to cross type "riself", an abbreviation for "RIL by selfing" (Broman et al. 2003). Because this cross type does not allow for heterozygosity, heterozygous sites across the data set were treated as missing. To take this into account, bin markers with a MAF < 0.05 as a result of both high heterozygosity and residual missing data were eliminated from the genetic map. Finally, markers with severe segregation distortion (P < 10^−20) were removed, resulting in a total of 4589 and 4149 bin markers for BTx642 and P850029, respectively.
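For reference, the Kosambi mapping function mentioned above converts an estimated recombination fraction r into a map distance; a minimal R version is shown below. The specific recombination fractions passed to it are illustrative, not values from the study.

```r
# Kosambi map distance in centimorgans from a recombination fraction r:
# d = 0.25 * ln((1 + 2r) / (1 - 2r)) Morgans, multiplied by 100 to give cM.
kosambi_cM <- function(r) 25 * log((1 + 2 * r) / (1 - 2 * r))
kosambi_cM(c(0.01, 0.10, 0.25))   # small r gives roughly 1 cM per 1% recombination
```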
Association and linkage mapping
Genome-wide association studies were performed using the GAPIT package (Lipka et al. 2012) implemented in R. A regular mixed linear model was designated by setting the group.from and group.to GAPIT parameters equal to the number of individuals in the population, so that each individual genotype was considered a group (Lipka et al. 2012). A kinship matrix was estimated using GAPIT's default VanRaden method (VanRaden 2008) and incorporated into the model as a random effect. A population structure matrix was not included because kinship adequately accounted for relatedness to control false positives (Fig. S1). Permutation tests as described in Zhang et al. (2015) were previously conducted on the diversity panel SNP data set to determine the association significance cutoff of P = 10^−5. R/qtl software (Broman et al. 2003) was used for linkage mapping. The genome-wide LOD significance threshold for BTx642 and P850029 was determined by running n = 1000 permutations of the expectation-maximization algorithm. The average LOD score threshold of 1000 permutations with α = 0.05 was LOD = 3.33 for BTx642 and LOD = 3.36 for P850029; therefore, a consistent LOD score of 3.3 was used as the significance threshold for both populations. Interval mapping using the R/qtl scanone() function was used for QTL analysis. This function allowed phenotypic covariates to be incorporated into the model, which was critical to eliminate potential confounding effects on grain quality traits such as grain yield and pericarp color. The same model parameters that were chosen for LOD score threshold determination were used for QTL mapping.
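A schematic of these R/qtl steps is sketched below; the input file name and phenotype column are placeholders rather than the study's actual files, covariates are only indicated in a comment, and the GAPIT call for the diversity panel is omitted.

```r
library(qtl)

# Load a RIL population genotyped at bin markers (file and column names are placeholders).
cross <- read.cross(format = "csv", dir = "", file = "btx642_bin_markers.csv",
                    genotypes = c("A", "B"))
cross <- convert2riself(cross)          # treat the population as "RIL by selfing"
cross <- calc.genoprob(cross, step = 1) # genotype probabilities for interval mapping

# Interval mapping with the EM algorithm; covariates such as pericarp color or yield
# components can be supplied through the addcovar/intcovar arguments, as described above.
out   <- scanone(cross, pheno.col = "starch", method = "em")

# Genome-wide LOD threshold from 1000 permutations at alpha = 0.05 (~3.3 in the study).
operm <- scanone(cross, pheno.col = "starch", method = "em", n.perm = 1000)
summary(operm, alpha = 0.05)
summary(out, perms = operm, alpha = 0.05)   # QTL peaks exceeding the threshold
```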
Variation of grain quality traits
All grain quality traits displayed normal distributions except amylose, which was bimodal. The GSDP exhibited greater phenotypic variation than the RIL populations in four of the five grain quality traits, with amylose being the lone exception. Lower amylose variation within the GSDP is not surprising given the few known waxy sorghum accessions within the panel. Starch in the GSDP had the greatest range of phenotypic variation (>30%), followed by crude protein, crude fat, and then gross energy (Table 1). Although gross energy in the GSDP contained the smallest amount of phenotypic variation, a 400 kcal kg−1 (10.2%) difference was observed between the lowest and highest accession. Between RIL populations, BTx642 had a larger range for crude fat at 4.71%, and P850029 had a larger observed range for starch (16.7%), crude protein (8.7%), and gross energy (349 kcal kg−1). All grain quality traits were influenced by transgressive segregation, especially starch and crude protein. Mean starch contents from the three parent lines were within 1% of one another; however, both RIL populations contained >15% variation. Transgressive segregants were commonly found in previously studied biparental populations for the three grain macronutrients (Murray et al. 2008) as well as other grain quality traits (Klein et al. 2001). Estimated genetic variance in the GSDP ranged from 18% (amylose) to above 56% (crude fat). Because the diversity panel was grown in one location over two years, variance due to year accounted for a large percentage of the total phenotypic variation observed (Fig. 1). The one exception was crude fat, which had <1% of phenotypic variance explained by evaluation year. Variance due to genotype in BTx642 and P850029 was less than the estimated genotypic variance in the GSDP, irrespective of trait. Across RIL populations, genotype-by-environment interaction accounted for considerably less of the total variation in crude fat and gross energy content than in starch and crude protein. In general, environmental variance alone was very low (<5%); the variance attributable to replicates is shown in Fig. 1.
GSDP
Phenotypic correlations (r) between years for each grain quality trait ranged from r = 0.42 to 0.74. The highest broad-sense heritability (H²) among the five traits evaluated was 0.82 for gross energy. Amylose had the lowest broad-sense heritability in the GSDP at 0.56. Heritability across macronutrients was similar (Table 2). Broad-sense heritability for starch (H² = 0.73) was consistent with prior results, but calculated heritability estimates for crude protein and fat were both slightly lower than previously observed (Murray et al. 2008).
RILs
Heritability of each trait in BTx642 was relatively similar, ranging from 0.67 (gross energy) to 0.77 (crude fat). Amylose content had the greatest broad-sense heritability in P850029 at 0.86. Starch and crude protein (H² = 0.62) possessed lower heritability in P850029 when compared to both BTx642 and the GSDP. Residual variation was the primary cause of lower heritability for specific grain quality traits (Fig. 1). Primary sources of residual variation include grain processing and trait prediction by NIRS.
GSDP
Because grain macronutrients were measured on a percent dry matter basis, starch, the major macronutrient, expectedly had a strong negative correlation with crude protein and fat (Table 2). Crude protein and fat were positively correlated. Starch content and gross energy also had a strong negative relationship. Meanwhile, crude protein and fat were positively correlated with gross energy. Amylose percentage had a positive correlation with gross energy, meaning low-amylose accessions tended to have lower gross energy values. Amylose had no significant correlations with any of the macronutrients in the GSDP. Three yield component traits were evaluated in the study to examine the phenotypic relationship between grain yield and quality. The three yield components were grain number per primary panicle, 1000-grain weight, and grain yield per primary panicle. Raw data on these yield phenotypes are available in Boyles et al. (2016). In general, yield components had a positive relationship with starch content and negative correlations with crude protein and fat (Table 2). This also resulted in a negative relationship between grain yield traits and gross energy. The positive grain starch-yield relationship was consistent with previous findings (Murray et al. 2008). Amylose was not correlated with yield components, except that a slight negative correlation (r = −0.15) with 1000-grain weight was detected in 2014. Yield component traits grouped into three levels (low, middle, and high) clearly delineate existing tradeoffs with crude protein and fat as well as gross energy in the GSDP (Table S2).
RILs
Trait correlations between the BTx642 and P850029 RIL populations were very similar (Tables S3, S4), and grain quality relationships were analogous to those observed in the GSDP. In both RIL populations, amylose had a positive correlation with gross energy, which was consistent with the GSDP. This relationship could be attributed to the tradeoff between amylose and crude fat, which had a strong positive correlation with gross energy. RIL relationships between grain quality and yield traits were similar to GSDP correlations, but several exceptions were identified.
In P850029, amylose was negatively correlated with grain number (r = −0.27) and yield (r = −0.26). There was no tradeoff between 1000-grain weight and amylose in P850029. Also, no yield components were significantly correlated with amylose in BTx642. In another contrast from the GSDP, there was no negative relationship between 1000-grain weight and crude protein in either RIL population. While a negative correlation (r = −0.27) between grain weight and crude fat was observed in P850029, no tradeoff was found in BTx642. In fact, there was a slight positive relationship between these traits.
Genomic characterization of RIL populations
The greater number of polymorphisms and recombination breakpoints in BTx642 (71,856 SNPs; 4601 breakpoints) than in the P850029 population (49,617 SNPs; 4154 breakpoints) is indicative of the greater genetic distance of parent BTx642 from BTxARG-1 (data not shown). Of the retained genome-wide SNP sites, there was lower residual heterozygosity in the BTx642 (5%) and P850029 (6.5%) populations than the expected 7% at the F5 generation. While the majority of progeny in both populations were largely homozygous across all markers, there were three BTx642 and ten P850029 individuals with >20% of sites heterozygous. One RIL in each population had over 60% of SNP sites carrying an allele from each parent, and these genotypes were removed from downstream analyses given the high likelihood of them being outcrosses. In the BTx642 population, 45.5% of alleles came from parent BTx642 and 49.5% of alleles from parent BTxARG-1. BTxARG-1 was also slightly overrepresented in the P850029 population (47.5 vs 46%). The GSDP had a much faster LD decay than both RIL populations (Fig. 2a), as expected. The P850029 pairwise LD average fell below r² = 0.2 after 5.7 Mb. The BTx642 population reached r² = 0.2 slightly faster, at 5.1 Mb. For comparison, average LD decay in the GSDP was r² > 0.2 only when SNPs were physically located within 100 bp of each other. The extent of LD varied both within and across chromosomes for P850029 and BTx642 (Fig. 2b, Fig. S2), which led to a variable mapping resolution that was dependent on QTL position. Using recombination bins as individual markers, the constructed genetic maps of BTx642 and P850029 had total lengths of 1574.2 and 1416.7 cM, respectively. Average intermarker distance for both populations was ≤0.5 cM for all ten sorghum chromosomes. Segregation distortion, marker deviation from the expected 1:1 Mendelian ratio, was present at various genomic regions in both RIL populations, with P850029 containing more distorted markers than BTx642 (Fig. S3). There were several regions with segregation distortion (P < 1 × 10^−10) in BTx642: one large region on chromosome 1 and other smaller windows on chromosomes 2 and 3. All chromosomes except 1, 3, 8, and 10 contained distorted regions in P850029. Two of these distorted genomic locations in P850029 were near known height loci, Dw3 and Dw1, both of which were segregating in this population. Marker segregation distortion at Dw3 has previously been identified in a different recombinant inbred population (Murray et al. 2008).
Genetic mapping consistency within populations
P values generated from GWAS results in the GSDP were far less consistent (r = 0.11 for amylose to r = 0.25 for gross energy) across years than the mapping reproducibility found in the two RIL populations. In the GSDP, starch associations were positively correlated with crude protein (r = 0.34) and gross energy (r = 0.32).
Crude protein and fat both had a strong positive relationship with gross energy content, which is consistent with QTL mapping results across RIL populations. Genetic Pearson pairwise correlations using LOD scores for each grain quality trait between years and between environments were highly reproducible. In other words, QTL mapping for each trait resulted in relatively consistent LOD scores across the genome regardless of year and environment combination. RIL populations maintained consistency between years, excluding starch in BTx642 and crude fat in P850029 in the SC environment. All grain quality traits had year-to-year genetic correlations above 0.43 in the TX environment, with the exception of crude fat in P850029 (r = 0.22). Correlations between environments in the same year ranged from 0.15 (BTx642-2015-starch) to 1 (P850029-2014-amylose).
Fig. 2 Linkage disequilibrium (LD) decay and recombination fractions of different sorghum populations. a Genome-wide average LD (r²) in the grain sorghum diversity panel (GSDP), RIL population BTx642, and RIL population P850029. Average LD shown is from the mean of all ten sorghum chromosomes. b Pairwise recombination fractions in BTx642 and P850029 highlight regional blocks of LD on chromosome 1. The full SNP data set was used to increase marker density. Chromosome-wise recombination fractions for additional chromosomes are shown in Fig. S1.
Amylose
The production of amylose in grain starch is a Mendelian inherited trait (Karper 1933) regulated by the well-characterized waxy (Wx) gene on chromosome 10, which encodes granule-bound starch synthase I (McIntyre et al. 2008). Because of the very low frequency (4 out of 390 individuals) of waxy sorghum accessions in the GSDP, no strong associations in LD with Wx were identified in the amylose GWAS, owing to an obvious lack of statistical power (Fig. 3a). There were several associations from GWAS that surpassed the empirical significance threshold, which could be false positives or possibly additional small-effect modifiers of starch composition. The Wx locus was, however, easily mapped in both RIL populations at considerably high resolution using both GAPIT and R/qtl software (Fig. 3b, c). The SNP (S10_1877459) with greatest significance between association scans for BTx642 and P850029 was physically located 12 kb from Wx (Sobic.010G022600). This QTL explained anywhere from 46 to 61% of the genetic variance for amylose, depending on population and environment. Estimated PVE was consistently 42% in P850029 but varied between 30 and 41% in BTx642 (Table S5).
Fig. 3 Association mapping of amylose across the grain sorghum diversity panel (GSDP) and the two RIL populations segregating for the waxy trait (low amylose %). a As a result of few waxy genotypes in the GSDP and thus a very low minor allele frequency (MAF), no significant associations surrounding the waxy (Wx) locus (black vertical line) are detected. Strong association peaks at the Wx locus are detected using phenotypic data from b BTx642 and c P850029. GAPIT software (Lipka et al. 2012) using the full SNP data set (blue circles) and R/qtl software (Broman et al. 2003) using recombination bin markers (red lines) both easily identified Wx at high resolution. The SNP with highest average significance between the two RIL populations was located 12 kb from Wx (Sobic.010G022600).
The SNP with highest average significance between the two RIL populations was located 12 kb from Wx (Sobic.010G022600) Starch There were only two QTLs and three SNPs in total surpassing the empirical significance cutoff (P < 10 −5 ) in the starch GWAS. Two of these SNPs at 66.6 Mb on chromosome 2 were separated by 14 bp. The other statistically significant SNP was positioned at 13.8 Mb on chromosome 7. This chromosome 7 QTL explained nearly 10% of the genetic variance for starch in the GSDP. The strong segregation of yield components in the P850029 biparental population affected association analyses for grain quality traits, including starch content. This was apparent from the significant phenotypic and genetic correlations between grain yield and quality traits (Table S3, S4). Grain number per primary panicle and 1000-grain weight were therefore included into the model as an additive and interactive covariate to account for this strong bias and reduce spurious QTLs. When including both yield components into the model, a QTL on chromosome 5 fell below LOD significance threshold except for crude fat and gross energy in P850029. The QTL for starch, crude protein, and gross energy still co-located with the grain number QTL on chromosome 1. Significant LOD scores across this genomic region were identified in both environments for these three grain quality traits. The peaks for this QTL across grain quality traits in P850029 ranged from 260 to 6.9 Mb, with the starch QTL peak located at 2.3 Mb. The same marker (S1_2294632) had the maximum LOD score for both starch and crude protein in TX15. This SNP was located within a gene encoding a golgi nucleotide sugar transporter (Sobic.001G029500), which has been shown to regulate multiple plant development processes in rice (Zhang et al. 2011). Waxy parent BTxARG-1 contained the favorable starch allele while parent P850029 possessed the allele for increased crude protein and gross energy at this multi-trait locus on chromosome 1. Besides the chromosome 1 QTL interval, no additional starch QTLs surpassed the LOD significance threshold in SC15. Another starch QTL in P850029 was mapped near Ma1 (Murphy et al. 2011), which regulates sorghum flowering time in long days (Murphy et al. 2011). This QTL on chromosome 6 was identified in SC14 and in TX15 (Table 3). In addition to yield components, pericarp color confounded QTL mapping in BTx642 and thus was introduced into the model as a covariate for starch and other grain quality traits excluding amylose. The P850029 population was not segregating for pericarp color. In BTx642, starch QTLs were identified on chromosomes 1, 2, 3, 4, 6, and 10. The two significant starch SNPs on chromosome 2 from the GSDP GWAS were located within a starch QTL that was identified in SC15. This QTL interval spanned 63.1-68.8 Mb, but the LOD peak was <1 Mb from the significant GWAS SNPs. This locus, along with an additional starch QTL on chromosome 2, co-located with QTLs for crude protein. This observed co-localization was not surprising given the positive genetic correlation between these two traits (Table S3). At both co-localized QTLs on chromosome 2, BTx642 contained the favorable allele for increased starch while the BTxARG-1 allele correlated with increased crude protein. These two QTLs explained more genetic variance for crude protein (12.3 and 20.1%) than starch (9.5 and 8.2%). A minor effect QTL on chromosome 4 was identified in TX15 that was in LD with brittle endosperm1 (Bt1), which encodes an ADP-glucose translocator. 
This enzyme plays a role in the maize starch biosynthesis pathway (Table S1) (Séne et al. 2000). Crude protein As with starch content, very few SNPs were strongly associated with crude protein variation in the GSDP. In fact, there were no statistically significant associations identified in 2013. In 2014, the protein GWAS in the GSDP identified a genetic variant on chromosome 1 near 60.4 Mb. Two other SNPs associated with crude protein were positioned at 18.8 Mb and 49.6 Mb on chromosome 2. Also on chromosome 2, there was a QTL interval identified in BTx642 from 62.2 to 69.9 Mb. This region was associated with crude protein in the SC and TX environments in both years although the QTL peak in SC14 was ~5 Mb from the peaks observed in 2014. In the SC15 experiment, the chromosome 2 QTL had a peak LOD score of 9.3 and explained 20.1% of the genetic variance. This locus explained 9.9 and 12.2% genetic variance in TX14 and TX15. The strongest QTL for BTx642 mapped in SC14 was nearby at 58.6 Mb on chromosome 2, which was near previously identified protein QTLs (Murray et al. 2008;Rhodes et al. 2016). In TX15, the BTx642 marker with the highest LOD score in the crude protein QTL analysis was on chromosome 1 within a glutamate dehydrogenase gene (Sobic.001G059100). Glutamate dehydrogenase plays an important role in N metabolism (Robinson et al. 1991) and maintenance of the C-N balance (Miflin and Habash 2002). In P850029, there were seven different QTLs for crude protein identified in the SC14 environment alone, with three of these located on chromosome 1. Additional loci were found on chromosomes 2, 3, 6, and 9. The only additional protein QTL found in the TX environment was located on chromosome 7. The crude protein QTL on chromosome 9 detected in SC both years co-located with Dw1 (Hilley et al. 2016). While protein and height in P850029 had a positive correlation in the SC environment, this protein QTL remained significant when including height into the model as a covariate (Fig. S4). The marker with maximum LOD score within this QTL was fewer than 20 kb from a putative trehalose-6-phosphate synthase transcript, which was highly expressed in sorghum seed tissues during multiple grain filling stages (Davidson et al. 2012). Trehalose-6-phosphate synthase is important in embryo maturation and regulates sugar metabolism within the caryopsis (Eastmond et al. 2002). This QTL near Dw1 and a protein QTL interval spanning from 260 to 8.6 Mb on chromosome 1 were the most robust, being significant in multiple years and/or environments (Table 3). The chromosome 1 QTL co-located with a protein QTL identified in BTx642 in TX15. Crude fat Initial QTL analysis for crude fat in BTx642 revealed that the trait was confounded by pericarp color and amylose content (Fig. 4a, b). To account for this relationship, amylose was incorporated as an additive covariate and pericarp color as an interactive covariate. This covariate model identified a strong QTL for crude fat located on chromosome 10 (Fig. 4c). The QTL was identified in the GSDP and the BTx642 biparental population. In the GSDP, four SNPs in tight linkage at 50 Mb on chromosome 10 were significantly associated with crude fat content in both 2013 and 2014. In fact, these SNPs were all ranked in the top five each year based on P values generated from the GWAS mixed linear model. 
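As a rough illustration of the covariate strategy described above (yield components, pericarp color, or amylose entering the model as additive or interactive covariates), a single-marker scan can be written as a pair of nested linear models. This is a hedged sketch, not the R/qtl or GAPIT code the authors used; `pheno`, `geno`, `covar_add`, and `covar_int` are hypothetical arrays.

```python
# Single-marker scan with an additive and an interactive covariate.
import numpy as np
import statsmodels.api as sm

def covariate_scan(pheno, geno, covar_add, covar_int):
    """Return a LOD score per marker from nested OLS models.

    pheno: (individuals,) trait values; geno: (markers x individuals) numeric codes;
    covar_add / covar_int: (individuals,) additive and interactive covariates.
    """
    base_X = sm.add_constant(np.column_stack([covar_add, covar_int]))
    base_ll = sm.OLS(pheno, base_X).fit().llf
    lods = []
    for g in geno:
        full_X = sm.add_constant(
            np.column_stack([covar_add, covar_int, g, g * covar_int]))
        full_ll = sm.OLS(pheno, full_X).fit().llf
        lods.append((full_ll - base_ll) / np.log(10))  # LOD = log10 likelihood ratio
    return np.array(lods)

# Hypothetical demo data: 50 markers scored on 180 RILs.
rng = np.random.default_rng(1)
geno = rng.integers(0, 2, size=(50, 180)).astype(float)
pheno = rng.normal(size=180)
print(covariate_scan(pheno, geno, rng.normal(size=180), rng.integers(0, 2, 180)).max())
```

Markers whose LOD stays below a significance threshold once the covariates absorb the yield or pericarp effects are the "spurious" associations the covariate models above are designed to remove.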
SNPs with peak LOD scores in BTx642 were located between 49.1 and 51.5 Mb although there was a larger confidence interval spanning across chromosome 10 that enveloped the centromere (Fig. 5). This QTL explained 28.1% of the genetic variance for 2014 crude fat content in TX and 21.7% in SC. In TX15, the chromosome 10 QTL had nearly identical significance, explaining 27.6% genetic variance. The next QTL (LOD = 6.2) of largest significance was found in the TX14 environment and had an estimated 10.6% PVE, which was located at 50.9 Mb on chromosome 6. A third crude fat QTL of similar significance was identified at 1.5 Mb on chromosome 3 in both years in SC. The P850029 RIL population contained several significant QTLs across the genome for crude fat, although none with near the effect of the chromosome 10 QTL identified in BTx642. There was a consistent QTL found across environments on chromosome 4 at 14.9 Mb (Table 3), and this locus explained 8 and 5.4% of the genetic variance for crude fat in SC14 and TX14, respectively. Other minor but significant crude fat QTLs were located on chromosomes 1, 2, 5, and 6. These QTLs, however, were environment-specific. Gross energy Multiple gross energy QTLs identified by association and interval mapping co-located with QTLs for crude protein and fat. This finding is consistent with the positive phenotypic relationship of gross energy with these two macronutrients. There were eight physically unlinked (>20 kb) SNPs significantly associated with gross energy in the GSDP. The top ranked SNP (smallest P value) associated with crude fat in 2013 and 2014 represented two of these eight SNPs, and no significant gross energy SNP was ranked below 516 in the crude fat GWAS results out of 286,896 total markers. Significant SNPs were scattered across five different chromosomes: 1, 2, 5, 6, and 10. The gross energy GSDP associations and RIL linkage analysis consistently identified the 49-51 Mb region on chromosome 10 as a major QTL. Including the crude fat phenotype as a covariate did not reduce the LOD score of the chromosome 10 QTL in BTx642 to confirm against the possibility of a confounding effect. This locus contributed significantly more to crude fat variation in the TX environment, explaining 29.1 and 24.5% of the total genetic variance in 2014 and 2015, respectively. Aside from the chromosome 10 locus, a SNP (S5_65131230) located at 65.1 Mb on chromosome 5 was strongly associated with gross energy content in 2013 (Rank: 15) and 2014 (Rank: 6) in the GSDP. This SNP fell within a gross energy QTL that was identified in both environments and years in the P850029 population. The peak of this QTL, which explained 14.6 and 11.7% of the genetic variance in SC14 and TX14, respectively, was 500 kb from a cluster of kafirin genes previously identified using comparative genomics (Xu and Messing 2008;Wu et al. 2013). In total, there were seven non-overlapping QTLs of significance in BTx642 and eight QTLs in P850029. Five of these 15 QTLs were identified in multiple experiments ( Table 3). The P850029 gross energy QTL on chromosome 5 was identified in each of the four experiments along with the chromosome 10 QTL in BTx642. In addition, the P850029 multi-trait QTL on chromosome 1 also co-located with a gross energy QTL that was significant in all experiments except SC15. The multi-trait locus included starch and crude protein but, interestingly, did not co-locate with crude fat like the QTLs on chromosomes 5 and 10. 
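Many of the variance-explained figures quoted above can be approximated directly from the reported LOD scores. Assuming the standard interval-mapping relationship for a population of n lines (the population sizes are not stated in this excerpt, so n here is an assumption):

```latex
% Approximate proportion of variance explained by a QTL with a given LOD score
% in a population of n lines (standard interval-mapping relationship).
\mathrm{PVE} \;\approx\; 1 - 10^{-\tfrac{2}{n}\,\mathrm{LOD}}
```

For illustration only: with n on the order of 190 lines, a LOD of 9.3 gives PVE ≈ 1 − 10^(−2·9.3/190) ≈ 0.20, close to the 20.1% reported for the chromosome 2 crude protein QTL.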
Grain quality QTL and allele effects on grain yield components Inspection of each significant crude protein QTL revealed the majority of alleles for increased protein appeared to lower grain yield, supporting the phenotypic tradeoff between these two traits within the GSDP ) and RIL populations. Only five of the 25 significant QTLs across both RIL populations contained a favorable allele for protein without causing an apparent decrease in grain yield. Two of these QTLs were identified on chromosome 2 in the BTx642 population, with one peak at 58.6 Mb and another at 68.3 Mb, but the QTL intervals did not overlap. The other three protein QTLs were found in P850029 on chromosomes 6, 8, and 10. Parent BTxARG-1 contained the allele for increased protein at all three loci while the BTx642 allele increased crude protein at the two QTLs on chromosome 2. The three P850029 protein QTLs each contributed small effects (<5% PVE), but the BTx642 QTL at 58.6 Mb on chromosome 2 explained 9.4% of the phenotypic variance. Additional accessions within the GSDP that contained this favorable allele were primarily milo types as well as three broomcorn sorghums. This locus co-located with a protein QTL previously mapped in another biparental population (Murray et al. 2008) as well as in diverse germplasm (Rhodes et al. 2016). An average 1000-grain weight of 21 g and grain yield per panicle of 29.3 g were both slightly lower among lines carrying the high crude fat allele (S10_50089573) than the respective grand mean of the entire GSDP ). There were several genotypes, however, containing the favorable allele with high yield component traits, including the top grain-yielding line (Standard Early Hegari). This suggests incorporating this allele into additional elite germplasm to increase crude fat and thus gross energy content will not impose an adverse effect on grain yield. Conversely, selecting for the predominant allele to lower crude fat content in the grain would potentially allow for more percent starch desired for biofuel conversion. At the gross energy locus located at 65.1 Mb on chromosome 5, there were 22 and 12 accessions in the GSDP carrying one and two copies of the favorable 'T' allele, respectively. Both 'AT' and 'TT' genotypes increased gross energy content by nearly 3%, thus displaying dominance over the major allele. Interestingly, while accessions heterozygous at this locus had a 3 g lower grain yield per primary panicle on average than homozygous 'AA' genotypes, the 12 homozygous 'TT' accessions did not possess lower grain yields. Introgression of the favorable minor allele from one of these 12 accessions into elite grain lines could result in high yielding sorghums with increased digestible energy for the feed industry. Assessment of mapping strategies Inability of GWAS to detect many consistent grain quality QTLs in the GSDP can likely be attributed to low allele frequency, genetic background effects, and lack of statistical power (Korte and Farlow 2013). Lack of consistent results from GWAS is reiterated by lower year-to-year pairwise SNP correlations when compared to LOD score correlations found in the two RIL populations. Additionally, the amylose trait epitomizes how rare alleles in diverse association panels go undetected. Previous research on sorghum (Rami et al. 1998;de Alencar Figueirido et al. 2010) and in other crops has shown that grain macronutrients are quantitative traits, regulated by multiple genes. 
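The allele-effect comparisons above (for example, mean 1000-grain weight and per-panicle yield among lines carrying the high crude fat allele at S10_50089573) amount to grouping accessions by genotype class and comparing trait means against the panel grand mean. A minimal sketch with a hypothetical table and made-up numbers:

```python
# Compare yield-component means across genotype classes at a candidate SNP.
import pandas as pd

df = pd.DataFrame({
    "genotype": ["AA", "AT", "TT", "AA", "TT", "AA"],            # calls at the SNP, hypothetical
    "thousand_grain_wt": [24.1, 21.0, 20.5, 25.3, 21.8, 23.9],    # g, hypothetical
    "grain_yield_panicle": [33.0, 29.5, 28.7, 35.1, 30.2, 32.4],  # g, hypothetical
})

effects = df.groupby("genotype")[["thousand_grain_wt", "grain_yield_panicle"]].agg(["mean", "count"])
print(effects)
# Comparing class means with the grand mean indicates whether the allele that
# raises crude fat also drags down grain weight or per-panicle yield.
```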
Small effect genetic variants influencing grain quality traits within the GSDP were unable to be elucidated in this study, especially across environments. However, there were larger effect QTLs that were consistently identified across environments in both years and, in some instances, in multiple traits. There were also grain quality QTLs found in one or both RIL populations that encompassed significant SNPs identified in GWAS (Table 3; Fig. 6). When this overlap occurred, SNPs identified in GWAS were physically located closer to the predicted candidate gene than the corresponding QTL peak although it remains speculative whether the candidate gene is in fact causative. Nevertheless, these robust genomic regions provide excellent targets for molecular breeders to manipulate grain composition in a non-GMO crop and adapt sorghum genotypes to meet the needs of diverse grain markets. Amylose content was included in this multi-population study for proof of concept. The Wx locus located on chromosome 10 was not detected with association analysis in the GSDP, but was fine-mapped in both RIL populations. 3 The allele frequency at this locus in the GSDP was 0.01 with only four waxy sorghums included in the panel (BTxARG-1, BTx615, RTx2907, and Standard White Milo). This low MAF allows Wx to go undetected in GWAS. While the average LD was quite large in the biparental populations, certainly much greater than in the GSDP, the amylose QTL peak was located just 12 kb from the Wx locus (Sobic.010G022600). This finding suggests that merging high-throughput genotyping with segregating populations can potentially narrow down QTL intervals, benefitting future gene discovery. Genetic targets and candidate genes for grain quality improvement Based on historic US accessions including BOK11, Combine Kafir-60, Dwarf Yellow Milo, and Wheatland-all of which have high starch content-development and utilization of high grain starch sorghums occurred long ago. The positive relationship between grain yield and grain starch is likely the underlying root cause of why high starch lines are so widely used in the sorghum breeding pipeline. As grain sorghum selections focused on plants with large panicles and high grain set, starch content concurrently increased over time in breeding populations. However, there were a few starch loci identified where favorable alleles were much more prominent in non-elite accessions. The minor allele at a starch QTL at 58.6 Mb on chromosome 2 that was identified in the P850029 population had a negative effect on starch content. This starch QTL colocated with crude protein. Among the accessions containing the unfavorable allele at this locus were most elite grain sorghums such as Caprock, Combine 7078, Dwarf Yellow Milo, Plainsman, and Wheatland. A second QTL, located on chromosome 10, contained a minor allele that was associated with increased starch content. There were 135 accessions in the GSDP with the favorable starch variant, with a mixture of elite and exotic sorghums carrying the allele. While most high starch genotypes in the GSDP possessed the majority of favorable alleles at identified QTLs, no one accession possessed all of the beneficial variants across the 12 significant starch loci. Each of the accessions PI34911 and PI656031 contained 11 favorable alleles. PI34911 (F.C.I. 4201) had the fourth highest starch content (72.6%) while PI656031 (CE151-262-A1) had an average starch content of 72%, which was ninth highest in the GSDP. 
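Counting favorable alleles per accession, as done above for the 12 significant starch loci, is a simple tally once genotype calls and the favorable allele at each locus are known. The loci, alleles, and calls below are illustrative placeholders rather than the actual GSDP data:

```python
# Tally favorable alleles per accession across a set of significant loci.
def favorable_allele_counts(calls, favorable):
    """calls: accession -> {locus: allele}; favorable: locus -> favorable allele."""
    return {
        acc: sum(alleles.get(locus) == allele for locus, allele in favorable.items())
        for acc, alleles in calls.items()
    }

favorable = {"S2_58600000": "A", "S10_50089573": "T"}  # illustrative loci and alleles
calls = {"PI34911": {"S2_58600000": "A", "S10_50089573": "T"},
         "Wheatland": {"S2_58600000": "G", "S10_50089573": "T"}}
print(favorable_allele_counts(calls, favorable))  # {'PI34911': 2, 'Wheatland': 1}
```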
Introgression of additional favorable starch alleles into elite breeding lines may further increase percent starch in the grain, which would be desirable for ethanol conversion (Wu et al. 2007; Wang et al. 2008) and other starch-based sorghum products. Based on allelic variation in the GSDP, gene action among the 12 starch loci was a combination of dominant and additive. Several QTLs displayed underdominance, in which heterozygotes had a lower mean starch than homozygous accessions, although this observation is subject to selection bias because heterozygous lines were underrepresented within the GSDP. While the grain quality GWAS failed to detect many significant associations, one association with crude protein was located <10 kb from a putative gene (Sobic.001G315800) encoding quinate dehydrogenase. This enzyme is responsible for one of the seven steps in the shikimate pathway, the pathway responsible for synthesis of the aromatic amino acids phenylalanine, tryptophan, and tyrosine (Weaver and Herrmann 1997). Additionally, one of the two chromosome 2 SNPs significantly associated with protein was in LD with Sobic.002G160600, which encodes an indole-3-glycerol phosphate synthase that catalyzes an important reaction in tryptophan biosynthesis (Tzin and Galili 2010). The chromosome 1 SNP located near Sobic.001G315800 had a high MAF of 0.48, while the SNP association near the putative indole-3-glycerol phosphate synthase was just above the MAF cutoff (0.06). This rare minor allele had a positive effect on crude protein content, based on results generated from GAPIT. Elite breeding line Tx430 and waxy sorghum Tx2907 both contained this favorable allele at 18.8 Mb on chromosome 2. At 15.2%, Tx2907 had the seventh highest average protein content between years in the GSDP. Accessions heterozygous at this locus (n = 32) possessed low grain protein, suggesting dominance for decreased protein.

Fig. 6 Physical map highlighting the positions of significant SNP associations and QTL intervals identified throughout the study for five grain quality traits. Genome-wide association studies using data from the grain sorghum diversity panel (GSDP) generated SNP associations, and two segregating RIL populations were studied to map QTLs. Asterisks denote SNP associations and vertical lines correspond to locations of QTL intervals. Specific locations for all QTLs are listed in Table S5, along with corresponding information.

Association and QTL mapping identified a QTL associated with crude fat content, which was consistently found in two different environments and years. This QTL located on chromosome 10 also associated strongly with gross energy content. Given the positive relationship between crude fat and gross energy, this finding is not surprising. This QTL, which explained up to 9.7% of the crude fat phenotypic variation in the GSDP and 21.5% in the BTx642 RIL population, encompassed the diacylglycerol O-acyltransferase 1 (DGAT1) maize homologue (Sobic.010G170000). The QTL peak in TX15 was positioned within the DGAT1 transcript. DGAT is responsible for the final enzymatic and rate-limiting step in the maize lipid biosynthesis pathway (Zheng et al. 2008). A single amino acid insertion within maize DGAT1 was shown to have a strong effect on lipid content and composition (Zheng et al. 2008). In sorghum, Sobic.010G170000 is predominantly expressed in the developing embryo in genotype BTx623 (Davidson et al. 2012).

Alleles at this locus were not segregating in the P850029 population; however, there were 57 genotypes within the GSDP carrying the favorable allele at this QTL. These accessions consisted of both elite and exotic lines and comprised all five major sorghum botanical races based on classifications from the USDA Germplasm Resources Information Network and Casa et al. (2008). Although digestible energy was not directly evaluated, crude fat is energy dense and gross energy has a strong correlation with digestible energy in wheat and barley (Bhatty and Wu 1974). Thus, this crude fat QTL and the gross energy locus identified on chromosome 5 are targets to potentially increase the energy value of grain sorghum. The latter QTL was located near a large cluster of kafirin-related genes at 67.6 Mb. These protein bodies can reduce protein and overall digestibility in sorghum (Oria et al. 2000;Duodu et al. 2003). This chromosome 5 QTL could be pleiotropic, given it co-located with the most significant 1000-grain weight QTL in P850029. Grain yield and quality tradeoffs Trait correlations in the GSDP between previously published grain yield components ) and grain quality traits under study suggest increasing crude protein and fat will lower grain yield (Table 2). This correlation was reiterated with phenotypic data collected in both RIL populations (Tables S3, S4). Based on trait relationships, the grain yield-protein tradeoff primarily arises from a decrease in grain number while increasing grain fat content caused a greater reduction in 1000-grain weight in the GSDP (Table S2). This relationship between crude fat and 1000-grain weight may be influenced by the size ratio of embryo and endosperm in the caryopsis (Zheng et al. 2008). The strong negative relationship that grain yield has with protein content has been observed repeatedly across cereal grains (Slafer et al. 1990;Simmonds 1995;Feil 1997). Because these two macronutrients have roughly equal (protein) or higher (fat) caloric value than starch, a tradeoff between grain yield and gross energy is observed (USDA-NAL-Food and Nutrition Information Center 2016). This tradeoff is disadvantageous to the animal feed industry, which is in search of grains that have higher feed efficiency. Feed rations including grain from high-oil maize cultivars resulted in increased weight gain in cattle, poultry, and swine (Lambert et al. 2004). Thus, identified crude protein and fat QTLs that appear to be exceptions and do not reduce yield were highlighted previously in the results. On the other hand, the positive starch-grain yield correlation makes breeding for high starch grain amenable to ethanol fermentation a feasible task. Furthermore, no tradeoff was observed between amylose and grain yield in the GSDP and BTx642 population although this evaluation of inbred lines conflicts with yield results from waxy and non-waxy sorghum hybrids (Rooney et al. 2005). On average, waxy RILs in the P850029 population actually yielded more grains per panicle than non-waxy lines in 2015. The ability to develop high starch, low amylose (waxy) genotypes would increase ethanol conversion efficiency in sorghum (Wu et al. 2007;Yan et al. 2011). The monogenic waxy trait could be easily introgressed once elite, high starch genotypes are developed. This breeding target is important given the demand for ethanol has increased sharply as a result of the incorporation of this renewable fuel into gasoline blends (Wang et al. 2008). 
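The yield-quality tradeoffs discussed above boil down to a correlation matrix between yield components and grain quality traits. A minimal sketch follows, using a small table of invented per-line means in place of the actual field data:

```python
# Pearson correlations between yield components and grain quality traits.
import pandas as pd

traits = pd.DataFrame({  # hypothetical line means, for illustration only
    "grain_number":      [1800, 2100, 1500, 2400],
    "thousand_grain_wt": [24.0, 21.5, 26.0, 20.0],
    "yield_per_panicle": [43.0, 45.0, 39.0, 48.0],
    "starch":            [71.0, 72.5, 69.5, 73.0],
    "crude_protein":     [12.5, 11.0, 13.8, 10.5],
    "crude_fat":         [3.4, 3.1, 3.6, 3.0],
    "gross_energy":      [4.02, 3.98, 4.05, 3.95],
})
corr = traits.corr(method="pearson")
print(corr.loc[["grain_number", "thousand_grain_wt", "yield_per_panicle"],
               ["starch", "crude_protein", "crude_fat", "gross_energy"]].round(2))
# A negative protein column and a positive starch column would reproduce the
# tradeoff pattern reported for the GSDP and the RIL populations.
```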
Conclusions Grain quality improvement in cereal crops continues to be an important area of research as cereals represent the largest constituent of global food supplies (Gilland 2002). The combination of association and linkage mapping across environments identified both robust genomic regions that affect grain quality in different genetic backgrounds and also environment-specific QTLs for the SC Coastal Plain and Central TX. In some instances, high-density SNP markers provided high resolution linkage mapping as shown for the amylose QTL peak located 12 kb from the waxy locus (Fig. 4b, c). Furthermore, several markers with maximum LOD scores for grain quality traits were located within transcripts of candidate genes including a glutamate dehydrogenase candidate for protein and the crude fat DGAT1 gene. However, there were also large QTL intervals identified across multiple environments with QTL peaks separated by several Mb, suggesting resolution is likely dependent on regional LD in the population. While this study corroborates previous findings (Simmonds 1995) of a tradeoff between grain quality (high crude protein, crude fat, and gross energy) and grain yield, favorable alleles for quality traits were also identified that exhibit no adverse impact on yield components. These exceptions provide a means to develop genotypes with higher-quality grains for the food and feed industries that will still be productive for the farmer. Incorporation of genetic markers within these beneficial QTLs into marker-assisted and genomic selection pipelines will be useful for grain quality improvement in sorghum and potentially additional cereal crops. Author contribution statement REB, BKP, SK, and WLR conceived the study. REB and BKP designed the experiments. BKP and WLR provided seed for the recombinant inbred study. REB, BKP, BLR, KJZ, MTM, and ZB collected field data. REB and EAC developed genetic data sets and carried out association and linkage mapping. REB wrote the manuscript. All authors read and approved the manuscript.
237291510
s2orc/train
v2
2021-08-26T01:16:10.105Z
2021-08-24T00:00:00.000Z
Robustness Evaluation of Entity Disambiguation Using Prior Probes: the Case of Entity Overshadowing Entity disambiguation (ED) is the last step of entity linking (EL), when candidate entities are reranked according to the context they appear in. All datasets for training and evaluating models for EL consist of convenience samples, such as news articles and tweets, that propagate the prior probability bias of the entity distribution towards more frequently occurring entities. It was shown that the performance of the EL systems on such datasets is overestimated since it is possible to obtain higher accuracy scores by merely learning the prior. To provide a more adequate evaluation benchmark, we introduce the ShadowLink dataset, which includes 16K short text snippets annotated with entity mentions. We evaluate and report the performance of popular EL systems on the ShadowLink benchmark. The results show a considerable difference in accuracy between more and less common entities for all of the EL systems under evaluation, demonstrating the effect of prior probability bias and entity overshadowing. Introduction The task of entity linking (EL) refers to finding named entity mentions in unstructured documents and matching them with the corresponding entries in a structured knowledge graph (Milne and Witten, 2008;Oliveira et al., 2021). This matching is usually done using the surface form of an entity, which is a text label assigned to an entity in the knowledge graph (van Hulst et al., 2020). Some mentions may have several possible matches: for example, "Michael Jordan" may refer either to a well-known scientist or the basketball player, since they share the same surface form. Such mentions are ambiguous and require an additional step of entity disambiguation (ED), which is conditioned on the context in which the mentions appear in the text, to be linked correctly. Following van Erp and Groth (2020) we refer to a set of entities that share the same surface form as an entity space. To decide which of the possible matches is the correct one, an ED algorithm typically relies on: (1) contextual similarity, which is derived from the document in which the mention appears, indicating the relatedness of the candidate entity to the document content, and (2) entity importance, which is the prior probability of encountering the candidate entity irrespective of the document content, indicating its commonness (Milne and Witten, 2008;Ferragina and Scaiella, 2012;van Hulst et al., 2020). The standard datasets currently used for training and evaluating ED models, such as AIDA-CoNLL and WikiDis-amb30 (Ferragina and Scaiella, 2012), are collected by randomly sampling from common data sources, such as news articles and tweets. Therefore, they are expected to mirror the probability distribution with which the entities occur, thereby favouring more frequent entities (head entities) (Ilievski et al., 2018). From these considerations, we conjecture that the performance of existing EL algorithms on the ED task is overestimated. We set out to explore this effect in more detail by introducing a new dataset for ED evaluation, in which the entity distribution differs from the one typically used for training ED algorithms. We perform a systematic study focusing on a particular phenomenon we refer to as entity overshadowing. 
Specifically, we define an entity e 1 as overshadowing an entity e 2 if two conditions are met: (1) e 1 and e 2 belong to the same entity space S, i.e., share the same surface form and, therefore, can be confused with each other outside of the local context; (2) e 1 is more common than e 2 in some corresponding background corpus (e.g. the Web), i.e., it has a higher prior probability P (e 1 ) > P (e 2 ). For example, e 1 = "Michael Jordan" (basketball player) overshadows e 2 = "Michael Jordan" (scientist) because P (e 1 ) > P (e 2 ) in a typical dataset sampled from the Web. We use an unambiguous text sample that contains this mention to evaluate three popular state-of-the-art EL systems, GENRE (De Cao et al., 2020), REL (van Hulst et al., 2020), and WAT (Piccinno and Ferragina, 2014), and empirically verify that the overshadowing effect that we hypothesized, indeed, takes place (see Fig. 1a). Even when more information is added to the local context, including the directly related entities that were correctly recognised by the system ("machine learning"), the ED components still fail to recognise the overshadowed entity (see Fig. 1b). The concept of overshadowed entities introduced in this paper is related to long-tail entities (Ilievski et al., 2018). However, these two concepts are distinct: a long-tail entity may be unambiguous and therefore not overshadowed, while an overshadowed entity may still be too popular to be considered a long-tail one. To systematically evaluate the phenomenon of entity overshadowing that we have identified, we introduce a new dataset, called ShadowLink. Shad-owLink contains groups of entities that belong to the same entity space. Following van Erp and Groth (2020), we use Wikipedia disambiguation pages to collect entity spaces. Disambiguation pages group entities that often share the same surface form and may be confused with each other. We then follow the links in the Wikipedia disambiguation pages to the individual (entity) Wikipedia pages to extract text snippets in which each of the ambiguous entities occur. Note that we do not extract the text from these Wikipedia pages directly, since pre-trained language models such as BERT (typically used in state-of-the-art ED systems) also use Wikipedia as a training corpus, and can learn certain biases as well. Instead, we parse external web pages that are often linked at the end of a Wikipedia page as references. This data collection approach helps us to minimise the possible overlap between the test and training corpus. Thereby, every entity in ShadowLink is annotated with a link to at least one web page in which the entity is mentioned. We then proceed to extract all text snippets in which the corresponding entity mention appears on the page. An extracted text snippet typically consists of the sentence in which the mention occurs. Next, we use ShadowLink to answer the following research questions: RQ1: How well can existing ED systems recognise overshadowed entities? RQ2: How does performance on overshadowed entities compare to long-tail entities? RQ3: Are ED predictions biased and how can we measure this bias? Our contribution is twofold: (1) a new dataset for evaluating entity disambiguation performance of EL systems specifically focused on overshadowed entities, and (2) an evaluation of current state-of-the-art algorithms on this dataset, which empirically demonstrates that we correctly identified the type of samples that remain challenging and provide an important direction for future work. 
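The two-part definition above translates directly into a check over entity spaces and prior probabilities, and the same priors give the context-free baseline that the Neutral subset is designed to expose. The sketch below is illustrative; the prior values are invented, and a real system would estimate them from a background corpus such as Wikipedia link counts.

```python
# Identify overshadowing pairs and a prior-only ED baseline.
prior = {  # hypothetical prior probabilities from some background corpus
    "Michael Jordan (basketball player)": 0.93,
    "Michael Jordan (scientist)": 0.04,
}
entity_space = {"Michael Jordan": list(prior)}  # mention -> candidate entities

def overshadowed_pairs(space, prior):
    """Return (head, overshadowed, mention) triples for every entity space."""
    pairs = []
    for mention, entities in space.items():
        ranked = sorted(entities, key=prior.get, reverse=True)
        head = ranked[0]
        pairs += [(head, e, mention) for e in ranked[1:] if prior[e] < prior[head]]
    return pairs

def prior_baseline(mention, space, prior):
    """Prior-only ED: always return the most common entity for the surface form."""
    return max(space[mention], key=prior.get)

print(overshadowed_pairs(entity_space, prior))
print(prior_baseline("Michael Jordan", entity_space, prior))
```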
The ShadowLink Dataset This section describes the ShadowLink dataset: its construction process, structure, and statistics. Dataset construction The process of dataset construction consists of 3 steps: (1) collecting entities, (2) retrieving context examples for each entity, and (3) filtering the data based on the validity requirements detailed below. Collecting entities. Similar to van Erp and Groth (2020), we use Wikipedia disambiguation pages to represent entity spaces. We retrieve a set (1) For each disambiguation page (DP), we only include candidate entity pages with names containing the title of the DP as a substring. This step is required to exclude synonyms and redirects. (2) If at least two candidate pages for the same DP match the criterion described above, then the DP and all its matching candidates are included as a new entity space. During the first stage of the data collection, 170K out of 316.5K Wikipedia disambiguation pages matched the filtering criteria described above. Filtering pages by year. To make sure that all pre-trained EL systems we evaluate in our experiments can potentially recognise all of the entities in the dataset, we also exclude pages that are more recent than the Wikipedia dumps used by these systems during training. The oldest dump used by a system in our experiments was the 2016 Wikipedia dump over which TagMe was trained, i.e we excluded all the pages that were created after 2016. Collecting context examples. To retrieve context examples for each entity, we follow the external links extracted from the references section of the corresponding Wikipedia page and parse them to extract the text snippets which contain the entity mention. Then, every target entity mention is replaced with its corresponding entity space name, yielding an ambiguous entity mention. For example, if we have entities "John Smith" and "Paul Smith" that both belong to the entity space "Smith", then the mentions of both names will be replaced with "Smith". Looking for an entity name and replacing it with the corresponding entity space name (instead of looking for the entity space name in the first place) allowed us to make sure that the text snippets refer to the correct entity. Using this method, however, significantly reduced the number of retrieved snippets, as many of the entity mentions in natural texts do not include the full titles of the entities. To extract the text snippets, we used a simple greedy algorithm that starts with the mention boundaries and tries to include more text, expanding the boundaries to the left and to the right, until it either covers one sentence on each side, or reaches the end (or beginning) of the document text. Our decision was to use relatively short spans similar to other popular ED benchmarks: WikiDisamb30 (Ferragina and Scaiella, 2012) and KORE50 . Our manual evaluation confirmed that these spans provide sufficient context for entity disambiguation. We also release the full-text of all web pages as part of our dataset, making the context of different lengths available for future experiments. Commonness score. We estimate the commonness (popularity) of an entity as the number of links pointing to the entity page from other Wikipedia pages, that is, the in-degree of the entity page in the web graph of Wikipedia hyperlinks. Intuitively, this is proportional to the probability of encountering this entity when sampling a page at random. To obtain this metric for all the entities in the dataset, we use the Backlinks MediaWiki API 1 . 
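A rough re-implementation of the greedy snippet extraction described above is shown below: it expands from the mention until one sentence of context is covered on each side or a document boundary is reached. Splitting sentences on periods is a simplification of whatever sentence boundaries the original pipeline used, and the example document is invented.

```python
# Greedy extraction of a short context snippet around an entity mention.
def extract_snippet(text, mention):
    start = text.find(mention)
    if start < 0:
        return None
    end = start + len(mention)
    # left boundary: step back over the mention's own sentence plus one more
    prev = text.rfind(".", 0, start)
    left = text.rfind(".", 0, prev) + 1 if prev != -1 else 0
    # right boundary: step forward over the mention's own sentence plus one more
    nxt = text.find(".", end)
    if nxt == -1:
        right = len(text)
    else:
        nxt2 = text.find(".", nxt + 1)
        right = nxt2 + 1 if nxt2 != -1 else len(text)
    return text[left:right].strip()

doc = ("He studied statistics. Michael Jordan joined the faculty in 1998. "
       "His lab works on ML. More text follows.")
print(extract_snippet(doc, "Michael Jordan"))
```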
Quality assurance. We conduct manual evaluation to assess the quality of the dataset and provide the upper bound performance for the ED task. The details of the setup and the results are discussed in Section 3. Dataset structure and statistics The ShadowLink dataset consists of 4 subsets: Top, Shadow, Neutral and Tail. The Top, Shadow and Neutral subsets are linked to each other through the shared entity spaces. On the other hand, the Tail subset, which contains (typically unambiguous) long-tail entities, is not connected to the other three through the same entity spaces. Nevertheless, it is collected in a similar way as the other three subsets. Top and Shadow subsets. The structure of the Top and Shadow subsets is shown in Figure 2. Every entity e belongs to an entity space S m , derived from the Wikipedia disambiguation pages, where m is an ambigous mention that may refer to any of the entities in S m . Every S m contains at least two entities: one e top and one or more e shadow entities. Every entity e ∈ S m is annotated with a link to the corresponding Wikipedia page and provided with context examples. A context example is a text snippet extracted from one of the external pages which contains the mention m , with a length of 25 words on average. Neutral subset. To quantify the strength of the prior of each ED system, we synthetically generate data points for which the context around an entity mention is not useful for disambiguating that mention. To do that we use 7 hand-crafted templates. An example of such a template is the following: "It was the scarcity that fueled our creativity. This reminded me of m today." For each entity space, we generated 7 random contexts. Tail subset. To evaluate the performance of ED systems on long-tail but typically not overshadowed entities, we collect an additional set of entities by randomly sampling Wikipedia pages that have a low commonness score (<= 56 backlinks) 2 . Context examples for these pages were collected in the same manner as described above. The resulting dataset matches the size and structure of other ShadowLink subsets, containing 904 entities. The sampling process used to collect this subset follows the existing definition of long-tail entities (Ilievski et al., 2018), and is controlled for popularity but not for ambiguity. The Tail subset serves as a control group for the experiments conducted in our study, showing that the concept of entity overshadowing differs from the previously studied long-tail entity phenomena. ShadowLink statistics. The dataset statistics across all the subsets are summarised in Table 1. Note that the Top, Shadow and Neutral subsets are grouped around the same entity spaces, while the Tail subset is constructed by sampling the same number of non-ambiguous entities. Every entity space contains at least 2 entities, with the mean number of entities per space being 2.63, median 2, and maximum 10. Figure 3 shows the distribution of commonness in the three subsets: Top, Shadow and Tail. For the experiments we used a smaller subset of ShadowLink, with only one randomly selected shadow entity per entity space and one text snippet per entity. Thus, every subset contained 904 enti- ties, with the total size of 9K text snippets. The rest of the data is left out as a training set and can be used in future experiments. 
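Generating the Neutral subset amounts to substituting each entity-space mention into context-free templates. The first template below is the one quoted above; the second is an invented stand-in for the remaining hand-crafted templates.

```python
# Build Neutral examples by filling ambiguous mentions into context-free templates.
templates = [
    "It was the scarcity that fueled our creativity. This reminded me of {m} today.",
    "I had not thought about it in years, but someone brought up {m} yesterday.",  # invented
]

def neutral_examples(entity_spaces, templates):
    return {m: [t.format(m=m) for t in templates] for m in entity_spaces}

print(neutral_examples(["Michael Jordan", "Smith"], templates)["Smith"][0])
```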
Manual Evaluation We perform manual evaluation of a random sample from ShadowLink to assess its quality, with the goal of ensuring that the extracted text snippets provide context sufficient for disambiguation. Human performance also sets the skyline for automated approaches on this dataset. In the following subsections, we describe the evaluation setup and the results of the manual evaluation. Manual evaluation setup We conduct a manual evaluation to assess the quality of the dataset and evaluate how well human annotators can disambiguate overshadowed entities. A sample of 91 randomly selected dataset entries was presented to two annotators, who examined the entries independently. For each entry, the annotators were presented with a text snippet containing an ambiguous entity mention m, and two entities, Top and Shadow, from the same entity space S m , where one of the two entities was the correct answer. The annotators were instructed to either indicate the correct entity or mark the text snippet as ambiguous, which indicates that the provided context is not sufficient for the disambiguation decision to be made. Note, however, that the commonness scores were not displayed to the annotators. Results of the manual evaluation We used Cohen's kappa coefficient to evaluate the inter-annotator agreement (Bobicev and Sokolova, 2017) on all entries reviewed by the annotators. The value of the coefficient is 0.845, indicative of strong agreement. Next, we discarded the samples labelled as ambiguous by at least one of the annotators. The resulting dataset included 77 entries out of 91, which shows that 85% of the context examples were sufficient for making ED decisions. These unambiguous entries were split into two subsets, resulting in the 37 top-entities and 40 shadowentities. We then discarded 3 randomly selected shadow-entities to achieve the same size of the two subsets, and used these subsets to evaluate the performance of manual ED for the top-and shadowentities separately. The averaged F-score of the two annotators is 0.95 on the top-entities and 0.96 on the shadow-entities. The detailed results of the evaluation are shown in Table 2. The results of manual evaluation show that (1) a majority of samples (85%) in ShadowLink are suitable for ED evaluation, i.e., automatically extracted snippets provide sufficient context for correct disambiguation; (2) human annotators can correctly disambiguate entities regardless of their commonness. Therefore, the performance of an automatic system that only depends on context is only bound by the 15% of the cases for which the context is not helpful. This bound can be further elevated if longer contexts are considered. Experiments on longer contexts are possible using the ShadowLink dataset 3 but we leave it for future work. In the next section, we report and analyse the results produced by state-of-the-art systems on Shad-owLink. Benchmark Experiments In this section, we describe the benchmark experiments designed to evaluate the baseline systems' performance on the ShadowLink dataset. For these experiments, we created a subset of the original dataset by sampling only one of the shadow entities at random to make the number of Top and Shadow equal. Note that in our task setup the model's predictions are not restricted to the top versus shadow entity binary decision. The model can predict any entity from the same or different entity space. 
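The inter-annotator agreement reported for the manual evaluation above can be computed with the off-the-shelf Cohen's kappa implementation in scikit-learn; the label sequences below are hypothetical examples, not the actual 91 annotated entries.

```python
# Cohen's kappa for two annotators labelling the same entries.
from sklearn.metrics import cohen_kappa_score

ann1 = ["top", "shadow", "ambiguous", "shadow", "top"]  # hypothetical labels
ann2 = ["top", "shadow", "shadow", "shadow", "top"]
kappa = cohen_kappa_score(ann1, ann2)
print(f"Cohen's kappa = {kappa:.3f}")  # values above ~0.8 indicate strong agreement
```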
We describe the experimental setup in Section 4.1, report the benchmarking results and analyse them in more detail in Section 4.2. Evaluation setup To answer the first two research questions (RQ1 & RQ2), we compare the performance of eight entity linking systems on the ShadowLink dataset. We used the GERBIL framework (Röder et al., 2018) for six of the baselines (AGDISTIS/MAG, AIDA, DBpedia Spotlight, FOX, TagMe 2 and WAT) 4 under the D2KB experimental setup 5 . We also performed an evaluation with the same setup using GENRE and REL, two novel state-of-the-art systems not available in GERBIL. We used microaveraged precision, recall and F-score as evaluation metrics. To answer the last research question (RQ3), we want to verify whether the baseline systems utilise context or simply rely on their priors to make the predictions. To this end, we compare the predictions made on the Top, Shadow and Neutral subsets. We used the predictions made on the Neutral subset as an indication of priors. That is, for each entity space, we generate context for the Neutral subset by using the same 7 random sentences as templates. The context was generated as neutral, i.e., it is not useful for the disambiguation task by design. Therefore, we considered the predictions for such neutral contexts to exhibit the default priors of an EL system for the given entity space. We can then compare these prediction to the predictions on the original examples from the Top and Shadow subsets. If the entity predicted for nonneutral context differs from the prediction made for the neutral context, we consider that the model updated its default prediction (prior) based on the local context. We performed this type of analysis to examine the predictions of the best-performing systems in our experiments: REL, GENRE, AIDA and WAT. Benchmark Results This section presents the results of our experiments and summarizes the answers to the research questions introduced in Section 1. RQ1: How well can existing ED systems recognise overshadowed entities? Table 3 shows the evaluation results across the subsets of ShadowLink. All systems achieve the lowest scores on the Shadow subset, with the maximum F-score of 0.35 achieved by AIDA. While REL and GENRE ourperform WAT on several existing datasets (van Hulst et al., 2020;De Cao et al., 2020), their results on ShadowLink are considerably lower than the results of WAT. The difference in the results on Top and Shadow entities indicates that EL predictions are biased towards more common entities. RQ2: How does the performance on overshadowed entities compare to long-tail entities? All systems show the highest precision on the Tail subset, i.e., they achieve much higher performance on the less ambiguous long-tail entities, compared to both top and overshadowed entities in ShadowLink. These results indicate that the main challenge in EL is the combination of ambiguity and uncommonness, while uncommon but non-ambiguous entities are relatively easy to re-solve. These findings are also consistent with Ilievski et al. (2018), who suggest that rare and ambiguous entities constitute the hardest cases for the EL task. In this study, we showed that such overshadowed entities indeed consititute a major challenge for the state-of-the-art systems and that ShadowLink provides a suitable benchmark for their evaluation. RQ3: Are ED predictions biased and how can we measure this? 
Our experiments show that all systems under evaluation are often insensitive to changes in the context, i.e., the systems are largely unable to exploit the local context for entity disambiguation and instead rely on the priors learned from the training data. The error analysis presented in Table 4 indicates that the majority of correct answers on the Top subset coincide with the predictions observed on the Neutral subset. On the Shadow subset, the opposite is the case: most of the errors are due to priors, and most of the correct predictions differ from them. Figure 4 shows the number of cases in which overshadowing occurs for each of the systems, i.e., when the model's prediction remains the same for both the Top and Shadow mentions. We see that this effect correlates with the number of cases in which the prediction of the system does not change regardless of the context, i.e., the prediction remains the same even for the Neutral context. This observation confirms our initial hypothesis about the phenomenon: the more common entities not only overshadow the less common ones, but they are also used as default predictions made completely independently of the given context, which we call the system priors. Figure 4 also shows that among the four best EL systems, REL is the most prone to overshadowing and prior bias. This also explains its poor performance on the Shadow subset in comparison with the high performance demonstrated on Tail. AIDA and WAT appear to be more sensitive to the local context, which allows them to achieve better results on the overshadowed entities in comparison to both GENRE and REL. Moreover, AIDA, which outperforms all other systems on the Shadow subset, turns out to be the least affected by the overshadowing phenomenon. These results indicate that the main reason behind the poor ED performance on overshadowed entities is that the systems overrely on the prior bias and fail to incorporate contextual information.

Table 3 (excerpt): precision, recall and F-score per subset; for example, AGDISTIS/MAG (Usbeck et al., 2014) scores 0.14/0.14/0.14 on Shadow, 0.25/0.25/0.25 on Top, and 0.79/0.79/0.79 on Tail.

Table 4: Error analysis, which shows the percentage of errors and correct predictions that either coincide (pred = prior) or differ (pred ≠ prior) from the predictions made for the neutral contexts, which we consider as predictions with the highest prior probability.

Lastly, we also look at the confidence scores for each of the subsets to check whether they can be used as an additional indicator (see Figure 5). Interestingly, the systems have very different distributions of confidence scores. For example, WAT has lower confidence when given neutral samples, which can be used to detect context ambiguity and filter out such samples. However, this approach cannot be used for REL's and GENRE's predictions (GENRE's confidence scores were rescaled before the comparison).
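The prior-reliance analysis above can be expressed as a simple four-way bucketing of predictions, keyed by entity space: correct or incorrect, and identical to or different from the prediction made on the Neutral snippet. A minimal sketch, assuming dictionaries of system predictions and gold labels:

```python
# Bucket predictions by correctness and by agreement with the neutral-context prediction.
def prior_reliance(preds_context, preds_neutral, gold):
    """All three dicts are keyed by entity-space id; values are entity ids."""
    buckets = {"correct_prior": 0, "correct_context": 0,
               "error_prior": 0, "error_other": 0}
    for space, pred in preds_context.items():
        same_as_prior = pred == preds_neutral.get(space)
        correct = pred == gold[space]
        if correct:
            buckets["correct_prior" if same_as_prior else "correct_context"] += 1
        else:
            buckets["error_prior" if same_as_prior else "error_other"] += 1
    total = sum(buckets.values())
    return {k: v / total for k, v in buckets.items()}

# Hypothetical demo: one Shadow example mis-linked to the head entity (a prior-driven error).
gold = {"s1": "e_shadow"}
print(prior_reliance({"s1": "e_top"}, {"s1": "e_top"}, gold))
```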
However, the standard benchmarks used for ED evaluation do not reflect the challenges that are often encountered in practice, such as limited context, long-tail, emerging and complex entities (Meng et al., 2021). Guo and Barbosa (2018) construct two datasets by sampling hard ED examples from Wikipedia and ClueWeb corpora on which a simple baseline using priors does not succeed. Their experiments show that this prior-based baseline achieves a high performance, which also indicates the need for more challenging evaluation datasets. ShadowLink aims to close this gap. In this work, we focus specifically on the long-tail entities since the existing benchmarks are known to be biased towards the head of the distribution, i.e., the popular entities (Ilievski et al., 2018;Guo and Barbosa, 2018). Similarly to ShadowLink, WikiDisamb30 (Ferragina and Scaiella, 2012) contains short text snippets annotated with Wikipedia entities designed for ED evaluation. In contrast to WikiDisamb30, the text snippets in ShadowLink were extracted from web pages outside of Wikipedia to avoid the effects of overfitting since Wikipedia is often used for training language models. Moreover, ShadowLink examples were collected using Wikipedia disambiguation pages as entity spaces while WikiDis-amb30 represents a random sample from Wikipedia that does not allow to examine the effect of overshadowing. The idea of entity spaces was previously introduced by van Erp and Groth (2020), who showed that predicting entity spaces largely improves recall. Their results also hint on the conclusion that disambiguation within entity spaces constitutes a bottleneck in the ED performance. We take this idea further by designing a dataset centered around entity spaces to evaluate ED performance within entity spaces directly. This dataset allows us to measure the gap the state-of-the-art ED systems still have on this task. KORE50 (Hoffart et al., 2012) was created to evaluate the impact of low commonness and high ambiguity on the ED performance but it contains only 50 hand-crafted sentences with 148 entity mentions including ambiguous mentions and longtail entities. ShadowLink continues this line of work, providing a considerably larger number of samples that can be used for training and evaluation of ED approaches. We also introduce a subset of neutral samples designed to uncover the model priors. Table 5 summarizes how ShadowLink differs from the previously introduced datasets for entity disambiguation. Robustness evaluation. Our approach to ED evaluation taps into the fast-growing area of research aimed at assessing model robustness especially relevant for data-driven machine learning techniques. One of the first studies on this topic (Sturm, 2014) argued that the state-of-the art music information retrieval systems show very good performance on the standard benchmarks without the real understanding of the task at hand since their predictions relied solely on the confounds present in the ground truth. Sturm (2014) also coined the term for this phenomena: the "Clever Hans" effect, named after the infamous horse that appeared to solve arithmetic problems while only following unintentional body language cues given by the trainer. More recently, Lapuschkin et al. (2019) showed that the same effect is demonstrated by other state-of-the-art machine learning models, and the standard performance evaluation metrics fail to detect it. Kauffmann et al. 
(2020) further explored this phenomenon, showing that it also affects the reliability of unsupervised models in the field of anomaly detection. Therefore, not surprisingly we also observed this effect in the ED task: Guo and Barbosa (2018) used a rudimentary system that merely learned the prior distribution of entities to disambiguate them, and demonstrated that it performs on par with stateof-the-art approaches. These findings specifically calls for new datasets that allow for a more robust evaluation and deeper analysis of the model performance, similar to the one demonstrated here with ShadowLink. We hope that this paper might inspire similar datasets in other fields, where the priors from large public datasets may also overshadow the local context. Conclusion We introduced ShadowLink, a new benchmark dataset for evaluating entity disambiguation performance, and used it for an extensive analysis of the state-of-the-art systems' results. Our experimental results indicate that all systems under evaluation are prone to rely on their priors, which explains their higher performance on more common entities, and much lower performance on the lexically similar overshadowed entities. Our work thereby shows that the ED task is still far from solved for overshadowed entities, and ShadowLink paves the way for further research in this direction. The shortcomings of existing disambiguation approaches uncovered by the ShadowLink dataset stimulate further research towards developing more robust ED algorithms that are better at exploiting context without overrelying on the prior bias. We would also like to explore ways to account for more context around the entity mentions, and when expanding the context is actually needed.
220840630
s2orc/train
v2
2020-07-29T13:06:22.721Z
2020-07-23T00:00:00.000Z
Comparison Study on the Adsorption Behavior of Chemically Functionalized Graphene Oxide and Graphene Oxide on Cement Chemical functionalization of graphene oxide (GO) is one kind of advanced strategy to eliminate the negative effects on the flowability of cement with GO. The adsorption behavior of admixture on cement plays a vital role in the flowability of cement-based materials. Herein, the comparison study on the adsorption behavior (including adsorption amount, adsorption kinetics, adsorption isotherms and adsorption layer thickness) of three kinds of chemically functionalized graphene oxides (CFGOs) with different polyether amine branched-chain lengths and GO on cement is reported. The results of CFGOs and GO adsorption data on cement particles were all best fitted with the pseudo-second-order kinetic model, and also conformed to the Freundlich isothermal model, indicating that the adsorption of CFGOs and GO on cement both were multilayer type and took place in a heterogeneous manner. The adsorption of CFGOs and GO on cement was not just physical adsorption, but also engaged chemical adsorption. In contrast to GO, the adsorption behavior of CFGOs on cement represented a lesser adsorption amount, weaker adsorption capacity and thinner adsorption layer thickness. Moreover, the longer the branched-chain length of CFGOs, the greater the decreasing degrees of adsorption amount, adsorption capacity and adsorption layer thickness. Due to the consumption of the carboxyl group (-COOH) by chemical functionalization, the anchoring effect of CFGOs was weaker than GO, and the steric hindrance effect generated from branched-chains which weakened the van der Waals forces among CFGOs layers. Moreover, the steric hindrance effect strengthened with the increasing branched-chain length, thus preventing the cement particles from aggregation, which resulted in satisfactory flowability of CFGOs with incorporation of cement rather than GO. Introduction In recent years, carbon materials have been widely utilized to improve various properties of cement-based materials [1][2][3][4][5]. As a new kind of carbon material, graphene oxide (GO) is the intermediate of graphene, which has attracted much research attention because of its high reactivity, high specific surface area and excellent mechanical properties [6][7][8]. The existing research on the performance of cement-based materials considered the introduction of GO and exhibited the enormous potential to enhance the mechanical properties and durability of hardened cement-based materials [9][10][11][12][13][14][15][16][17][18][19][20]. Devi et al. [21] explored the compressive and tensile strengths of the mixtures with 0.08% GO, showing a better result compared to the rest of the mixes, and the sorptivity and permeability of the concrete The different types of CFGOs were obtained by grafting polyether amine (M1000/M2070) with different molecular weights onto GO. The molecular weights of M1000 and M2070 were 1000 and 2000, respectively. The monomer ratios of M1000 and M2070 for CFGO-1, CFGO-2 and CFGO-3 were 1:0, 1:1 and 0:1, respectively. The dosages of the GO and polyether amine in the different CFGOs are listed in Table 2. As illustrated in our previous research [28], CFGOs were successfully synthesized by grafting polyether amine onto GO. In other words, GO and CFGOs represented the different chemical structure, CFGOs could be regarded as that of polyether amine branched on GO, as shown in Figures 1 and 2. 
Graphene oxide (GO) and chemically functionalized graphene oxides (CFGOs) were lab-made. The modified Hummer's method was used to synthesize GO. CFGOs were prepared by the condensation reaction of -COOH on GO with -NH2 in polyether amine. The chemical structures of GO and CFGOs are shown in Figure 1, and the detailed synthesis of GO and CFGOs was described in our previous research [28]. The different types of CFGOs were obtained by grafting polyether amines (M1000/M2070) with different molecular weights onto GO. The molecular weights of M1000 and M2070 were 1000 and 2000, respectively. The monomer ratios of M1000 and M2070 for CFGO-1, CFGO-2 and CFGO-3 were 1:0, 1:1 and 0:1, respectively. The dosages of GO and polyether amine in the different CFGOs are listed in Table 2. As illustrated in our previous research [28], CFGOs were successfully synthesized by grafting polyether amine onto GO. In other words, GO and CFGOs have different chemical structures: a CFGO can be regarded as GO with polyether amine branches, as shown in Figures 1 and 2. In the CFGO structure, polyether amine acted as the branched chains, and the length of the branched chains increased with the polyether amine molecular weight. Additionally, because the reactant dosages differed among the CFGOs, the branched-chain length of CFGO-1 was shorter than that of CFGO-2 and CFGO-3, and the branched-chain length of CFGO-3 was the longest. Flowability Measurement of Cement Paste The flowability of the cement paste with GO/CFGOs was measured according to the Chinese National Standard GB/T 8077-2000 [26,28]. First, 300 g cement, 100 g water and GO/CFGOs were mixed for 3 min. The mixture was then poured into a cone mold (base diameter 60 mm, top diameter 36 mm and height 60 mm) on a cleaned and moist glass plate. The mold was then lifted about 15 cm above the glass plate, and the fresh cement paste collapsed and spread. The spread diameter was measured along two perpendicular directions (d1 and d2), and the paste flowability was taken as (d1 + d2)/2. For time-dependent flowability testing, the cement paste was put back into the mold and covered with a wet towel after each measurement, and before each test the paste was stirred again for 2 min.
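The spread measurement above reduces to a simple average of two perpendicular diameters. The short Python sketch below shows how the flowability and its retention over time can be computed from such readings; the numerical values are illustrative only and are not measurements from this study.

```python
def flowability_mm(d1_mm: float, d2_mm: float) -> float:
    """Paste flowability per GB/T 8077: mean of two perpendicular spread diameters."""
    return (d1_mm + d2_mm) / 2.0


def retention_percent(flow_t_mm: float, flow_0_mm: float) -> float:
    """Flowability retained at time t relative to the initial measurement."""
    return 100.0 * flow_t_mm / flow_0_mm


# Illustrative readings only (not data from this study).
initial = flowability_mm(250.0, 248.0)
after_60 = flowability_mm(205.0, 201.0)
print(f"initial flowability : {initial:.1f} mm")
print(f"retention at 60 min : {retention_percent(after_60, initial):.1f} %")
```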
Adsorption Experiment of GO/CFGOs on Cement Standard aqueous solutions of GO/CFGOs with different concentrations (250, 300, 350, 400, 450, 500, 550, 600 and 650 mg/L) were prepared by dispersing GO/CFGOs in deionized water. The adsorption results of CFGOs on the surface of cement and the apparatus for the total-organic-carbon test are shown in Figure 3, and the adsorption results of GO on the surface of cement were reported in our previous research [26]. The screened cement powder (0.09 g) was added into a beaker flask containing GO/CFGO aqueous solution (60 g). Stirring was continued for different times (10, 20, 30, 60, 90 and 120 min) in the crystal oscillator at 25 °C. The mixture was then vacuum filtered, and the GO/CFGO concentration in the supernatant was measured with a total-organic-carbon analyzer (TOC-II, Elementar Co., Frankfurt, Germany). The adsorption amount of GO/CFGO on the cement was obtained from Equation (1): Qe = (C0 − Ct)·V/m (1), where Qe (mg/g) is the adsorption amount of GO/CFGO per unit mass of cement; C0 (mg/L) and Ct (mg/L) are the initial concentration and the concentration of GO/CFGO at t min; V (mL) is the volume of the solution; and m (g) is the mass of cement.
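Equation (1) is a straightforward mass balance on the solution. The sketch below applies it to hypothetical TOC readings; the 60 g of dilute solution is treated as approximately 60 mL, an assumption not stated explicitly in the text, and the concentrations are invented for illustration.

```python
def adsorbed_amount_mg_per_g(c0_mg_L: float, ct_mg_L: float,
                             volume_mL: float, cement_mass_g: float) -> float:
    """Equation (1): Q = (C0 - Ct) * V / m, with V converted to litres so Q is in mg/g."""
    return (c0_mg_L - ct_mg_L) * (volume_mL / 1000.0) / cement_mass_g


# 0.09 g cement in ~60 mL of solution, as in the experiment; the TOC readings are hypothetical.
q = adsorbed_amount_mg_per_g(c0_mg_L=500.0, ct_mg_L=120.0, volume_mL=60.0, cement_mass_g=0.09)
print(f"Q = {q:.1f} mg/g")  # ~253 mg/g, the same order of magnitude as the reported values
```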
X-ray Photoelectron Spectroscopy (XPS) Measurement of GO/CFGOs on Cement For the XPS measurement, GO/CFGO aqueous dispersions of the same concentration (750 mg/L) were prepared. Then, 0.09 g cement and 60 g of GO/CFGO aqueous dispersion were added into a beaker flask; for the pure cement sample, 0.09 g cement was added to 60 g water. The mixture was vibrated in the oven-controlled crystal oscillator at room temperature for 5 h and then vacuum filtered. The filter cake was used to assess the adsorption layer thickness of GO/CFGOs on the surface of the cement. The XPS analysis was carried out on an AXIS-Ultra instrument from Kratos Analytical (Manchester, UK) using monochromatic Al Kα radiation (225 W, 15 mA, 15 kV) and low-energy electron flooding for charge compensation. To compensate for surface charging effects, binding energies were calibrated using the C 1s hydrocarbon peak at 284.80 eV.
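The adsorption layer thickness is assessed here from the attenuation of the substrate Ca 2p signal. One common way to turn such attenuation into a thickness estimate, shown below as an illustration rather than the authors' actual procedure, is the exponential overlayer model; the inelastic mean free path and the intensity ratios in the example are assumed values.

```python
import math


def overlayer_thickness_nm(i_covered: float, i_bare: float,
                           imfp_nm: float = 2.5, emission_angle_deg: float = 0.0) -> float:
    """Exponential attenuation of the substrate signal by a uniform overlayer:
    I = I0 * exp(-d / (imfp * cos(theta)))  =>  d = -imfp * cos(theta) * ln(I / I0).
    theta is the photoelectron emission angle from the surface normal; the IMFP of
    Ca 2p electrons in the overlayer is an assumed placeholder, not a measured value."""
    theta = math.radians(emission_angle_deg)
    return -imfp_nm * math.cos(theta) * math.log(i_covered / i_bare)


# Hypothetical Ca 2p intensities relative to bare cement (1.00).
print(overlayer_thickness_nm(0.40, 1.00))  # stronger attenuation -> thicker layer (GO-like case)
print(overlayer_thickness_nm(0.70, 1.00))  # weaker attenuation  -> thinner layer (CFGO-like case)
```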
Mechanical Properties of Cement Paste with GO/CFGOs First, GO/CFGOs and water were added in turn to a stainless-steel container and mixed well. Second, the mixture of GO/CFGOs and water was divided into three equal parts. Finally, these three parts were added to the cement at intervals of 3 min and mixed well. For each test, three specimens were immediately cast in 40 mm × 40 mm × 160 mm molds. The specimens were allowed to cure in the mold for 24 h and were then cured in water at 20 ± 2 °C for 6 days and 21 days. The flexural strength was determined using a DKZ-500 concrete three-point flexural strength tester (Tianjin, China) at a loading rate of 0.05 kN/s. The compressive strength was tested using a JES-300 concrete compressive strength tester (Tianjin, China) at a loading rate of 2.4~2.6 MPa/s. To check the reproducibility of the results, three and six samples were tested for the flexural and compressive tests, respectively, and the results were averaged.
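For prisms of this geometry, the strengths are usually obtained from the standard three-point bending and uniaxial compression formulas and then averaged over the specimens, as described above. The sketch below illustrates that calculation; the support span (taken as 100 mm, a common value for 40 mm × 40 mm × 160 mm prisms) and the peak loads are assumptions, since neither is given in the text.

```python
from statistics import mean


def flexural_strength_MPa(peak_load_kN: float, span_mm: float = 100.0,
                          width_mm: float = 40.0, depth_mm: float = 40.0) -> float:
    """Three-point bending: sigma_f = 3*F*L / (2*b*h^2); using N and mm gives MPa directly."""
    return 3.0 * peak_load_kN * 1000.0 * span_mm / (2.0 * width_mm * depth_mm ** 2)


def compressive_strength_MPa(peak_load_kN: float, side_mm: float = 40.0) -> float:
    """Uniaxial compression on a square loading face: sigma_c = F / A."""
    return peak_load_kN * 1000.0 / (side_mm * side_mm)


# Hypothetical peak loads: three flexural and six compressive specimens, averaged as in the text.
flexural = [flexural_strength_MPa(f) for f in (3.1, 3.3, 3.2)]
compressive = [compressive_strength_MPa(f) for f in (68.0, 71.0, 69.5, 70.2, 67.8, 70.6)]
print(f"flexural strength   : {mean(flexural):.2f} MPa")
print(f"compressive strength: {mean(compressive):.2f} MPa")
```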
Dispersibility of GO/CFGOs In order to investigate the dispersibility of GO and CFGOs at different dosages in cement paste, a mini-slump test of the cement paste was carried out at a water-cement ratio of 1:3. As listed in Table 3, GO as an admixture led to a decrease in the flowability of the cement paste, and the flowability decreased further with increasing GO dosage. At a GO dosage of 0.05%, the flowability of the cement paste was 180.3 mm, a reduction of 18.6%. For the CFGOs, however, the trend was the opposite: the flowability of the cement paste increased with the CFGO dosage. This is direct evidence that chemical functionalization of GO is a very efficient way to change the adsorption behavior of GO on cement, and this is further elaborated in the following sections. At the same dosage, the dispersibility of CFGO-3 was superior to that of CFGO-2 and CFGO-1, and CFGO-1 showed the worst dispersibility in cement paste. At a CFGO dosage of 0.05%, the flowability of cement paste incorporating CFGO-3, CFGO-2 and CFGO-1 was 288.5 mm, 253.5 mm and 245.2 mm, respectively, corresponding to increases of 30.2%, 11.4% and 11.7%, respectively. In a word, these results indicate that, in contrast to GO, CFGOs increase the flowability of cement, and CFGOs with longer branched chains bring better flowability. The flowability retention behavior of cement paste with GO, CFGO-1, CFGO-2 and CFGO-3 was investigated every 30 min for 120 min at a dosage of 0.03 wt.%. As shown in Figure 4, the flowability of the cement paste with GO decreased rapidly with time, by 20.0% at 60 min and 44.4% at 120 min. The cement pastes with CFGO-1, CFGO-2 and CFGO-3 all showed good flowability retention, and the retention ability strengthened with increasing branched-chain length. Flexural Strength of Cement Paste with GO/CFGOs The flexural and compressive strengths of cement paste at different curing times with 0.03 wt.% GO or CFGO are shown in Figure 5. The results indicate that the flexural and compressive strengths of the cement paste increased after the addition of GO or CFGOs. At 3 d, the flexural strength of the cement paste with GO was lower than that with CFGOs and increased with branched-chain length (Figure 5a). At 7 d and 28 d, there was little difference in the flexural strength of cement paste with GO and with CFGOs. This means that the chemical functionalization also improved the toughening action of GO in the cement matrix. As shown in Figure 5b, the compressive strength of the cement paste with GO was higher than that with CFGOs and decreased with branched-chain length at 3 d and 7 d; the compressive strengths with GO and with CFGOs were almost equal at 28 d. Figure 6 demonstrates the effect of the GO and CFGO concentration (C0) on Qe at adsorption equilibrium.
As shown in Figure 6, all of the Qe values rose rapidly with increasing C0, and the Qe of GO and the CFGOs approached saturation when C0 was higher than 550 mg/L. At a C0 of 500 mg/L, the corresponding Qe of GO was 277.44 mg/g; for the CFGOs, the values for CFGO-1, CFGO-2 and CFGO-3 were 270.56 mg/g, 255.45 mg/g and 245.00 mg/g, respectively. At higher concentrations, the Qe of GO stayed around 300.00 mg/g, while the Qe of CFGO-1, CFGO-2 and CFGO-3 was about 280.00 mg/g, 270.00 mg/g and 260.00 mg/g, respectively. These results show that the adsorption amount of GO on cement was higher than that of the CFGOs, and that the adsorption amount of the CFGOs on cement decreased with increasing branched-chain length. Adsorption Kinetics of GO/CFGOs on Cement Pseudo-first-order and pseudo-second-order rate models are the most widely used models for solid-liquid interface adsorption [26,29,30,31]; therefore, both models were applied to the adsorption data of GO/CFGOs to explore the time-dependent adsorption process. The two models, written in the linearized forms used for fitting, are given as Equations (2) and (3): ln(Qe − Qt) = ln Qe − K1·t (2) and t/Qt = 1/(K2·Qe²) + t/Qe (3), where K1 (min−1) is the equilibrium rate constant of the pseudo-first-order model; K2 (g/(mg·min)) is the equilibrium rate constant of the pseudo-second-order model; Qe (mg/g) is the adsorption amount at equilibrium; and Qt (mg/g) is the adsorption amount at time t (min).
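A minimal sketch of how Equations (2) and (3) can be fitted to time-resolved uptake data by ordinary least squares is given below; the data points are hypothetical and only illustrate the procedure behind the R² values reported in Table 4.

```python
import numpy as np

# Hypothetical kinetic data (minutes, mg/g); not measurements from this study.
t = np.array([10, 20, 30, 60, 90, 120], dtype=float)
qt = np.array([150.0, 210.0, 240.0, 265.0, 272.0, 275.0])
qe_obs = qt[-1] * 1.02  # rough estimate of the equilibrium uptake


def r_squared(y, y_fit):
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot


# Pseudo-first-order, linearized: ln(Qe - Qt) = ln(Qe) - K1 * t
y1 = np.log(qe_obs - qt)
slope1, intercept1 = np.polyfit(t, y1, 1)
k1 = -slope1
r2_1 = r_squared(y1, slope1 * t + intercept1)

# Pseudo-second-order, linearized: t/Qt = 1/(K2*Qe^2) + t/Qe
y2 = t / qt
slope2, intercept2 = np.polyfit(t, y2, 1)
qe_fit = 1.0 / slope2
k2 = 1.0 / (intercept2 * qe_fit ** 2)
r2_2 = r_squared(y2, slope2 * t + intercept2)

print(f"pseudo-1st-order: K1={k1:.4f} 1/min, R^2={r2_1:.4f}")
print(f"pseudo-2nd-order: K2={k2:.6f} g/(mg*min), Qe={qe_fit:.1f} mg/g, R^2={r2_2:.4f}")
```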
The fitting results of the pseudo-first-order and pseudo-second-order kinetic models at different concentrations of GO, CFGO-1, CFGO-2 and CFGO-3 on cement are shown in Figures 7 and 8, respectively. The linear correlation coefficients (R²) are listed in Table 4. The results show that the R² of the pseudo-second-order kinetic model for GO was more satisfactory than that of the pseudo-first-order kinetic model. For the CFGOs, although the branched-chain lengths differed, the R² of the pseudo-second-order kinetic model was in all cases higher than that of the pseudo-first-order kinetic model. These results indicate that the adsorption of GO and CFGOs cannot be well fitted by the pseudo-first-order kinetic model but agrees with the pseudo-second-order kinetic model. This means that the adsorption of CFGOs on cement was chemically controlled, with chemisorption as the rate-controlling step [26,28,29], the same as for GO. For GO, as we reported before, the -COOH on GO reacts with Ca2+ during the hydration process, producing -COO− groups that act as anchor points on the positively charged sites at the surface of the cement particles [26,32]. For the CFGOs, the adsorption process on cement also involves a chemical reaction, because some -COOH remains in the CFGO structure that did not react with -NH2 in polyether amine during chemical functionalization; once CFGOs are added into the cement system, this residual -COOH also reacts with the metal cations. As illustrated in Table 5 for a C0 of 450 mg/L, the pseudo-second-order rate constant of GO was larger than that of the CFGOs, and the pseudo-second-order rate constant of the CFGOs decreased with branched-chain length. This suggests that the adsorption capacity of GO on cement was stronger than that of the CFGOs, and that the adsorption capacity of the CFGOs on cement weakened with increasing branched-chain length. Adsorption Isotherms of GO/CFGOs on Cement The equilibrium adsorption state is dynamic in nature, as the amount of adsorbate migrating onto the adsorbent is counterbalanced by the amount of adsorbate migrating back into solution. The relation between the amount adsorbed by an adsorbent and the equilibrium concentration of the adsorbate at constant temperature at a solid-liquid interface can be expressed by the linearized Langmuir adsorption isotherm and the Freundlich isotherm [31,33,34,35]; therefore, the adsorption results of GO/CFGOs on cement were fitted with the Langmuir and Freundlich isothermal models. The Langmuir isothermal model assumes that adsorption is of a monolayer type on a homogeneous surface [36] and can be expressed in linearized form as: Ce/Qe = Ce/Qem + 1/(b·Qem) (4). The Freundlich isothermal model is based on the assumption that the adsorbate concentration on the adsorbent surface increases with the concentration of adsorbate in solution, and it is usually used to describe multilayer adsorption and heterogeneous systems [37]. It can be expressed in linearized form as: ln Qe = ln Kf + (1/n)·ln Ce (5), where Qe (mg/g) and Qem (mg/g) are the adsorption amount and the saturated adsorption amount; Ce (mg/L) is the equilibrium concentration; b is a constant contingent on the nature of the adsorbate and adsorbent; and Kf and n are constants related to the adsorption capacity and adsorption intensity.
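The linearized forms above lend themselves to the same least-squares treatment. The sketch below fits both isotherms to hypothetical equilibrium data and compares their R² values, mirroring the comparison reported in Table 6; the numbers are not the study's measurements.

```python
import numpy as np

# Hypothetical equilibrium data (mg/L, mg/g); not measurements from this study.
ce = np.array([40.0, 60.0, 90.0, 140.0, 200.0, 280.0])
qe = np.array([180.0, 210.0, 235.0, 258.0, 272.0, 283.0])


def r_squared(y, y_fit):
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot


# Langmuir, linearized: Ce/Qe = Ce/Qem + 1/(b*Qem)
yL = ce / qe
sL, iL = np.polyfit(ce, yL, 1)
qem, b = 1.0 / sL, sL / iL
r2_L = r_squared(yL, sL * ce + iL)

# Freundlich, linearized: ln Qe = ln Kf + (1/n) * ln Ce
yF = np.log(qe)
sF, iF = np.polyfit(np.log(ce), yF, 1)
kf, n = np.exp(iF), 1.0 / sF
r2_F = r_squared(yF, sF * np.log(ce) + iF)

print(f"Langmuir  : Qem={qem:.1f} mg/g, b={b:.4f} L/mg, R^2={r2_L:.4f}")
print(f"Freundlich: Kf={kf:.1f}, n={n:.2f}, R^2={r2_F:.4f}")
```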
The fitting results of the Langmuir and Freundlich isothermal models for GO, CFGO-1, CFGO-2 and CFGO-3 adsorbed on cement are plotted in Figures 9 and 10, respectively. The linear correlation coefficients (R²) are listed in Table 6. The R² of GO for the Freundlich isothermal model was more satisfactory than that for the Langmuir isothermal model, and the R² of the CFGOs for the Freundlich model was also closer to 1 than that for the Langmuir model. In other words, the experimental results for GO and CFGOs were fitted better by the Freundlich isothermal model than by the Langmuir isothermal model. These results demonstrate that the adsorption of CFGOs and GO on cement occurred in a heterogeneous manner and can be regarded as multilayer adsorption [26,29]. As listed in Table 7, the values of n and Kf for GO adsorbed on cement were larger than those for the CFGOs, and these values also decreased with branched-chain length. The variation of the n and Kf values indicates that the adsorption capacity of GO on cement was stronger than that of the CFGOs, and that the adsorption capacity of the CFGOs on cement weakened with branched-chain length [38]. These results are consistent with the pseudo-second-order rate constants. There were two reasons for the difference in adsorption capacity. Firstly, the chemical reaction between GO and polyether amine consumed -COOH on GO, so the anchoring effect of the CFGOs was weaker than that of GO. Secondly, the van der Waals forces among GO layers on the surface of the cement were strong, whereas for the CFGOs the branched chains provided a steric hindrance effect that weakened the van der Waals forces among the CFGO layers, and this steric hindrance strengthened with branched-chain length. 3.6. XPS Spectra of Cement Surface before and after the Adsorption of GO/CFGOs Figure 11 shows an XPS survey scan of the cement surface before and after the adsorption of GO and CFGOs.
GO and CFGOs mainly contribute C 1s (signal at 284 eV) and O 1s (signal at 532 eV) peaks, while the Ca 2p peak (signal at 347 eV) was observed for pure cement and for cement after the adsorption of GO and CFGOs. It is interesting to note that the Ca 2p signal intensity was strongest for pure cement and decreased after GO or CFGO adsorption on the cement. Additionally, the Ca 2p signal of cement with adsorbed GO was weaker than that with the CFGOs; among the CFGOs, the signal of cement with CFGO-1 was weaker than that with CFGO-2 and CFGO-3, and the signal with CFGO-3 was stronger than that with CFGO-2. These results indicate that the adsorption layer of GO on the cement surface was thicker than that of the CFGOs, and that the adsorption layer of the CFGOs thinned with increasing branched-chain length. Illustration of GO and CFGOs with Incorporation of Cement A schematic illustration of GO and CFGOs incorporated in cement is shown in Figure 12. As displayed in Figure 12a, the aggregation of the cement particles was severe, which resulted from the GO adsorbed on the cement particles: the strong van der Waals forces among the GO layers reduced the spacing among the cement particles. Figure 12b-d shows the corresponding illustrations for CFGO-1, CFGO-2 and CFGO-3 incorporated with cement.
The steric hindrance effect provided by the branched chains weakened the van der Waals forces among the CFGO layers, and this effect strengthened with increasing branched-chain length. Consequently, the spacing between cement particles with incorporated CFGOs was larger than that with incorporated GO. These are the essential reasons why CFGOs improved the flowability of cement whereas GO reduced it.
Conclusions Chemically functionalized graphene oxides (CFGOs) were obtained through a condensation reaction between graphene oxide (GO) and polyether amine. The main template of the CFGOs was the GO sheet, and the branched chains were polyether amines of different molecular weights. CFGOs improved the flowability of cement, whereas GO reduced it. The adsorption data of GO and CFGOs were best fitted by the pseudo-second-order kinetic model and the Freundlich isothermal model. This means that the adsorption of GO and CFGOs on the surface of cement particles occurred in a heterogeneous manner and was multilayer adsorption, and that chemical reactions were involved in the adsorption of GO and CFGOs on cement. The different chemical structures of GO and CFGOs resulted in markedly different adsorption behavior. For GO, the anchoring effect of -COOH and the strong van der Waals forces among the GO layers led to a larger adsorption amount, stronger adsorption capacity and thicker adsorption layer than for the CFGOs, which reduced the spacing among cement particles and subsequently led to their aggregation. This is the reason why the introduction of GO reduced the flowability of cement. As for the CFGOs, because -COOH was consumed by -NH2 in polyether amine, the anchoring effect was weaker than that of GO. The branched chains provided a steric hindrance effect that weakened the van der Waals forces among the CFGO layers, which reduced the adsorption amount, weakened the adsorption capacity and thinned the adsorption layer. Moreover, the steric hindrance effect strengthened with increasing branched-chain length and reduced the aggregation of the cement particles. It is therefore reasonable that the chemical functionalization that generates CFGOs improved the flowability of cement, and that the flowability improved with increasing branched-chain length. The branched chains of CFGOs have a significant impact on the flowability and mechanical properties of cement and on the adsorption behavior on the cement surface. Beyond the length of the branched chains, the performance of cement-based materials could also be adjusted by controlling the category and density of the branched chains; this will be studied in further work. Conflicts of Interest: The authors declare that they have no conflict of interest.
247026420
s2orc/train
v2
2022-02-23T14:09:38.861Z
2022-02-22T00:00:00.000Z
Japan's development assistance for health: Historical trends and prospects for a new era Summary The year 2020 marked an important turning point in Japan's global health policy. While the global health community has been suffering serious damage to sustainable health financing due to the COVID-19 pandemic, an independent commission on Japan's Strategy on Development Assistance for Health (DAH) launched an ambitious policy recommendation to double the amount of Japan's DAH during the post-COVID-19 era. This paper examines historical trends in DAH in Japan over the past 30 years based on published literature and comprehensive DAH tracking data and highlights priority areas for discussion on how DAH can be advanced to ensure equitable and efficient use of limited resources to support the achievement of the Sustainable Development Goals, including universal health coverage and pandemic preparedness, in low- and middle-income countries. Priority areas for discussion include: how and where to focus DAH for equitable health gains; how to provide DAH to support health system strengthening, including pandemic preparedness; and clarifying the role of DAH in global health functions. Introduction The coronavirus disease 2019 (COVID-19) pandemic has sparked an unprecedented level of interest in the past, present, and future of global health financing, in part because of the enormous and ongoing costs to countries around the world, regardless of socioeconomic level, of responding to the pandemic. 1 Global gross domestic product in 2020 fell globally by 3.1% from the previous year. 2 It is projected to grow 5.9% in 2021, 2 but with disproportionate vaccination coverage, the accelerating spread of SARS-CoV-2 variants, and the exhaustion of public health resources, the projection is subject to a great deal of uncertainty. 3 The financial impact of the health crisis has led to long-term economic stagnation in some countries, and progress toward achieving the Sustainable Development Goals (SDGs), including universal health coverage (UHC), has stalled or reversed. 4−6 Development assistance for health (DAH) is a key component of foreign policy for many donor countries and has an important role in supporting improvements in population health and helping to build human capital in low-and middle-income countries (LMICs). 7,8 DAH then can enhance economic development, which in turn promotes and supports health security and the development and access to global public goods. 9 In addition, in response to the health and economic crises caused by COVID-19, DAH has played a unique role in financing emergency health systems in many countries and brought about the rapid expansion of necessary health services. 10 In 2000, Japan hosted the G8 Kyushu-Okinawa Summit, led the establishment of the Global Fund to Fight AIDS, Tuberculosis and Malaria (Global Fund), and highlighted infectious disease control as a major global health agenda. It became a monumental opportunity as G8 countries confirmed the need for additional funding and international partnerships. Two decades later, the year 2020 marked a major turning point in Japan's DAH strategy. To date, Japan has made significant contributions to the global health debate in the areas of infectious diseases, health system strengthening, UHC, and health emergency response. 
11,12 The Basic Design for Peace and Health (BDPH), which was formulated in 2015 as an issue-specific policy for the health sector in the Development Cooperation Charter (a fundamental policy document that details the development cooperation policies of Japan), will soon be revised for the post-COVID-19 era. The current BDPH outlines three basic policies: establishing resilient global health governance that can respond to public health emergencies and disasters; promoting seamless utilization of essential health and medical services and UHC throughout the lifecycle; and leveraging Japanese expertise, experience, and medical products and technology. 13 In order to contribute to this revision, an independent commission chaired by Yasuhisa Shiozaki, former Minister of Health, Labour and Welfare of Japan, with members from the National Diet, government, private sector, academia, and civil society, was set up to review Japan's global health strategy and propose recommendations. 14 Doubling the amount of Japan's DAH within five years was set as the new global health contribution target, as one of several proposals that also included clarification of the governmental command post for DAH strategy development and implementation and active participation in the governance of international development agencies. However, there remains no clear direction as to how DAH can be advanced moving forward. This paper has two objectives. First, we assess the historical trends of Japan's DAH over the last 30 years. Then, we highlight priority areas for discussion that donor countries, including Japan, should address to keep the future of DAH on track to support the achievement of the SDGs (including UHC) as well as pandemic preparedness in LMICs (DAH recipient countries). Priority areas for discussion include how and where to focus DAH for equitable health gains, how to provide DAH to support health system strengthening, including pandemic preparedness, and clarifying DAH's role in funding the core functions of global health, such as global public goods provision and the management of cross-border externalities. They are developed in the context of the demographic and epidemiological transitions in LMICs, the new challenges revealed by the COVID-19 pandemic, and a global political shift toward increasing prioritization of donor funding into aid for global functions. We believe that this paper will inform the revision of the BDPH by proposing new avenues of DAH allocation for Japan to explore. Methods In addition to the published literature, we used data on the latest estimates of DAH for each recipient country published by the Institute for Health Metrics and Evaluation (IHME). 15,16 These estimates rely primarily on open databases of disbursements at the project level (including the Organisation for Economic Co-operation and Development's Creditor Reporting System (OECD CRS)) and, where possible, on direct data collection for individual agencies. For the purposes of this study, DAH, as defined by IHME, refers to disbursements (the amount actually distributed), rather than pledges or commitments (the amount the donors agreed to make available), made by donors to maintain and improve health in LMICs. These data include which health focus areas were targeted, which countries received DAH, and even through which international development agencies and bilateral agencies the funds were distributed.
Health focus areas (including health system strengthening as well as pandemic preparedness) relevant to the projects were uniquely defined by IHME through keyword searches of project descriptions downloaded from the databases. The most recent estimates are for 1990−2018, and those for 2019 and 2020 are projections. DAH is reported in inflation-adjusted 2020 USD. Unless otherwise indicated, this study presents the DAH for 2020 excluding funds for COVID-19, as COVID-19-related disbursements constitute an emergency response, unlike the DAH strategy for peacetime. Also, unless otherwise noted, health system strengthening refers to sector-wide approaches and does not include efforts for a specific health focus area (i.e. the diagonal approach). By IHME's definition, pandemic preparedness is a subset of health system strengthening. Therefore, our working definition of pandemic preparedness is much narrower than the extensive discussion of pandemic preparedness often found in the literature. Data on gross national income (GNI) and population size were extracted from the World Bank's World Development Indicators. 17,18 Data on disability-adjusted life years (DALYs) − an overall health loss indicator that takes into account premature death and disability − were extracted from the Global Burden of Disease Study 2019 (GBD 2019). 19,20 Scale and scope of Japan's DAH in the past and present DAH from Japan has maintained an increasing trend since 1990, the beginning of the analysis period (Figure 1A), with an average annual growth rate of 6.2% during this period, increasing by 19.2 million USD annually. In 2018, 1.3 billion USD of DAH was provided. Twenty years ago, the share of DAH channeled through international development agencies was roughly 50%, whereas in the last few years it has increased from 60% to over 70% (Figure 1B). These increases are mostly explained by the increase in DAH channeled through international development agencies, especially the Global Fund (Figure 1C); the Global Fund now accounts for about 30% of DAH channeled through international development agencies (Figure 1C). The shares of DAH channeled through these international development agencies since 2000 have averaged 32.0%, 21.8%, and 17.5%, respectively (Figure 1D). The share of DAH through the Coalition for Epidemic Preparedness Innovations (CEPI) and Gavi, the Vaccine Alliance (Gavi) is not very large (about 1.1% for CEPI and 2.0% for Gavi in 2018). CEPI is a public-private partnership founded in 2016 that has played a vital role in research and development (R&D) as well as the manufacturing of vaccines through global collaboration, in which Japan was involved as one of the founding members; Gavi is a public-private partnership to promote procurement and delivery of vaccines to LMICs. By health focus area, the increase in DAH since 2000 has mostly corresponded to the control of communicable, child, and maternal diseases (Figure 2A). This may be due to the fact that the Global Fund is an organization whose main mission is combatting the big three infectious diseases: HIV/AIDS, tuberculosis, and malaria. Excluding the 'other' and 'unallocable' health focus areas ('other' captures DAH whose source information was available but could not be assigned to any of the listed health focus areas, while DAH with no health focus area information was designated unallocable), this area now has a share of about 70%, and it is noteworthy that only about 2% of DAH is allocated to non-communicable diseases (NCDs) (Figure 2B). Within health system strengthening, which has recently had a share of about 30%, pandemic preparedness has accounted for only about 5%, except in the last few years (projections) (Figure 2C and D).
Note that pandemic preparedness, as defined by IHME, is a subset of health system strengthening and refers to epidemiological surveillance, contact tracing and control, biosafety measures, early warning, etc. 15 The channels with a particularly large share of DAH from Japan in health system strengthening are Japan International Cooperation Agency (JICA) (43.9% in 2018), WHO (39.6%), development banks (33.3%), and NGOs (19.4%). Among health system strengthening, WHO (32.8%) and JICA (9.0%) have the largest share in pandemic preparedness (as a subset of health system strengthening). While donors, including Japan, have their own methodologies for deciding how to distribute DAH, the criteria for assessing equity in the distribution of their DAH can include national income (an indicator of economic needs) and disease burden (an indicator of health needs). 21,22 Figure 3 shows that the relationship between Japan's DAH per capita and GNI per capita in 2018 is weak. Although the general tendency is for higher GNI per capita to be associated with lower DAH per capita, there is wide variation in the amount of DAH received even among countries with similar levels of GNI. DAH per DALYs in 2018 also varied widely across recipient countries (Figure 4). Priority areas for discussion The SDG era is characterized by the expansion of global goals to include more interdependent and multisectoral areas of focus. In particular, climate change, natural disasters, which are increasing in scale and frequency, as well as refugee crises and conflicts have the potential to directly and indirectly affect health needs, and the DAH landscape must change accordingly. 23 It is not just a matter of increasing the amount of DAH, but a paradigm shift is needed to ensure that limited resources are provided in a more equitable, efficient, and sustainable manner, and that global health goals are achieved in partnership with recipient countries. Specifically, we have established three priority areas for discussion on how DAH should be delivered. How to focus DAH for equitable health gains There are many possible reasons for the variation in the allocation of DAH from donor countries, including Japan, that cannot be explained by economic or health needs. Donors are likely to focus on the expected impact of DAH in terms of changes in health or determinants of health (including health service coverage), the ability (beyond financial factors) of countries to implement and expand services, and the fair distribution of resources and services. 21 In addition, DAH allocations from donor countries may be guided by a number of additional factors, such as historical diplomatic relations, geographic proximity, strategic political interests, and especially in the case of bilateral aid, trade-related considerations. 24,25 A fundamental challenge for donors, including Japan, is critically reviewing each country's own criteria and rationale for DAH allocation. 26 For Japan, in order to meet the SDGs and the UHC principle of leaving no one behind, addressing inequality is essential, and this requires moving beyond eligibility based solely on national income to criteria that include national disease burden, socioeconomic status, national inequality, and each country's ability to address them. It is also important to consider how to deliver aid to marginalized populations in each country, including migrants and refugees. The size of most vulnerable populations should also be considered in DAH allocation calculations and strategies. 
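As a concrete illustration of how the allocation metrics behind Figures 3 and 4 (DAH per capita against GNI per capita, and DAH per DALY) can be assembled from the data sources described in the Methods, a minimal sketch is given below. The file names and column labels are hypothetical placeholders for country-level extracts from IHME, the World Development Indicators, and GBD 2019, not the providers' actual schemas.

```python
import pandas as pd

# Hypothetical country-level extracts; columns are placeholders, not the providers' real schemas.
dah = pd.read_csv("japan_dah_2018.csv")      # columns: iso3, dah_usd
wdi = pd.read_csv("wdi_2018.csv")            # columns: iso3, gni_per_capita_usd, population
dalys = pd.read_csv("gbd2019_dalys.csv")     # columns: iso3, dalys

df = dah.merge(wdi, on="iso3").merge(dalys, on="iso3")
df["dah_per_capita"] = df["dah_usd"] / df["population"]
df["dah_per_daly"] = df["dah_usd"] / df["dalys"]

# Strength of the association between economic need and allocation (cf. Figure 3).
corr = df["dah_per_capita"].corr(df["gni_per_capita_usd"], method="spearman")
print(f"Spearman correlation, DAH per capita vs GNI per capita: {corr:.2f}")
print(df.sort_values("dah_per_daly", ascending=False)
        [["iso3", "dah_per_capita", "dah_per_daly"]].head())
```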
26 Where to focus DAH for equitable health gains There is considerable debate about which health areas DAH from donors should prioritize. DAH allocations from major donors, including Japan, are known not to be closely linked to disease burden. 26 While communicable, child, and maternal diseases continue to be the main focus of Japan's DAH, NCDs accounted for 33.9%, 55.2%, and 78.9% of the total burden of disease (measured in DALYs) in low, lower-middle, and upper-middle income countries (defined by the World Bank) in 2019. 19 There are important arguments for why Japan and other major donor countries should increase their investments in NCDs. 27,28 For example, the GBD 2019 study highlighted that some NCDs, which have continued to increase globally over the past three decades, are risk factors for severe COVID-19 illness and are driving the increase in deaths from COVID-19. This suggests that insufficient action has been taken to address key risk factors for NCDs such as obesity, hypertension, high blood sugar, smoking, and alcohol, and that the world needs to take concerted action. 30,31 This might include supporting the development and implementation of new policies to promote population health, such as regulation, taxation, and subsidies. 32,33 The disease burden of most NCDs, such as cardiovascular diseases, neoplasms, diabetes, neurological disorders (e.g. Alzheimer's disease), and mental disorders, increases with age. 19 Even in LMICs, more attention needs to be paid to the needs of aging populations in order to effectively respond to long-term changes in disease structure. 34 Importantly, greater investment in NCDs does not justify sacrificing funding for communicable, child, and maternal diseases. Evaluating where to direct DAH from donors should reflect an assessment of the potential health benefits and cost-effectiveness of aid programmes, 35−37 the comparative advantages of the aid implementation agencies or channels, and considerations tailored to the context of each recipient country's health needs. How to provide DAH to support health system strengthening, including pandemic preparedness While more diseases and sectors are becoming targets for DAH investments from donors, without meaningful investment in strengthening key health system pillars (e.g. WHO's building blocks of service delivery, health workforce, health information systems, access to essential medicines, financing, and leadership/governance 38 ), health gains are less likely to be sustained and achieving UHC and other SDGs will be more costly. 39,40 Aligning DAH spending with recipient countries' priorities will also support their efforts to improve population health outcomes and achieve global health goals. Investing in health system strengthening and broader system support is one way to do this and can help align long-term goals. 26,41 Donors must consider that there are mismatches between current DAH allocations and recipient countries' priorities (e.g. based on national health strategic plans). 42,43 COVID-19 highlights an important role that health systems play in ensuring health security.
40 Reducing mortality from COVID-19 will require, at the very least, a strong health system with sufficient capacity to test, trace, and treat patients and the ability to obtain and deliver vaccines quickly and efficiently. 44 The impact of COVID-19 on providing essential health services has been significant, largely because of the need to re-organize health care resources, including health care workers, equipment at health care facilities, services, data, and funding, to meet the urgent demands of the pandemic, making it much more difficult for patients to safely access health care services. 45 What is the role of DAH in global health functions In 2013, 5.5% of DAH was invested in the management of cross-border externalities, including pandemic preparedness and control of antimicrobial resistance; this increased to 10.2% in 2015 and then declined to 7.2% in 2017. 47 Fully addressing the impact of a pandemic in most LMICs will require not only a strong health system that can respond, but also affordable access to critical tools such as vaccines. As highlighted by the June 2021 report of the G20 High Level Independent Panel on Financing the Global Commons for Pandemic Preparedness and Response, 48 concerted action to invest in global public goods is essential to mitigate the health and economic losses associated with the COVID-19 pandemic and to prevent and rapidly respond to the next global health emergency. In addition to CEPI, which plays an important role in generating global public goods for those in need, including vaccine development, relatively large DAH shares from donors in global health functions are provided by WHO (62% in 2013), the Joint United Nations Programme on HIV/AIDS (UNAIDS) (40%), the United Nations Population Fund (UNFPA) (22%), and Gavi (20%). 46 As of 2018, Japan had a very small share of its DAH channeled through both CEPI and Gavi. Additionally, at present there is no established methodology for monitoring funding for global public goods, and such a mechanism needs to be established in the future, including an agreed definition of global public goods and appropriate data collection. Conclusion According to the previous literature, Japan has disbursed a total of 2.3 billion USD toward addressing the health-related effects of COVID-19 in LMICs. 15 This figure is the largest among all donor countries and international development agencies and is equivalent to 1.8 times Japan's DAH in peacetime (2018), demonstrating that Japan can rapidly scale up its resources as needed. However, it cannot be ruled out that the economic downturn associated with the COVID-19 pandemic may affect policy decisions by donor countries seeking to maintain record development assistance resources for LMICs. 49 The United Kingdom (UK), for example, has already cut its aid budget in order to prioritize addressing domestic challenges, 50 raising concerns that this will affect the health systems of the countries that have been receiving UK aid, leaving them vulnerable to disruption of foreign aid. 51,52 Nevertheless, as the COVID-19 pandemic has progressed over time, there have been even more calls for assistance to LMICs. 53 These requests are not limited to COVID-19; they also reflect efforts to achieve global goals such as the SDGs, including UHC, and to address ever-changing global health challenges such as climate change, refugee crises, conflict, terrorism, and emerging infectious diseases.
23 Therefore, priority areas for discussion to ensure equitable and efficient use of limited resources should include: how and where to focus DAH for equitable health gains; how to provide DAH to support health system strengthening, including pandemic preparedness; and clarifying the role of DAH in global health functions, including global public goods provision.
73328010
s2orc/train
v2
2018-04-13T11:32:43.349Z
2013-01-16T00:00:00.000Z
AN AUDIT OF THE PHYSIOTHERAPY MANAGEMENT OF PARAPLEGIC PATIENTS WITH SACRAL PRESSURE SORES Correspondence Author: Denisha Pather University of the Witwatersrand Faculty of Health Sciences Physiotherapy Department 7 York Road, Parktown 2193 Johannesburg Email: denishapather@hotmail.com ABSTRACT: Introduction: Pressure sores are the most common complication post spinal cord injury that requires patients to be on bed rest. Bed rest delays rehabilitation and may lead to other complications associated with immobility. This study sought to establish the treatment interventions physiotherapists provide to patients with sacral pressure sores and the factors that they consider when deciding whether the patient should receive physiotherapy in the ward or gym. Methods: This was a questionnaire-based survey of physiotherapists working in spinal cord injury rehabilitation units in South Africa. The self-designed questionnaire was sent to all the main spinal rehabilitation units in the country (14), located in the Gauteng, Kwa-Zulu Natal, Western Cape, Eastern Cape and Free State provinces. Results: Thirty-nine physiotherapists from a total of 51 completed the questionnaires (76% response rate). The most common treatment practice for patients with sacral pressure sores was bed rest (98%). The most common physiotherapy practices (reported by at least 70% of respondents) were upper limb muscle strengthening, upper and lower limb passive movements, positioning into prone and side lying, and passive stretching. The choice of treatment environment was influenced by doctors' orders and the size, grade and duration of the pressure sores. Conclusion: Direct involvement in pressure sore management in South Africa seems to be less than in other parts of the world. If we are to minimise the impact of pressure sores, more focus appears to be needed on gait re-education, standardised ADL programmes and patient treatment in the gym to possibly maximise healing and rehabilitation. INTRODUCTION Pressure sores are the commonest complication post spinal cord injury (Aito 2003). Patients with pressure sores demonstrate significantly impaired physical and social function, self-care and mobility (Franks et al 2002). Mortality is associated with pressure sores; however, it should be noted that most often pressure sores do not cause death; rather, the pressure sore is associated with a sequential decline in health status which is then associated with mortality (Allman et al 1995; Thomas et al 1996). There is a dearth of information on the cost of pressure sores in South Africa. The national pressure ulcer advisory panel in America documented the incidence of pressure sores among spinal cord injured patients to be around 62% (NPUAP 2001). The prevalence of pressure sores post spinal cord injury remains worryingly high, with 27% being established by Chen et al (2005) and 38% by Ash (2002) in the United Kingdom. Thirty two percent of spinal cord injured patients were admitted to the hospital and rehabilitation setting with preexisting pressure sores (Ash 2002). About 46% of pressure sores are sacral sores (Ash 2002) and about two thirds of these occur in the pelvic region, i.e. affecting the sacrum, coccyx, ischial tuberosities and trochanters (Garber and Rintala 2003).
Morbidities commonly associated with pressure sores include pain, depression, local infection, anaemia, osteomyelitis, and sepsis (Redelings et al 2005; Meaume et al 2005; Roth et al 2004; Scivoletto et al 2004). The presence or development of a pressure sore can increase the length of a patient's hospital stay by an average of 10.8 days (Scott et al 2006), and the increased hospital stay is associated with higher costs and an increased incidence of nosocomial infection and/or other complications (Allman et al 1999). The average hospital treatment cost associated with stage IV pressure sores and related complications was US$129,248 for hospital-acquired ulcers during one admission, and US$124,327 for community-acquired ulcers over an average of four admissions among spinal cord injured patients (Brem et al 2010). The cost of treating pressure ulcers is 2.5 times the cost of preventing them (Oot-Giromini 1989). The high number of ischial tuberosity pressure sores among paraplegics is because patients with paraplegia exert 18.8 mmHg higher interface pressure over the ischial tuberosities than unaffected people (Markhous et al 2007) due to limited postural stability. When a patient develops pressure sores of grade II and above, especially in the sacral region, the medical prescription of choice tends to be bed rest (Rappl 2008; Virani et al 2004). The bed rest is usually for a prolonged period of time due to the slow healing rates of pressure sores (Rappl 2008), and this can decrease patients' functional outcomes due to immobility-associated complications. Statistical Analysis All data needed for the objectives were analysed using descriptive statistics and were presented either as numbers and frequencies in tables or using graphs. Data were computed using International Business Machines Statistical Package for the Social Sciences (IBM SPSS) version 19. Demographics of the Study Sample and Response Rate The physiotherapists in this study were from 14 spinal rehabilitation facilities from five provinces in the country, namely Gauteng, KwaZulu Natal, Western Cape, Eastern Cape and the Free State. A total of 51 (the total number of physiotherapists in the spinal rehabilitation facilities) questionnaires were sent out and 39 were received back, which amounts to a 76% response rate. Of the 39 respondents, 27 (69%) were from the private sector and the remainder from the public sector. The sample was made up of 98% females and most of the participants had ≤5 years experience. Some sections of the questionnaire were not completed by some of the respondents and hence the total (n) for each section varies. Since this was a cross-sectional descriptive study, the available data were computed. Use of Protocols in the Rehabilitation Centres Twenty physiotherapists (51%) reported having physiotherapy treatment protocols for the treatment of patients with sacral pressure sores. However, in some cases physiotherapists working in the same facilities provided different responses on the presence of treatment protocols.
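To make the descriptive tabulation described above concrete, the short sketch below reproduces the kind of response-rate and frequency summary reported for the survey. It is a minimal illustration in Python rather than the authors' IBM SPSS workflow; the 39/51 returns and the 27/12 sector split are taken from the text, while the helper function and labels are illustrative.

```python
# Illustrative descriptive summary in the spirit of the survey analysis.
# The 39/51 return figure and the sector split are quoted in the text.

sent_out = 51
returned = 39
response_rate = 100 * returned / sent_out
print(f"Response rate: {response_rate:.0f}%")  # ~76%

# Categorical responses (work sector of the 39 respondents)
sector = ["private"] * 27 + ["public"] * 12

def frequency_table(values):
    """Return counts and rounded percentages for each category."""
    counts = {}
    for v in values:
        counts[v] = counts.get(v, 0) + 1
    total = len(values)
    return {k: (n, round(100 * n / total)) for k, n in counts.items()}

print(frequency_table(sector))  # {'private': (27, 69), 'public': (12, 31)}
```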
In South Africa, epidemiological data on pressure sores is limited and no pu blished studies from South Africa could be found that established either the direct or indirect intervention by physio therapists in spinal cord injured patients with pressure sores. The aim of the study was thus to determine how patients with paraplegia with sacral pressure sores are being managed by physiotherapists and to establish the factors that physiothera pists take into account when deciding the treatment environment in which to manage these patients. Study Design and Participants A descriptive cross sectional design using a questionnaire was used for data collection. Physiotherapists for this study were from all the 14 specialised spinal rehabilitation facilities i.e. hospitals/ clinics/practices in South Africa, which rehabilitate patients with spinal cord injuries (SCI). For inclusion in the study, physiotherapists were supposed to be involved in the treatment of patients with spinal cord injuries and were not locum or temporary employees at the rehabilitation hospitals. All physiotherapists meeting the inclusion criteria were considered for the study. Questionnaire Development For data collection, a selfdesigned questionnaire was developed. The ques tionnaire was developed from published clinical guidelines (literature) on the management of patients with SCI. The content validation was done using a panel of four experienced physiotherapists (greater than five years working with SCI) in the field of neurology. This process also checked that the questions in the questionnaire were appropriate for the South African context. A round table discussion was held to reach agreement on which questions were to be included in the questionnaire. The questionnaire covered in part the following aspects: the demographic details of the physiotherapists and their level of experience, the use of proto cols for management of patients with sacral pressure sores and establishing if the participating physiotherapists were involved in direct wound care management of sacral pressure sores. The questionnaire also sought to establish whether bed rest was often prescribed for pressure sores above grade II and the length of time this tended to be, whether the patients were receiving treatment from the therapist when in the bed or gym and the physiotherapy techniques that they used. A pilot study was done to check the physiotherapists' understanding of the questionnaire and to iron out any unfore seen data collection difficulties. The questionnaire did not seek information on pressure sore incidence rates. Ethical Considerations Ethical clearance was applied for and obtained from the committee for Research on Human Subjects of the University of the Witwatersrand. Con fiden tiality of all information collected was ensured as the questionnaire did not require that the health professional state their name or put any identifiable data on the questionnaire. Procedure The head of each spinal rehabilitation facility was contacted telephonically to inform them of the study. The aims and objectives of the study were explained and any questions they may have had at that point were addressed. The email addresses of each employed physiotherapist who fitted the inclusion criteria were then obtained. The questionnaires were then emailed to all the study participants. 
The questionnaires contained an information letter which described the exact details of the study and also stated that, by completing the questionnaire, they were consenting to participating in the study. It requested that all responses were to be returned within two weeks. After a further three weeks a second reminder was sent out to all the participants. To ensure anonymity, email responses were sent to a different person who then printed them and gave them to the researcher. The interventions covered in protocols by the various hospitals for patients with pressure sores were placed into themes and are shown in Figure 1. Positioning of patients in prone was the commonest (n=8) intervention followed within a set protocol. From those who had protocols, 45% said the intervention followed depended on the grade of the pressure sore, while 40% said that the interventions remained the same irrespective of the grade of the pressure sore, and a further 15% said that they were uncertain on whether the interventions changed with changing grades of pressure sores. Physiotherapists' Involvement in Wound Care Management The majority of the study sample (62%) stated that they were not involved in direct wound care management. The distribution of the modalities used by therapists to manage pressure sores is shown in Figure 2. Ultrasound and laser were the commonest (54%) modalities used by physiotherapists to manage pressure sores. Interventions Provided When Patients Were on Bed Rest or in the Gym Ninety two percent of the physiotherapists (n=36) prescribed bed rest for paraplegic patients with sacral pressure sores and 98% (n=38) treated the patients when they were on bed rest. The period of bed rest ranged from hours to months depending on the severity of the pressure sore. The interventions provided to patients with sacral pressure sores while they were on bed rest or in the gym are shown in Table 1. Upper limb muscle strengthening was the most common intervention provided to the patients while on bed rest (94%) and when in the gym (70%). The majority of the therapists (98%) stated that their patients received treatment in the gym setting as well as in the ward. Table 2 shows the factors that physiotherapists felt influenced their decision on whether to treat patients in the ward or the gym. The majority of the physiotherapists (71%) reported that the doctors' orders influenced their decisions on whether the patient would receive treatment in the gym or the ward. Figure 3 shows the rationale physiotherapists used when choosing physiotherapy interventions. The choice of physiotherapy modality for use when managing paraplegic patients with pressure sores was guided by, among other factors, their past clinical experience (71%). Physiotherapists' Perceived Level of Knowledge of Pressure Sore Management Only 12 (31%) of the physiotherapists felt their knowledge of the management of patients with pressure sores was adequate. The reasons given for this perceived inadequacy in knowledge are shown in Figure 4. The most common reason given for the perceived inadequacy of knowledge was poor knowledge of direct wound care management. Use of Protocols and Involvement in the Treatment of Patients with Sacral Pressure Sores Fifty one percent of respondents stated that they had protocols they followed when treating patients. However, the presence or absence of protocols varied between respondents from the same facilities, which indicated either a lack of set protocols or different interpretations of the word protocol. A protocol usually consists of a set of best-practice guidelines which are to be followed for certain conditions (Field and Lohr 1990). The respondents may have taken hospital protocol to mean the same as principles followed.
Principles for the healing of pressure sores such as relieving pressure and reducing friction are followed by all therapists, however protocols in terms of: at which stage the patient is able to sit and for what time periods, the different positions the Wall suction dressings or vacuum dressings 2 5 patient can assume and direct treatment interventions do not appear to be set out in the various hospital environments. It is therefore possible that the 51% of physiotherapists that stated that they used protocols may not be an accurate percentage. A small percentage (38%) of the study sample reported being involved in direct wound care management. This differs to Guihan et al (2009)'s findings where they reported more than 75% of the physiotherapists to be involved in direct wound care. Eighteen percent of the respondents from this study used electrotherapy to manage pressure sores and the modalities most commonly used were ultrasound and laser. It should however be noted that the use of both laser therapy and ultrasound has not been shown to have conclusive benefits in pressure sore management which might point towards poor use of evidence by therapists when managing patients with pressure sores (Regan et al 2009;Reddy et al 2008). Physiotherapy Interventions for the Paraplegic Patient with Sacral Pressure Sores It was found to be common practice to place patients on bed rest if they develop sacral pressure sores. This appears to be a global practice as seen in the literature for the management of sacral pressure sores of grade II and above (Goodman et al 1999;New et al 2004;Post et al 2005). Despite patients being prescribed bed rest, the majority of the respondents (n=38) stated that the patients still received treatment in both the ward and gym settings meaning patients were taken to the gym for physiotherapy when on bed rest. Interventions that were carried out by 70% or more of the physiotherapists were taken to represent "common practice", which is fairly similar to the benchmark of 75 % which was set as usual practice in Guihan et al (2009)'s study. Seventy percent of the physiotherapists indicated carrying out the following interventions when the patient was in bed: upper limb muscle strengthening, lower limb passive movements, positioning into prone and side lying as well as upper limb passive movements and passive stretching. Interventions indicated to be done in the gym setting were the same as for when on bed rest except for bed mobility training and the use of a tilt table for passive standing. The strengthening of upper limb muscles is in keeping with general rehabilitation principles that are carried out during the rehabilitation phase of a patient with SCI (Bromley 2006;Somers 2001). However upper limb muscle strengthening in the form of functional training is needed to gain functional independence, i.e. in the form of transfers, bed mobility and gait reeducation (Kloostermann et al 2009;Somers 2001). In this sample, functional upper limb strengthening does not appear to be common practice except for bed mobility practice which occurs in the gym (n=28) which could affect patient ability to pressure relief to prevent pressure sores. Standing interventions are an integral component of the rehabilitation phase post SCI to improve bone mineral density, reduce spasticity, improve digestive function and also to further prevent sacral pressure sores by removing pressure from the sacrum (BieringSoering et al 2009;Bromley 2006;WannHansson et al 2007). 
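The 70% "common practice" benchmark described above can be expressed as a simple filter. The sketch below applies it to a handful of intervention percentages quoted in the surrounding text for the ward setting; the dictionary is an illustrative subset, not a reproduction of the full Table 1.

```python
# Flag interventions reported by at least 70% of respondents as "common practice",
# mirroring the benchmark used in the discussion (values quoted in the text, ward setting).
reported = {
    "upper limb muscle strengthening": 94,
    "lower limb passive movements": 70,
    "sitting balance re-education": 21,
    "positioning into high sitting": 15,
}

COMMON_PRACTICE_THRESHOLD = 70  # percent of respondents

common_practice = [name for name, pct in reported.items()
                   if pct >= COMMON_PRACTICE_THRESHOLD]
print(common_practice)
# ['upper limb muscle strengthening', 'lower limb passive movements']
```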
How ever standing interventions for a SCI patient with a sacral pressure sore do not appear to constitute common practice in this study sample for the patients being treated in the ward environment. However, in the gym setting, the use of the tilt table constituted common practice (n=27). Other standing interventions such as the use of the standing frame (n=21) and sit to stand practice (n=16) were not common. In the ward setting, no standing interventions were indicated which again could possibly predispose patients to pressure sore development. Twentyone percent of the physio therapists indicated sitting balance re education as being done in the ward and 15% (n=6) indicated the positioning of the patient into high sitting. This is not in keeping with the Agency for Health Care Policy and Research Public Service (AHCPRs) guidelines which state that a patient with a pressure sore should still be encouraged to sit once a seating assessment has been done (Bergstrom et al 1994). These low percentages can be interpreted to mean a possible high risk for pressure sore development for this cohort of patients. Rapid healing of grade IIIIV pressure sores occurs when patients with sacral sores are seated correctly in positions that encourage weight shift onto the thighs and off the sacrum, i.e. upright or forward lean positions (Rosenthal 2003). Physiotherapists in Guihan et al (2009)'s study were routinely involved in the prevention of new pressure sores by means of pressure sore education. Eight percent of this sample indicated being involved in group therapy and education, however, this aspect was not fully explored in this study and that could possibly explain the small percentage. Factors Taken into Consideration When Deciding the Environment in which to Manage Patients The majority of the physiotherapists (71%) stated that doctors' orders were a factor in deciding whether patients should be taken to the gym or not and hence what could be accomplished with the patients. This is particularly worrying in our setting where physiotherapists have first line practitioner status and does not indicate good team work where decision making should be consultative. Similar findings were established by Guihan et al (2009) where direct wound care involvement was only done on the doctor's orders, although other inter ventions such as sitting periods were guided by protocols. Guihan et al (2009)'s study found that the reasons for putting patients on bed rest included the presence of infection, the patient being attached to a wall suction unit or if the patient had vacuum dressings in situ. Patients with infections would need periods of isolation and a patient attached to a wall suction unit would not be able to be moved. The participants took the grade, duration and size of the pressure sore into account before taking the patient to the gym. Pressure sores are associated with increased levels of pain and spinal rehabilitation is perceived to be painful by patients (Pellatt 2007). It is therefore possible that physiotherapists may be cautious about taking a patient who is already in pain to the gym as this may affect their adherence to therapy. In addition, the presence of pain may contribute to the development or worsen ing of pressure sores (Byrne et al 1996). Spasticity interferes with functioning in patients leading to further reduction in activity (Hasima et al 2007). 
This decrease in the level of mobility is then a factor which contributes to the worsening or development of further pressure sores (Gelis et al 2009;Rodriquez and Garber 1994;Bryne and Salzberg 1996). This is in keeping with the caution exercised by physiotherapists when deciding whether treatment should be done in the gym or in the ward when spasticity is present. Spasticity may lead to increased levels of shear and friction which would impose limits on patient transfers in order to prevent worsening of the pressure sore. Eighteen percent of the physio therapists indicated that bowel and bladder continence of a patient was considered in management. Sacral pressure sores are more difficult to prevent or manage in patients with incontinence because the skin becomes over hydrated and more susceptible to shearing and friction forces (Beldon 2008). Uncontrolled urine or faecal incontinence affect treatment environment options as they are recognised risk factors for the development of pressure sores (Gelis et al 2009;Rodriquez and Garber 1994;Byrne and Salzberg 1996;Beldon 2008). The majority of the physiotherapists indicated that their treatment inter ventions for patients with sacral pressure sores were guided by past clinical expe riences and the successful experiences of their colleagues. Only 18% (n=7) of the physio therapists reported using evidence based approaches to their treatment. This is worrying especially given the current emphasis on the need to base all our physiotherapy treatment modalities on evidence. Limitations of the study The study was purely descriptive and only sought to establish the treatment interventions physiotherapists provide to patients with sacral pressure sores and the factors that they consider when deciding whether the patient should receive physiotherapy in the ward or the gym. Consequently it does not compare the use of protocols and interventions between the private and state funded hospitals. The disparities in funding between these two institutions could possibly impact on the outcomes of the study objectives. It would also have added more depth to the study if the factors associated with the interventions could be established using regression analysis, this was however not possible given this study design. CONCLUSION AND RECOMMEN-DATIONS Improved education regarding the benefits and applications of direct wound care modalities need to be given to physiotherapists either during undergraduate education or with post graduate courses. Direct involve ment in pressure sore management in South Africa seem to be less than in other parts of the world. There is need to encourage more gait reeducation and standardised ADL programmes to possibly maximise healing and rehabilitation as well as to encourage treatment in the gym environment as often as possible as opposed to being treated in the ward to minimise pressure sore impact. The rehabilitation team should work together to determine a goal for these patients with a programme that eliminates pres sure but at the same time does not impede functional improvements.
117009470
s2orc/train
v2
2019-04-14T01:59:28.449Z
2002-02-21T00:00:00.000Z
Interlayer Exchange Coupling in Semiconductor Magnetic/Nonmagnetic Superlattices The interlayer spin correlations in the magnetic/non-magnetic semiconductor superlattices are reviewed. The experimental evidences of interlayer exchange coupling in different all-semiconductor structures, based on neutronographic and magnetic studies, are presented. A tight-binding model is used to explain interaction transfer across the non-magnetic block without the assistance of carriers in ferromagnetic EuS/PbS and antiferromagnetic EuTe/PbTe systems. INTRODUCTION The development of sensitive read devices consisting of magnetic layers boosted the extensive studies of electrical and magnetic properties of multilayer structures. In late 1980s, two discoveries contributed to the further increase of the potential storage capacity of magnetic materials. These were the "giant magneto-resistance", [ 1 ], and the antiferromagnetic (AFM) coupling between the ferromagnetic (FM) Fe layers separated by a nonmagnetic Cr layer, [ 2 ]. Further, it was established, [ 3 ], that it is the AFM coupling in FM layer structures that leads to the giant magnetoresistance effect (as shown schematically in Fig.1). The ANTIFERROMAGNETIC correlations between FERROMAGNETIC layers separated by a non-magnetic spacer, which in the case of metallic structures proved to play a key role in many technological applications, such as magneto-resistive sensors and magneto-optical devices, was recently discovered also in all-semiconductor, nearly insulating, EuS/PbS superlattices (SLs), [ 4 ]. This is of great interest, since the semiconductor structures have advantages over the all-metal ones owing to the possibility of controlling the carrier concentration by temperature, light or external electric field. The coupling between FM layers was also observed in another all-semiconductor system, i.e., in multilayers made of GaMnAs, the newest generation III-V-based FM diluted magnetic semiconductor, with the (Al,Ga)As or GaAs spacers, [5][6][7]. While the interlayer coupling in GaMnAs-based structures, with high concentration of free carriers, can be explained, at least qualitatively, in terms of the models tailored for metallic systems (compare [ 8 ] and the references therein), the results for EuS/PbS SLs point to a different mechanism capable to transfer magnetic interactions across thick non-magnetic layers without the assistance of mobile carriers. It should be emphasized that the interlayer exchange coupling was observed in other all-semiconductor structures, i.e., in the ANTIFERROMAGNETIC short period (111) SLs: EuTe/PbTe, MnTe/CdTe, MnTe/ZnTe, [9][10][11][12][13], in which not only the density of carriers is several orders of magnitude lower than in metals but also another factor playing an essential role in the known theories of the interlayer coupling -the net layer magnetization -is absent. For the AFM layers the notions of AFM and FM interlayer coupling are not applicable. Still, in these structures there are possible two types of co-linear correlations, i.e., the identical (in-phase) and reversed (out-of-phase) spin orientations in successive layers, as shown in Fig.2. . Fig.2: The in-phase (a) and out-of-phase (b) colinear spin structures for the correlated AFM layers Both types of these correlations were observed in the neutron scattering experiments for SLs, in which the magnetic material was MnTe or EuTe. 
MnTe is a type III antiferromagnet, which in MnTe/CdTe SLs, due to the tetragonal distortion of the lattice, forms a helical spin structure. On the other hand the EuTe layers in EuTe/PbTe SLs exhibit at helium temperatures the type II AFM structure with Eu spins ferromagnetically ordered in (111) planes, which are in turn antiferromagnetically coupled one to each other. In this review we will first present a model capable to explain the interlayer exchange coupling in both, FM and AFM, IV-VI-based semiconductor magnetic/nonmagnetic SLs. Then, we will present the neutron scattering techniques, which provide most of the convincing evidences of the interlayer exchange coupling in all-semiconductor multilayer systems. It should be noted that the only research tool capable of detecting correlations between AFM layers is the neutron diffraction. Finally we will present the results of the neutronographic studies and the comparison with the theoretical description. THEORETICAL MODEL Several attempts to explain the interlayer exchange coupling in all-semiconductor structures have been reported in the literature. For the II-VI zinc-blende SLs with MnTe magnetic barriers, the exchange coupling mediated by shallow donor impurities located in the nonmagnetic quantum wells was proposed, [ 14,15 ]. These models do not apply, however, to IV-VI structures, since in PbTe and PbS localized shallow impurity states were never detected. For the latter the interlayer spin-spin interactions mediated by valence-band electrons were suggested, [ 16]. The results obtained within the thigh-binding model have explained the origin of the interlayer correlations in the AFM EuTe/PbTe (111) SLs as well as in the FM EuS/PbS (100) SLs, with no localized impurity states. It should be noted that this model is based on the total energy calculations, which do not focus on a particular interaction mechanism, but account globally for the spin-dependent structure of valence bands. In [ 16 ] the total electronic energies for two magnetic SLs with different spin configurations were compared: one with the magnetic period equal to the crystallographic SL period ("in-phase" interlayer coupling, i.e., identical spin configurations in successive magnetic layers -like in Fig.1(a) and Fig.2(a) ) and the other with the double magnetic a) "in-phase" interlayer coupling b) "out-of-phase" coupling period ("out-of-phase" coupling, i.e., opposite spin configurations in successive magnetic layers - Fig.1(b) or Fig.2(b) ). These calculations were performed for all studied experimentally IV-VI structures, i.e., the grown on BaF 2 substrate (111) EuX/PbX (where X=Te or S) SLs and the grown on KCl along the [001] crystallographic axis EuS/PbS SLs. In [ 16 ] it was assumed that the proper description of the band structure of a (EuX) m /(PbX) n SL is reached, when the Hamiltonian reproduces in the n=0 and m=0 limits the known band structures of the bulk constituent magnetic and nonmagnetic materials, respectively. This criterion determines in principle the selection of the ionic orbitals and gives the values of nearly all parameters. It turned out that the band structures can be reproduced quite well with the Bloch functions in the form of linear combinations of s and d orbitals for Eu ions and s and p orbitals for Pb and Te (or S) ions. The nearest neighbor (NN) anion-cation interactions as well as next nearest neighbor (NNN) Te-Te (or S-S), Eu-Eu and Pb-Pb interactions had to be taken into account. 
Also the interactions of p-orbitals with the three NN d-orbitals belonging to the F 2 representation and the hybridization of anion p-orbitals with the cation forbitals had to be included. The values of the parameters describing all these interactions and the values of the on-site orbital energies were determined by a χ 2 minimization procedure, in which the band structure was fitted to the energies known for the constituent materials in the high symmetry points of the Brillouin zone. Then, these values were used in the calculation of the difference between the total valence electron energy in the two, in-phase and outof phase, spin configurations of the SL. This difference can be called "correlation energy" and regarded as a measure of the strength of the valence electron mediated interlayer exchange coupling, which correlates the Eu spins across the nonmagnetic layer. The sign of the correlation energy determines the spin configuration in consecutive magnetic layers. The main features of the obtained in [ 16 ] results are: 1) the lower energy was always obtained for the opposite orientations of the spins at the spacer borders. In the case of FM SLs this means that the interlayer coupling is AFM (compare Fig.1). For the AFM SLs this leads in the case of even m to the inphase coupling ( Fig.2(a) ) whereas for odd m to the out-of phase ( Fig.2(b) ) interlayer coupling. 2) in all SLs, for a given number of nonmagnetic monolayers n the results are essentially independent of the number of magnetic monolayers m, what indicates that the interlayer coupling depends primarily on the relative orientations of the spins at the two interfaces of the nonmagnetic spacer. 3) the coupling calculated in the same geometry and with the same parameters for AFM layers is approximately two times stronger than for the FM layers -the correlations do not depend solely on the spins at the interfaces; 4) in all studied SLs the correlation energy ∆E decreases monotonically, nearly exponentially, with the spacer thickness, as shown in Fig.3. The strongest and the least rapidly decreasing correlations were obtained for the FM (001) EuS/PbS SL. The comparison of the results for the two, (001) and (111), EuS/PbS SLs (in the latter case the interlayer , as calculated in [ 16 ]. The experimental values (solid squares) for EuS/PbS (001) structures, after [ 4 ]. exchange coupling was not yet observed), indicates (see Fig.3) that the valence electron mediated interlayer coupling depends strongly on the lattice geometry. NEUTRONOGRAPHIC TOOLS There are two powerful neutron scattering techniques that can be used for studying magnetic SLs: wide-angle diffractometry and neutron reflectometry. In diffraction regime, the neutrons directly probe the correlations between individual magnetic spins in the scattering system. Therefore, this method can be used for investigating any type of magnetic order in a crystal -FM, AFM, or any other more complicated arrangement. If the system consists of larger "blocks'' of ordered spins (e.g., of magnetically ordered layers in a SL structure), neutron diffraction is sensitive to correlations between such blocks. Let us consider a SL made up of alternating N magnetic and N nonmagnetic layers, each consisting of m and n atomic monolayers, respectively. The magnetic atoms have only two Ising-like spin states, "up" and "down". 
The scattering intensity I(Q_z) parallel to the growth axis (z) of the SL can be obtained, from a standard equation of diffraction theory, in the form of Eq. (1), where Q=(0,0,Q_z) is the wave-vector transfer, f(Q_z) is the magnetic form factor and F_s.l.(Q_z) is the magnetic structure factor of a single layer. The structure factor describes the peak profile that would be obtained by measuring diffraction from a single layer. It has the shape of a broad maximum accompanied by weak subsidiary maxima (the dashed line in Fig.4). For FM layers the main maxima occur at Q_z = (2π/d)ζ points (ζ=1,2,3,...), and for AFM ones at Q_z = (2π/d)η points (η=1/2,3/2,5/2,...), where d is the spacing between monolayers, i.e., at the same positions where Bragg peaks would occur for bulk crystals with the same spin structure. In Eq.(1), D denotes the SL period, D=(m+n)d. The right-side sum runs over all N SL magnetic layers and the coefficient P_l is +1 when the spin configuration in the l-th layer is the same as in the l=1 layer, and -1 if it is reversed (compare Fig.1). The squared modulus of this sum can be divided into two terms, as in Eq. (2): one describing the "self-correlation" and the other the layer-layer correlations between different layers. If there are no interlayer correlations, then the P_l coefficient for successive layers takes the value of +1 or -1 at random and, for large N, the layer-layer correlation term disappears on statistical averaging - the spectrum has essentially the same shape as the squared single-layer structure factor. If there are interlayer correlations in the system, the spectrum shape depends on the type of order in the layers (FM or AFM) and on how they are coupled. For ferromagnetically correlated FM layers all the coefficients have the same sign and the sum in Eq.(2) becomes equal to sin²(NDQ_z/2)/sin²(DQ_z/2). This function has a sequence of sharp maxima at Q_z = πp/D points (p = 0,1,2,...). These maxima are "enveloped" by the single-layer structure factor function and the neutron diffraction spectrum has the shape shown in Fig.4(a). For the AFM interlayer coupling, on the other hand, the P_l coefficients are +1 for odd l and -1 for even l, and now the squared sum in Eq.(1) becomes cos²(NDQ_z/2)/cos²(DQ_z/2), which has maxima at Q_z = π(p+1/2)/D points and produces a spectrum shape as shown in Fig.4(b). Note that for FM interlayer correlations there is a central peak at the Bragg point with symmetric pairs of "satellites", whereas in the case of AFM correlations there is an intensity minimum at the Bragg position in between two "fringes" with equal heights. Such a clear difference in the spectrum shapes enables an easy identification of the correlation type. For AFM layers the rules are less straightforward. Still, it can be readily checked that the spectrum shape shown in Fig.4(a) occurs in the case of the "in-phase" correlations in SLs for which (m+n) is an even number, and in the case of "out-of-phase" correlations for SLs with (m+n) odd; in other cases, the spectrum has the profile depicted in Fig.4(b). Neutron reflectometry. Neutrons impinging on a flat surface of a material with the refractive index n at a grazing angle θ lower than the critical angle θ_crit = [2(1-n)]^(1/2) are totally reflected. The reflectivity R(θ) just above θ_crit is a rapidly decreasing function.
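The displayed forms of Eq. (1) and Eq. (2) did not survive the text extraction. The block below is a plausible reconstruction based on the quantities defined in the paragraph above (magnetic form factor, single-layer structure factor, and the sum over the layer coefficients P_l); it is a sketch of the standard kinematic expression, not a verbatim copy of the original equations.

```latex
% Plausible reconstruction of Eqs. (1)-(2); not a verbatim copy of the original.
\begin{equation}
  I(Q_z) \;\propto\; \bigl|\, f(Q_z)\, F_{\text{s.l.}}(Q_z) \,\bigr|^{2}
  \left| \sum_{l=1}^{N} P_l \, e^{\, i Q_z D (l-1)} \right|^{2}
  \tag{1}
\end{equation}

\begin{equation}
  \left| \sum_{l=1}^{N} P_l \, e^{\, i Q_z D (l-1)} \right|^{2}
  \;=\; \underbrace{N}_{\text{self-correlation}}
  \;+\; \underbrace{\sum_{l \neq l'} P_l P_{l'}\, e^{\, i Q_z D (l - l')}}_{\text{layer--layer correlations}}
  \tag{2}
\end{equation}
```

With all P_l equal, the interference sum in this form reduces to sin²(NQ_zD/2)/sin²(Q_zD/2), and with strictly alternating P_l it yields maxima at Q_z = π(p+1/2)/D, consistent with the cos²-type profile and peak positions quoted in the text.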
If on the reflecting surface a SL structure (made of two materials with different refractive indices n i ) is deposited, the R(θ) characteristic exhibits sharp maxima at θ values satisfying the Bragg equation pλ=2Dsinθ . Here λ is the neutron wavelength, D the SL period, and p=1,2,3,... In SL made of a magnetized material, additional magnetic peaks occur in the reflectivity spectrum, due to the interaction between the neutron magnetic moment and the atomic momenta. This enables the determination of the type of interlayer correlations in FM SLs. For layers which are ferromagnetically coupled the magnetic and atomic structures have the same periodicity and the magnetic peaks occur at the same positions as the structural ones (Fig.5(a) ). On the other hand, the AFM coupling doubles the magnetic periodicity, and the peaks occur halfway in between the structural ones ( Fig.5(b) ). It should be noted that the intensity and resolution in reflectometry is considerably better than in diffraction experiments. However, this method cannot be used for studying AFM layers with zero net moment. Experimental evidences The neutron experiments were performed at the NIST's Neutron Scattering Center. The instruments used were BT-2 and BT-9 triple-axis spectrometers set to elastic diffraction mode, with a pyrolitic graphite (PG) monochromator and analyzer, and a 5cm PG filter in the incident beam. The wavelength used was 2.35Å and the angular collimation 40 min. of arc throughout. Additionally, a number of diffraction experiments were carried out on the NG-1 reflectometer operated at neutron wavelength equal to 4.75 Å. The latter instrument yielded a high intensity, high-resolution spectra with a negligible instrumental broadning of the SL diffraction lines. The first neutronographic studies of the interlayer correlations in all-semiconductor structures considered the AFM SLs. Neutron diffraction measurements, carried out on a large population (∼50) of [(EuTe) m /(PbTe) n ] N SLs with many different combinations of m and n, have revealed distinct interlayer correlation satellites in samples with n up to 20 monolayers. They show that the interaction between adjacent EuTe layers can be transferred across non-magnetic PbTe spacers as thick as 70 Å. However, as can be seen in Fig.6, with increasing n the satellite peaks become less sharp, while a pronounced "hump" appears underneath. The initial set of well-resolved lines gradually changes into the smooth profile characteristic for the uncorrelated structure. This indicates that the interlayer correlations weaken with the increasing thickness of the PbTe spacer. It should be noted that the strength of the coupling between these AFM EuTe layers can not be directly measured by neutron diffraction. Recently, another structure based on the Eu chalcogenides, the ferromagnetic EuS/PbS SL grown on KCl substrates along the [001] direction, has been studied. Diffraction scans carried out at low temperatures revealed magnetic spectra with a characteristic double-peak profile ( Fig.7(a) ) -a clear signature of AFM coupling between the FM layers. This AFM interlayer coupling showed up even more clearly in reflectivity spectra ( Fig.7(b) ), which exhibited sizable maxima at positions corresponding to the doubled structural periodicity of the measured specimen. Such peaks were observed for systems with the PbS non-magnetic spacer thickness up to 90Å. To confirm the magnetic origin of these peaks, reflectivity spectra were also taken with an in-plane magnetic field. 
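As a quick numerical illustration of the reflectometry signature described above, the sketch below evaluates the Bragg condition pλ = 2D sinθ once for the structural (chemical) period and once for the doubled magnetic period expected when FM layers couple antiferromagnetically. The wavelength matches the cold-neutron reflectometer value quoted later in the text, but the superlattice period is an assumed example, not a parameter of the measured samples.

```python
import math

# Bragg condition for a superlattice in reflectometry: p * wavelength = 2 * D * sin(theta)
def bragg_angles_deg(period_angstrom, wavelength_angstrom, orders=(1, 2, 3)):
    """Return grazing angles (degrees) of the reflectivity maxima for the given period."""
    angles = []
    for p in orders:
        s = p * wavelength_angstrom / (2.0 * period_angstrom)
        if s <= 1.0:                       # a reflection exists only if sin(theta) <= 1
            angles.append(math.degrees(math.asin(s)))
    return angles

wavelength = 4.75               # angstroms (value quoted for the NG-1 reflectometer)
D_structural = 60.0             # assumed chemical superlattice period, angstroms
D_magnetic = 2 * D_structural   # period doubling for AFM-coupled FM layers

print("structural peaks (deg):", [round(a, 2) for a in bragg_angles_deg(D_structural, wavelength)])
print("magnetic peaks  (deg):", [round(a, 2) for a in bragg_angles_deg(D_magnetic, wavelength)])
# The first magnetic peak appears at roughly half the angle of the first structural peak,
# i.e. halfway (in Q) between the origin and the first structural maximum.
```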
Application of a sufficiently strong, external magnetic field results in full parallel alignment of the FM EuS layers; thus the AFM peak disappears, while the intensity of the peak at the structural position, corresponding to the FM spin configuration, increases. In (a) the data set taken above T C were subtracted. The double-peaked profile is characteristic for the AFM coupling. In (b) the zero field spectrum is denoted by blank points -the small structural peak corresponds to the chemical periodicity and the large one to the doubled magnetic periodicity in the SL with AFM-coupled FM layers. The external magnetic field of 185G shifts the magnetic peak to the structural position (filled points). After [ 4 ] For the EuS/PbS SLs also the magnetic measurements, taken by a SQUID magnetometer with the in-plane field applied along the crystallographic [001] direction, were performed. The temperature and field dependences, as shown in Fig. 8, are clear indications of the presence of AFM interlayer coupling between adjacent FM layers. For SL with a given thickness of the PbS spacer, the neutron reflectivity and the magnetic measurements lead to the same value of the field needed to attain a full transition from the AFM to FM ordering of the magnetic layers (saturation field). We note that for the FM structures the saturation field provides a direct measure of the interlayer coupling strength. Comparison with the theoretical model For the FM (001) EuS/PbS SLs the sign of the interlayer exchange coupling and the rate of its decrease with the PbS nonmagnetic spacer thickness are in very good agreement with the predictions of the model, presented in [ 16 ], in which the interaction between the magnetic layers of the SL is mediated by valence electrons. The experimental values of the exchange constants J 1 estimated from the saturation fields in real structures are, however, about an order of magnitude smaller than the theoretical ones, obtained for perfect SLs (compare Fig.3). The interfacial roughness and interdiffusion, which were shown to reduce significantly the strength of the interlayer coupling in metallic structures are probably responsible for this discrepancy. The obtained theoretically features of the valence band electron mediated interlayer coupling, especially its very weak dependence on the number of spin planes in the magnetic layer, distinguish this mechanism from the AFM dipolar coupling possible in the FM multilayer structures with tiny magnetic domains. Further studies, which include preparation of samples with different thickness of the magnetic layers and different non-magnetic spacer materials, are in progress. The comparison of the theoretical predictions for the AFM EuTe/PbTe structures with the experimental data is more complicated -in this case not only the perfect tool to measure the strength of the interlayer coupling, i.e., the saturation magnetisation, is not applicable, but also the correlated spin configurations are much more sensitive to the morphology of the SL. The information about the chosen by the coupling spin configurations in consecutive layers comes from a detailed analyzis of the positions of the satellite lines, made under an extremely strong assumption that the structures are morphologically perfect, with the same, well defined m and n values throughout the entire (EuTe) m /(PbTe) n SL composed of several hundreds of periods. 
The observed spectra for the structures with nominally even m and even n reveal the preference for the in-phase spin configurations, whereas for those with odd m and even n they exhibit the preference for the out-of-phase configuration, both in agreement with the model predictions. None of the studied samples had even m and odd n. For the samples with m and n both nominally odd, the neutron diffraction spectra seem to indicate that the in-phase configuration is preferred, contrary to the theoretical result. Still, an opposite suggestion comes from the magnetic measurements. Namely, the single period of the odd m/odd n SLs deduced from the analysis of the neutronographic data should lead to a significant net magnetic moment of the SL -neither in EPR nor in magnetic measurements such net magnetic moment was detected, [ 19,20 ]. Investigations of these fascinating phenomena, which include studies of EuTe/PbTe with smaller number of SL periods, i.e., with even better controlled numbers of magnetic and nonmagnetic monolayers, are in progress. Despite the current not complete understanding of the experimental data in these structures, it should be noted that the valence electron mediated interlayer exchange is up to now the only effective mechanism capable to explain the origin of the observed in EuTe/PbTe correlations between the AFM, semiconductor layers.
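The near-exponential decrease of the interlayer coupling with spacer thickness invoked in the comparison above lends itself to a simple fit of the form J(n) = J0·exp(−n/n0). The sketch below performs such a fit on synthetic data; the numbers are invented for illustration and are not the calculated correlation energies of Fig. 3 or the measured saturation fields.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic example: interlayer coupling strength versus non-magnetic spacer
# thickness (in monolayers). Values are illustrative only.
spacer_ml = np.array([2, 4, 6, 8, 10, 12], dtype=float)
coupling = np.array([1.00, 0.45, 0.21, 0.095, 0.044, 0.020])  # arbitrary units

def exp_decay(n, j0, n0):
    """J(n) = J0 * exp(-n / n0): near-exponential decay with spacer thickness."""
    return j0 * np.exp(-n / n0)

(j0_fit, n0_fit), _ = curve_fit(exp_decay, spacer_ml, coupling, p0=(1.0, 3.0))
print(f"J0 = {j0_fit:.2f} (arb. units), decay length = {n0_fit:.1f} monolayers")
```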
253839010
s2orc/train
v2
2022-11-25T06:17:29.263Z
2022-11-23T00:00:00.000Z
Microplastics in surface water of Laguna de Bay: first documented evidence on the largest lake in the Philippines The pollution of aquatic systems by microplastics is a well-known environmental problem. However, limited studies have been conducted in freshwater systems, especially in the Philippines. Here, we determined for the first time the amount of microplastics in the Philippines’ largest freshwater lake, the Laguna de Bay. Ten (10) sampling stations on the lake’s surface water were sampled using a plankton net. Samples were extracted and analyzed using Fourier-transform infrared spectroscopy (FTIR). A total of 100 microplastics were identified from 10 sites with a mean density of 14.29 items/m3. Most microplastics were fibers (57%), while blue-colored microplastics predominated in the sampling areas (53%). There were 11 microplastic polymers identified, predominantly polypropylene (PP), ethylene vinyl acetate copolymer (EVA), and polyethylene terephthalate (PET), which together account for 65% of the total microplastics in the areas. The results show that there is a higher microplastic density in areas with high relative population density, which necessitates implementing proper plastic waste management measures in the communities operating on the lake and in its vicinity to protect the lake's ecosystem services. Furthermore, future research should also focus on the environmental risks posed by these microplastics, especially on the fisheries and aquatic resources. Introduction The United Nations Environment Program considers plastic pollution to be a significant environmental problem. It has been identified as an emerging issue that may impact biological diversity and human health alongside climate change (Blettler et al. 2017). The number of data in the Philippines on plastic pollution in freshwaters is limited when compared to studies on marine ecosystems, even though pollution in both ecosystems is comparable (Peng et al. 2017;Superio and Abreo 2020;Inocente and Bacosa 2022;Sajorne et al. 2022). Requiron and Bacosa (2022) studied macroplastics in Pulauan River, Dapitan City, where a total of 1,636 macroplastics items were identified for 10 days of observation. Furthermore, in the same study, a total of 996 plastic litter were also observed on the riverbank. This evidence is alarming because pollution levels in rivers, streams, and lakes are almost identical to those found in the sea (Peng et al. 2017). One of the emerging issues concerning plastics nowadays is microplastics. Microplastics are minute plastics usually less than 5 mm in size that could spread harmful substances, including toxins and polycyclic aromatic hydrocarbons (Abreo 2018). These microplastics can also be classified further into primary and secondary microplastic. Primary microplastics are the ones that are intentionally reduced and synthesized into smaller pieces for commercial uses, while secondary microplastics are the ones that are environmentally degraded from plastics (Xu et al. 2020). Microplastic abundance has been discovered in various environments ranging from freshwater to the poles (Isobe et al. 2017). However, there is a scarcity of data on microplastic research in freshwater, particularly in lakes and reservoirs (Ramadan and Sembiring 2020). Evidence of microplastics was observed to be floating in the surface waters controlled by the currents (Ivar do Sul et al. 2014). 
In a study conducted in China's Yellow River, microplastic fibers < 200 μm were dominant in its surface water, accounting for roughly three-fourths of the microplastics, and were mostly polyethylene (PE), polypropylene (PP), and polysterene (PS) in composition (Han et al. 2020). Another study conducted in surface water from eastern coastal areas of Guangdong, South China, showed mean abundance of microplastics of 8,895 items/ m 3 , with small white fragments dominating character ). In the Philippines, a study conducted in Molawin Creek in Makiling Forest Reserves indicated that it is also being polluted by these micropollutants, which are primarily coming from the area's residential, commercial, and university facilities, causing an intrusion into the ecological services provided by the watersheds (Limbago et al. 2021). Furthermore, microplastic fragments measuring < 2.5 mm in length were microplastic observed in the Pasig River (Deocaris et al. 2019). Pasig River flows through the urban areas from its upstream portion in Laguna de Bay (Deocaris et al. 2019). However, the amount of microplastics from Laguna de Bay to Pasig River remained to be ascertained. Laguna de Bay is the largest lake in the Philippines, with an area of 911.7 km 2 , and an economically important body of water that supports approximately 9,000 ha of fish pens and fish cages and provides a source of income for fishers in Laguna and Rizal provinces. It is split into four (4) bays: West Bay, Central Bay, East Bay, and South Bay. Talim Island separates the West and Central Bays. These divisions are caused by significant bathymetric differences between these areas (Delos Reyes 1995). Moreover, Laguna Bay serves as a source of irrigation water, industrial cooling water, hydroelectric power generation, a transportation route, a source of animal feed, a recreational venue, a source of fish supply, and a source of domestic water supply for the neighboring provinces including the capital Metro Manila (Laguna Lake Development Authority 2015; Guerrero 1996). The occurrence of microplastics in the environment is recognized as an environmental challenge. Microplastics in the studies conducted in Molawin Creek and Pasig River pose an environmental microplastic pollution threat to Laguna de Bay as the Pasig River flows through its upstream portion (Deocaris et al. 2019). Furthermore, the lake has been used by industries, businesses, and households for open water extraction and for waste disposal, almost indiscriminately (Laguna Lake Development Authority 2015) and unjustified exploitation, which is the use of natural resources for economic growth that can have a negative connotation due to environmental degradation. Determining the prevalence of microplastic in Laguna de Bay will provide science-based management options to prevent and control pollution. Despite the significant economic contribution of Laguna de Bay and the potential occurrence of microplastics in water that could threaten the resources it supports, the amount of this emerging pollutants in this economically important body of water is yet to be determined and quantified. Thus, this study aimed to assess the microplastic pollution in the surface water of Laguna de Bay. We determined the density, color, shape, type of polymer, and distribution of microplastics in the lake. To our knowledge, this is the first study to document the presence of microplastics in a lake in the Philippines. 
Materials and methods The study was conducted in Laguna Lake, popularly referred to as Laguna de Bay (Fig. 1), the largest lake in the Philippines. The lake supports approximately 9,000 ha of fish pens and fish cages and provides a source of income for fishers in Laguna and Rizal provinces. Sampling was carried out at ten sites (Fig. 1). Sampling stations were grouped according to the lake's bathymetric differences. The four groups were South Bay, West Bay, Central Bay, and East Bay. South Bay was composed of Station 1 and Station 7. West Bay was comprised of Station 2, Station 3, and Station 4. Central Bay was composed of Station 5 and Station 6, and East Bay was composed of Station 8, Station 9, and Station 10 (Fig. 1). West, Central, and East Bay are ca. 30-40 km long and 7-20 km wide. Within each sampling site, a water sample was collected, yielding 10 samples for microplastic analysis. Water sampling Microplastics were collected on February 25, 2022 using a plankton net from the surface water of Laguna de Bay, following the methods by Viršek et al. (2016) with some modifications. A plankton net with a mesh size of 20 μm was used at each sampling site. The boat traveled 10 m while the net was set at a depth of 20 cm to keep the entire net submerged in water. GPS coordinates were also recorded at the start and end to aid in calculating the distance trawled. The following formula was used to calculate the volume of water passing through the net: water volume = πr²h, where r is the radius of the plankton net and h is the distance traveled by the net. Sample processing In the laboratory, all glassware was washed thoroughly with distilled water for decontamination. Solids collected in the plankton nets were soaked in KOH solution and heated in a 60 °C oven for 24 hours to digest organic material found during the water sampling. The sample was then floated to extract the plastic debris using density separation in NaCl at a 30% concentration. Vacuum filtration using a Millipore set was performed after flotation. The probable microplastic particles were collected onto a clean Whatman glass filter. Each glass filter was washed with distilled water and oven-dried. A clean Petri dish kept the filter paper for optical microscopy analysis using 40x magnification. The particles' shape, size, and colors were also measured and identified (Hidalgo-Ruz et al. 2012). Microscopy All suspected microplastic particles were mounted on glass slides using a clean needle. A microscope with a magnification of 40x was used to determine the morphological characteristics of the sampled microplastics from various sampling sites through visual inspection. Additionally, a film was characterized as a tiny, very thin coating or a substantial chunk of plastic trash, while a fiber was described as a microplastic with a long, slender appearance. Pellets were described as spherical, rounded microplastic particles. The classification of fragment was used when a microplastic could not be classified as a fiber, pellet, or film. This technique was based on the classification method used by Su et al. (2016). All laboratory activities were done at the Chemistry Department of Caraga State University, Butuan City, Philippines. Abundance The density of microplastics was calculated as the total number of microplastic items divided by the total water volume: density = number of microplastics / water volume (m³). This method was modified based on the study of Egessa et al. (2020).
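To illustrate the volume and density calculations described in this section, the sketch below works through one hypothetical tow. The 10 m tow length comes from the text, while the net radius and the item count are assumed values chosen purely for illustration (the paper does not report the net radius).

```python
import math

def towed_water_volume_m3(net_radius_m, tow_distance_m):
    """Volume of water filtered by a circular plankton net towed in a straight line."""
    return math.pi * net_radius_m ** 2 * tow_distance_m

def microplastic_density(items_counted, volume_m3):
    """Microplastic density in items per cubic metre."""
    return items_counted / volume_m3

# Hypothetical example: a 0.15 m-radius net towed 10 m, with 10 items identified.
volume = towed_water_volume_m3(net_radius_m=0.15, tow_distance_m=10.0)
density = microplastic_density(items_counted=10, volume_m3=volume)
print(f"volume = {volume:.3f} m^3, density = {density:.1f} items/m^3")
```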
FTIR (Fourier-transform infrared spectroscopy) produces an infrared absorption spectrum to identify chemical bonds in a molecule. The spectra generate a sample profile, a unique molecular fingerprint that can be used to screen and scan samples for various components. Furthermore, ATR-FTIR is the most commonly used spectroscopy for identifying microplastic polymers. This was used to determine the plastic type present in each study sample. Samples were mounted in the Perkin-Elmer Spectrum Two FTIR Spectrometer, which features a software program called Spectrum Search Plus. This program has five different algorithms, a spectral interpretation expert system, and library search and match functionality. All laboratory activities were done at the Chemistry Department of the Caraga State University, Butuan City, Philippines. Quality control Quality control of this study was done following the methods by Egessa et al. (2020) with some modifications. During microscopy, all the dried samples to be analyzed were kept covered in glass Petri dishes. Background contamination from laboratory sources via the air and laboratory tools and equipment was tested using procedure blanks made from glass filters contained in the Petri dishes and distilled water. At each stage of sample collection and analysis, the Petri dishes were left open to the air. The contents of the control Petri dishes were processed and screened for microplastic contamination. The procedural blanks contained no microplastic. Abundance and distribution Microplastics were recorded in all ten (10) sampling stations, in varying abundance (Table 1). Across all sites, the mean density of microplastics in the surface water of the lake was 14.29 items/m³. The density was highest in West Bay, comprising Station 2, Station 3, and Station 4, which is associated with more intensive anthropogenic activities, and lowest in Central Bay, comprising Station 5 and Station 6, areas of the lake with less anthropogenic activity. In West Bay, microplastic abundance ranged from 17.14 to 24.17 items/m³, being highest at Station 2 and lowest at Station 3. Central Bay, comprising Stations 5 and 6, which were associated with fish landing beaches (Bagarinao 1998) in rural communities, was intermediate, with the density in the range of 7.14-11.43 items/m³, being higher at Station 5 and lower at Station 6 (Table 1). These findings are consistent with previous research that has found a high abundance of microplastics in areas of high anthropogenic activity, such as densely populated cities (Browne et al. 2011), tourist beaches, and aquaculture (Laglbauer et al. 2014), as well as fishing activities (Dowarah and Devipriya 2019). In this study, the contribution of sites to the microplastic abundance was significant for sites in West Bay and South Bay associated with intensive anthropogenic activities. West Bay of the Laguna Lake includes cities in the Philippines' National Capital Region (NCR). NCR is the most densely populated region in the country (Schachter and Karasik 2022). For instance, NCR has a population of approximately 14 million, which accounts for roughly 13% of the total national population. The National Capital Region remained the most densely populated region in the country in 2020; it is more than 60 times denser than the national average, with 21,765 people per square kilometer (Philippine Statistics Authority 2021). Population and domestic waste generation have a positive linear relationship (Han et al.
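The polymer assignment described above relies on matching a measured spectrum against a reference library. The sketch below shows one naive way such a search can work (cosine similarity between absorbance vectors on a common wavenumber grid); it is an illustration only, not the algorithm implemented in the Spectrum Search Plus software, and the spectra and band positions are synthetic.

```python
import numpy as np

# Toy reference "library": absorbance spectra on a shared wavenumber grid (synthetic).
wavenumbers = np.linspace(400, 4000, 500)

def synthetic_spectrum(peak_centers, width=40.0):
    """Build a smooth spectrum as a sum of Gaussian bands (illustrative only)."""
    spec = np.zeros_like(wavenumbers)
    for c in peak_centers:
        spec += np.exp(-((wavenumbers - c) ** 2) / (2 * width ** 2))
    return spec

library = {
    "polypropylene (toy)": synthetic_spectrum([2950, 2870, 1455, 1375]),
    "PET (toy)": synthetic_spectrum([1715, 1240, 1095, 720]),
    "polyethylene (toy)": synthetic_spectrum([2915, 2845, 1470, 720]),
}

def best_match(sample, library):
    """Return the library entry with the highest cosine similarity to the sample."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {name: cosine(sample, ref) for name, ref in library.items()}
    return max(scores, key=scores.get), scores

# A noisy "measurement" built from the polypropylene-like bands.
sample = synthetic_spectrum([2950, 2870, 1455, 1375]) \
         + 0.05 * np.random.default_rng(0).normal(size=wavenumbers.size)
name, scores = best_match(sample, library)
print(name, {k: round(v, 3) for k, v in scores.items()})
```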
2020); thus, domestic wastes on the lake could be sources of a large amount of plastic (Wang et al. 2018) which are the likely sources of microplastic fibers in Laguna Lake. The observed densities of microplastics in the surface water of Laguna Lake is within the range of the baseline assessment of microplastic concentrations in freshwater environments in Southeast Asian countries and other regions (Strady et al. 2021) except for Dongting Lake and Hong Lake in China which recorded a high abundance of microplastics. Table 2 shows different freshwater environments that were studied in Asia and other regions, representing a total of six sampling sites chosen for their environmental characteristics and their accessibility. Microplastics were measured in six surface waters. This variation in the abundance of microplastics observed among different lakes from different parts of the world (Table 2) indicates differences in the lake's conditions and activities on the lake and that these conditions affect microplastic concentrations. Morphological characteristics Microplastics were classified into fiber, fragments, film, granule, and filament (Fig. 2) and were products of the degradation of large plastic materials. On average, fiber was the most numerically abundant microplastic in the surface water of the lake, contributing 57% of total abundance, followed by fragment (21%), film (17%), filament (3%), and granule (2%) (Fig. 3A). At the site-specific level, fibers were present in all stations (Fig. 3B). In East Bay (S8, S9, and S10) and West Bay (S2, S3, and S4), the abundance of microplastics varied in the order: fiber > film > fragment > filament > granule, where granule is only present in West Bay (S2) while in Central Bay (S5 and S6) and South Bay (S1 and S7), the order was fiber > fragment > film > granule where film and granules were only present in South Bay (S7). This is a similar observation in Dongting Lake and Hong Lake, where 41.9%-91.9% of fibers dominate in surface water samples of the lakes (Wang et al. 2018). Moreover, studies conducted in freshwater lakes in China revealed that filament microplastics were the most abundant in both surface waters of Lakes Poyang and Taihu (Su et al. 2016). Domination of microplastic fibers represent land-based origin (Browne et al. 2011), and some can be due to abrasion and fiber release from synthetic fabrics, especially in freshwater ecosystems. More than 1,900 microplastic fibers were shed while washing a single polyester garment, resulting in more than 100 per liter of effluent water (Browne et al. 2011). Additionally, sources of these type of microplastics include laundering of synthetic textiles, tire erosion, total city dust (Boucher and Friot 2017), household and office dust (Mishra et al. 2019), and materials from construction sites (Waldschläger et al. 2020). Polyester, the fiber form of polyethylene terephthalate, is also widely used in fabrics for apparel and other finished textile goods, accounting for nearly half of the global fiber market (Carr 2017). The presence of microplastic fibers is concerning because recent studies have revealed several adverse effects of microplastic fibers on aquatic organisms, including tissue damage, reduced growth, and body condition, and even mortality (Rebelein et al. 2021). Microplastics on the glass filter were identified primarily using morphological characteristics (such as color, surface structure, and shape) and detailed criteria described in previous research (Hidalgo-Ruz et al. 2012;Su et al. 
2016). Microplastics were found in a wide range of colors, including black, white, brown, blue, transparent, and red. The color distribution across size classes in each microplastic type was consistent, indicating that small-sized particles were byproducts of larger particle breakdown (Fig. 4). The most prevalent color was blue, accounting for 53% of the total microplastic count, with transparent, black, brown, white, and red accounting for 19%, 10%, 9%, 5%, and 4%, respectively (Fig. 4). In today's world, consumers are bombarded with plastic products in a variety of colors to increase their market potential (Thetford et al. 2003; Zhao et al. 2015). All the microplastic debris consisted of breakdown products of larger plastic products, indicating that the various microplastic colors represented the original product colors, though bleaching as the plastic debris wears out may change the actual color of the plastic product (Stolte et al. 2015). Across 132 microplastic studies, the dominant microplastic color was blue, which accounted for 32.9% of published research (Ugwu). The same work also revealed that the dominant microplastic shape is fibers, consistent with our findings. Moreover, it can also be implied that these microplastics originate in part from disposable face masks (DFMs) (Sajorne et al. 2022). The dominant color blue in this study was also one of the primary colors of the fabrics used to make DFMs. Blue fabrics were mainly seen on the face masks' outer layers, which increased their exposure to radiation and abrasion, both of which favor the production of microplastics (Song et al. 2017). Another possible source of blue microplastics is effluents from the wastewater treatment plants surrounding the lake (Kazour et al. 2019), household discharges, and chemical industries (Grbić et al. 2020).

Polymer composition and its potential sources

Out of 123 items extracted, 100 (81%) were confirmed as plastic polymers by aligning the generated spectra with the reference database of the Perkin-Elmer FTIR software, with spectral matches based on its library (Table 1). The reduced number can be associated with the technical limitations of FTIR reported in some microplastic studies (Xu et al. 2019). One limitation of an ATR-FTIR measurement is that it only detects materials on the sample's surface (Scientific 2018). Environmental exposure leads to polymer aging and oxidative weathering of microplastics (Xu et al. 2019); thus, if a sample has been weathered and has an irregular surface, identification may be difficult (Scientific 2018). Additionally, the measurable particle size of an ATR-FTIR is roughly 500 µm to 5 mm (Scientific 2018). Apart from these technical considerations, the collection of such minute particles is also a challenge. Eleven types of polymers were identified in the surface water of Laguna de Bay: low-density polyethylene (LDPE), polypropylene (PP), polyethylene terephthalate (PET), general purpose polystyrene (GPPS), polyamide, high-density polyethylene (HDPE), polymethyl methacrylate (PMMA), polyvinyl chloride (PVC), acrylonitrile butadiene styrene (ABS), ethylene-vinyl acetate (EVA), and polybutylene terephthalate (PBT) (Fig. 5A). The majority of the microplastics were fibers, which were present in all polymer types confirmed by FTIR except PMMA (Fig. 5B). The most abundant polymer assessed in Laguna de Bay's surface water was polypropylene (30%) (Fig. 5A).
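The library search-and-match step can be illustrated with a generic correlation-based comparison of a sample spectrum against reference polymer spectra. This is only a conceptual sketch: it is not the Spectrum Search Plus algorithms, and the reference spectra below are synthetic placeholders rather than real instrument exports.

```python
import numpy as np

def match_polymer(sample: np.ndarray, library: dict[str, np.ndarray]) -> tuple[str, float]:
    """Return the library polymer whose reference spectrum best correlates with the sample.

    All spectra are assumed to be absorbance values sampled on the same wavenumber grid.
    """
    best_name, best_score = "unknown", -1.0
    s = (sample - sample.mean()) / sample.std()
    for name, ref in library.items():
        r = (ref - ref.mean()) / ref.std()
        score = float(np.dot(s, r)) / s.size   # Pearson-style correlation in [-1, 1]
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

# Synthetic placeholder spectra; a real workflow would load exported instrument data.
grid = np.linspace(400, 4000, 500)
library = {
    "polypropylene": np.exp(-((grid - 2950) / 60) ** 2),  # crude C-H stretch band
    "PET": np.exp(-((grid - 1715) / 40) ** 2),            # crude carbonyl band
}
sample = library["polypropylene"] + 0.05 * np.random.rand(grid.size)
print(match_polymer(sample, library))   # expected to report "polypropylene"
```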
Some significant sources of PP are plastic bags, storage containers, and microbeads in personal care products. Apart from personal care products, polypropylene is also used to make protective masks, with 20% of commercially available masks made of polypropylene (Ellison et al. 2007). The abundance of polypropylene in this study can also be linked to the use of protective masks as an infection control measure, which was common in East and Southeast Asia at the start of the COVID-19 pandemic and eventually worldwide during 2020 and 2021 (Worby and Chang 2020; Sajorne et al. 2022). The majority of the microplastics released from face masks were medium-sized polypropylene fibers derived from nonwoven fabrics. Additionally, the abrasion and aging caused by wearing face masks increased the release of microplastics, particularly medium-sized and blue microplastics (Chen et al. 2021). In the Philippines, 377 face mask items were collected along the eastern coast of Palawan (Sajorne et al. 2022), and disposable masks with a density of 0.014 items/m² were observed in Davao Gulf in Mindanao (Abreo and Kobayashi 2021). PMMA, ABS, and PBT were among the least common of the eleven (11) polymers, comprising only 3%, 2%, and 1%, respectively. The density of plastic debris and its behavior in aquatic systems are determined by the composition of the microplastic particles. Microplastic debris, for example, may be suspended in the water column or sink to the sediment when discharged into an aquatic environment, depending on its density (Cole et al. 2011). Low-density plastics, such as polyethylene and polypropylene, are less dense than fresh water and thus float on the water's surface. Moreover, the detection of these microplastics despite the limitations noted above confirms the occurrence of microplastics at the lake's surface and should be given attention. GPPS, which is denser than fresh water, was also found in the surface water samples studied (Fig. 5A). The floating GPPS particles were most likely blown into foam, making them buoyant and thus able to float (Brignac et al. 2019). The sources of microplastics were linked to anthropogenic activities on the water, recreation, and nearby trading centers. Microplastic fibers were common among the eleven (11) polymer types except for PMMA, which occurred as microplastic films.

Conclusions

Our results demonstrate that the surface water of Laguna Lake is contaminated with microplastics. Microplastics were detected at all sites, with the highest concentrations in areas of the lake characterized by intensive human activities, such as but not limited to household discharges, effluents from chemical industries, and intensified economic activity. This study provides the first documented evidence of microplastics in the surface water of Laguna de Bay and the first among the lakes in the Philippines. All the microplastics were pieces of plastics commonly used by the community, with the major polymers being polypropylene, ethylene-vinyl acetate copolymer, and polyethylene terephthalate. A majority of the microplastics were small colored particles, specifically blue microplastics, which pose a threat to the water quality and fisheries of the lake as they can easily enter the food chain.
Furthermore, the use of plastic bags and other plastic materials by fishermen to hold stones used as sinkers for fish gillnets and as floaters for aquaculture in the lake calls for urgent interventions aimed at reducing microplastic pollution of the lake. Personal protective equipment (PPE) such as disposable face masks was also identified as a contributor to microplastic pollution in the lake. Microplastics pose risks to fish and their natural food, especially invertebrates, and the possible links to human health need to be understood. The plastic issue must be addressed on land, starting at the production and consumption stages, in order to mitigate this emerging problem. The researchers also advise using biodegradable substitutes in place of plastic; at least in an industrial setting, bioplastics derived from biologically produced basic materials might be part of the solution. Furthermore, strategies such as proper waste management, plastic recycling, and penalties for illegal dumping in areas close to water resources should be promoted and implemented in the communities. Additionally, municipalities surrounding the lake should prepare a policy brief on this matter. The accumulation and effects of these microplastics on aquaculture species and sediment in Laguna de Bay deserve further investigation.

Fig. 5 Percent composition of (A) the different microplastic polymer types and (B) the different microplastic shapes within each polymer type.
Recent Advances in Drug Repositioning for the Discovery of New Anticancer Drugs

Drug repositioning (also referred to as drug repurposing), the process of finding new uses of existing drugs, has been gaining popularity in recent years. The availability of several established clinical drug libraries and rapid advances in disease biology, genomics and bioinformatics have accelerated the pace of both activity-based and in silico drug repositioning. Drug repositioning has attracted particular attention from the communities engaged in anticancer drug discovery due to the combination of great demand for new anticancer drugs and the availability of a wide variety of cell- and target-based screening assays. With the successful clinical introduction of a number of non-cancer drugs for cancer treatment, drug repositioning has now become a powerful alternative strategy to discover and develop novel anticancer drug candidates from the existing drug space. In this review, recent successful examples of drug repositioning for anticancer drug discovery from non-cancer drugs will be discussed.

Introduction to drug repositioning

The traditional approach to drug discovery involves de novo identification and validation of new molecular entities (NME), which is a time-consuming and costly process. Despite huge investment in drug discovery and development and explosive advancement in biological/informational technologies during the past decades, the number of new drugs introduced into the clinic has not increased significantly. For example, while the total R&D expenditure for drug discovery worldwide increased 10 times from 1975 (US $4 billion) to 2009 ($40 billion), the number of NMEs approved has remained largely flat (26 new drugs approved in 1976 and 27 new drugs approved in 2013) [1,2]. The average time required for drug development has also increased over time. It has been estimated that the average drug development time from discovery to market launch in US and EU countries was 9.7 years during the 1990s, but has increased to 13.9 years from 2000 onwards [3]. These hurdles in discovering and developing new drugs call for alternative approaches, including drug repositioning. Drug repositioning refers to the identification of new indications for existing drugs and the application of the newly identified drugs to the treatment of diseases other than the drug's originally intended disease. A well-known example of drug repositioning is the use of sildenafil (Viagra) in erectile dysfunction. Sildenafil is an inhibitor of cyclic guanosine monophosphate (cGMP)-specific phosphodiesterase type 5 (PDE5) and was originally developed for the treatment of coronary artery disease by Pfizer in the 1980s. The side effect of sildenafil, marked induction of penile erections, was serendipitously found during the Phase I clinical trials for patients with hypertension and angina pectoris [4]. After sildenafil failed in Phase II clinical trials for the treatment of angina, it was redirected to the treatment of erectile dysfunction. Sildenafil received US Food and Drug Administration (FDA) approval and entered the US market in 1998, quickly becoming a blockbuster. Another well-known example of drug repositioning is thalidomide. Thalidomide was originally developed as a sedative by the German pharmaceutical company Grünenthal in 1957. It had been used to alleviate morning sickness in pregnant women. Not long after the drug was introduced, it was found to cause serious birth defects.
More than 10,000 children in 46 countries were born with malformation of the limbs and other body extremities due to the use of thalidomide, and around half of them died within a few months after birth [5], leading to its withdrawal from the market. In the ensuing decades, several research groups found that thalidomide possesses anticancer activity. It was found to inhibit angiogenesis in animal models by Robert D'Amato and Judah Folkman [6] and was subsequently shown to have promising therapeutic effect on refractory multiple myeloma and metastatic prostate cancer [7,8]. In 2006, thalidomide received US-FDA approval for the treatment of multiple myeloma in combination with dexamethasone. Activity-based vs in silico drug repositioning Several success stories of drug repositioning brought global attention to the existing drug space for potential off-target effects that may be beneficial to certain diseases such as cancer. Since existing drugs have already been used in humans, they have well-established dose regimen with favorable pharmacokinetics (PK) and pharmacodynamics (PD) properties as well as tolerable side effects, making old drugs useful sources of new anticancer drug discovery. In early 2000s, we launched a new initiative to assemble a library of existing drugs, dubbed the Johns Hopkins Drug Library (JHDL) [9]. JHDL has about 2,200 drugs that have been approved by US-FDA or by its foreign counterparts and about 800 non-approved drug candidates that have entered various phases of human clinical trials. We note that NIH Chemical Genomics Center (NCGC) recently built a collection of existing drugs called NCGC Pharmaceutical Collection (NPC) which contains 2,400 small molecular entities that have been approved for clinical use in US (FDA), EU (EMA), Japan (NHI), and Canada (HC) [10,11]. In addition to these, many of clinical drug collections are currently commercially available. These clinical drug collections have proven to be useful sources to find new indications of existing drugs. The term 'activity-based drug repositioning' we shall use in this review refers to the application of actual drugs for screening. In contrast, 'in silico drug repositioning' utilizes public databases and bioinformatics tools to systematically identify interaction networks between drugs and protein targets [12]. This latter approach has become successful since a large amount of information on the structure of proteins and pharmacophores has been accumulated over the past few decades along with the advancement of bioinformatics and computational science. Most pharmaceutical companies have already adopted the in silico models for drug discovery from diverse chemical spaces. In silico drug repositioning is a potentially powerful technology and has some advantages over the activity-based drug repositioning, including increased speed and reduced cost. However, it also has some limitations since it requires high-resolution structural information of targets. It also requires disease/phenotype information or gene expression profiles of drugs when a screen does not involve protein targets. In contrast, activity-based drug repositioning can employ both protein target-based and cell/organism-based screens without requiring structural information of target proteins or database. Thus, activity-based and in silico drug repositioning represent two alternative and complementary approaches to new drug discovery (Table 1). 
Here, we briefly summarize a few recent discoveries of new anti-angiogenic and anticancer activities of existing drugs through activity-based screening of the JHDL along with the subsequent mechanistic and translational follow-up studies. Itraconazole Itraconazole is a triazole antifungal drug developed in 1980s. Like other azole family of antifungal drugs, it is effective in a variety of systemic fungal infections [13]. The mechanism of antifungal activity of itraconazole has been well established. It is known to inhibit cytochrome P450-dependent lanosterol 14-α-demethylation (14DM) in the ergosterol biosynthesis pathway in fungi [14]. Ergosterol is the main sterol in most yeasts and fungi, being responsible for their membrane integrity and function. It is required for fungal cell proliferation [15]. By inhibiting 14DM, itraconazole and related azole compounds cause the depletion of ergosterol and induce accumulation of 14-α methylsterols that can impair membrane functions, thereby suppressing the fungal growth [14,16]. Although itraconazole is a well-tolerated drug, it has some side effects including hepatotoxicity (rare but sometimes serious), cardiovascular toxicity and diarrhea (when prepared with cyclodextrin) [17]. Anticancer activity of itraconazole was first reported by Chong et al. in 2007 due to its newly discovered anti-angiogenic activity [18]. In this study, the JHDL was screened for inhibitors of human umbilical vein endothelial cell (HUVEC) proliferation, a proxy for angiogenesis, and itraconazole was identified as one of the most potent hits. In follow-up studies, itraconazole, either alone or in combination with other anticancer drugs, showed strong anticancer activities in preclinical models including non-small cell lung cancer (NSCLC), medulloblastoma, and basal cell carcinoma [19,20]. Prompted by such encouraging preclinical results, itraconazole has entered several Phase II clinical studies for the treatment of various types of cancer. Most recently, positive clinical results have been reported from advanced lung cancer and prostate cancer trials (both at Johns Hopkins Sidney Kimmel Comprehensive Cancer Center) and from basal cell carcinoma trial (at Stanford University) [21][22][23]. Itraconazole in combination with pemetrexed showed significant survival benefit in patients with progressive nonsquamous non-small-cell lung cancer compared to the control arm of pemetrexed alone [21]. High dose (600 mg/day) of itraconazole also showed modest anticancer activity in patients with metastatic castration-resistant prostate cancer (CRPC) [22]. In the basal cell carcinoma trial, patients were received oral itraconazole 200 mg twice per day for 1 month or 100 mg twice per day for an average of 2.3 months. In this exploratory trial, itraconazole reduced tumor size by 24% from the patients [23]. Overall, itraconazole was well-tolerated by the patients in all three aforementioned Phase II trials with a few common toxicities including fatigue, nausea and anorexia. Recent study about therapeutic drug monitoring (TDM) of itraconazole suggests that serum concentration of 5 µg/ml (7 µM) is associated with 26% probability of adverse effect [24]. The probability increases progressively with increasing serum concentrations of itraconazole. Classification and regression tree (CART) analysis suggests that serum itraconazole level of 17.1 µg/ml (24.4 µM) is upper limit for TDM [24]. 
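The µg/ml and µM figures quoted above can be cross-checked with a simple mass-to-molar conversion; the sketch below assumes a molecular weight of about 705.6 g/mol for itraconazole and is provided for illustration only.

```python
ITRACONAZOLE_MW = 705.6  # g/mol; assumed value used here only for illustration

def ug_per_ml_to_um(conc_ug_per_ml: float, mw_g_per_mol: float) -> float:
    """Convert ug/mL to micromolar: 1 ug/mL = 1 mg/L, and (mg/L) * 1000 / MW = umol/L."""
    return conc_ug_per_ml * 1000.0 / mw_g_per_mol

for conc in (5.0, 17.1):  # TDM thresholds quoted in the text, in ug/mL
    print(f"{conc} ug/mL ~ {ug_per_ml_to_um(conc, ITRACONAZOLE_MW):.1f} uM")
# Prints roughly 7.1 uM and 24.2 uM, consistent with the ~7 uM and ~24.4 uM cited above.
```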
Considering that itraconazole's IC50 values for angiogenesis range from sub-micromolar to single digit micromolar concentrations (0.5 ~ 3 µM), it has a moderate therapeutic window. However, occurrence of some rare but serious side effects such as hepatotoxicity and congestive heart failure should be monitored in the clinical settings, though it is interesting that the higher dose of itraconazole (600 mg daily) did not cause those major side effects in prostate cancer patients as would have been anticipated [17,22]. Although itraconazole showed promising anticancer activity in several types of cancer, its precise anticancer mechanism has remained elusive. To date, two anticancer mechanisms of itraconazole have been proposed; inhibition of angiogenesis and inhibition of Hedgehog signaling pathway in certain cancer cells ( Figure 1). The studies of Xu et al. [25], and Nacev et al. [26] showed that itraconazole inhibited cholesterol trafficking in human endothelial cells, leading to inhibition of mammalian target of rapamycin (mTOR) and vascular endothelial growth factor receptor type 2 (VEGFR2) signaling pathways that are critical for endothelial cell proliferation and angiogenesis. In a separate study, Kim et al. demonstrated that itraconazole inhibited Hedgehog signaling pathway, thereby suppressing the growths of medulloblastoma and basal cell carcinoma [19]. Ongoing studies are being focused on identifying the molecular target of itraconazole in mammalian cells, which will further our understanding of the precise mode of action of itraconazole for its anticancer activity, facilitating its development as a new anticancer and anti-angiogenic drug. Nelfinavir Nelfinavir is a competitive inhibitor of human immunodeficiency virus (HIV) aspartyl protease and is being used in combination with other antiretroviral drugs to treat patients with HIV infection [27]. It received the US-FDA approval in 1997 for an oral dose regimen of 750 mg three times daily. It was later modified to a regimen of 1250 mg twice daily as recommended by US-FDA. Both regimens were proven to be equally effective [28]. The average peak plasma level of nelfinavir is around 8 µM and the bioavailability is known to be increased when taken with food [29]. Nelfinavir is a well-tolerated drug with some common side effects such as insulin resistance, hyperglycemia and lipodystrophy. From early 2000s, researchers have found potential anticancer activity of nelfinavir. It was reported to inhibit the growths of Kaposi's sarcoma [30], multiple myeloma [31], NSCLC [32,33], prostate cancer [34], and breast cancer [35,36]. Nelfinavir exhibited a broad-spectrum anticancer activity in vivo, being efficacious in several preclinical cancer models. There has been an increasing interest in underlying mechanism of anticancer activity of nelfinavir. A common side effect of nelfinavir was insulin resistance, which was later found to be through inhibition of phosphatidylinositol-3-kinase (PI3K)/AKT signaling pathway. AKT was recognized as an important mediator for cancer cell survival. In addition, activation of AKT signaling promotes resistance to chemo-and radiation therapy. Brunner et al. recently conducted a Phase I clinical trial of nelfinavir and chemoradiation for locally advanced pancreatic cancer [37]. In this trial, nelfinavir showed potent radiosensitizing and antitumor activities without adding toxicity in patients with pancreatic cancer. 
Although nelfinavir is known to inhibit AKT signaling pathway, it does not directly inhibit the kinase activity of AKT. Gupta et al. showed that nelfinavir down-regulates AKT phosphorylation by inhibiting 20S proteasome activity [38]. Hamel et al. also showed that nelfinavir inhibited chymotrypsin-and trypsin-like activities of the rat proteasome preparation in vitro [39]. However, whether proteasome is the relevant target of nelfinavir for its anticancer and anti-AKT activity has been debated since several conflicting results have been reported. Nelfinavir was shown to inhibit cyclin-dependent kinase 2 (CDK2) activity by enhancing proteasome-dependent degradation of Cdc25A phosphatase [40]. In addition, known proteasome inhibitors including MG132 and bortezomib did not recapitulate effects of nelfinavir in breast cancer cells, but, instead, they rescued the AKT inhibition by nelfinavir [36]. Recently, Srirangam et al. [41] and Shim et al. [36] demonstrated that nelfinavir is a novel inhibitor of heat shock protein-90 (HSP90). Srirangam et al. showed that ritonavir, a structurally related HIV protease inhibitor to nelfinavir, bound to HSP90 and inhibited interaction between HSP90 and AKT. Shim et al. conducted a pharmacological profiling of seven genotypically different breast cancer cell lines using JHDL and found that nelfinavir selectively inhibited the proliferation of human epidermal growth factor receptor 2 (HER2)-positive breast cancer cells over HER2-negative ones. In HER2-positive breast cancer cells, nelfinavir caused degradation of HER2 and AKT by inhibiting their association with HSP90. In addition to HER2 and AKT, the study further showed that nelfinavir decreased the levels of other known HSP90 client proteins including CDK4 and CDK6 [36]. These studies explained in part how nelfinavir inhibited AKT signaling in cancer cells. Nelfinavir is known to have a strong anticancer activity through multiple pathways including induction of ER stress, apoptosis and autophagy, and inhibition of AKT pathway and hypoxia-inducible factor 1α (HIF-1α)-dependent angiogenesis. Nelfinavir was shown to inhibit the chymotrypsin-and trypsin-like activities of 20S human proteasome. However, whether anti-proteasome effect is the primary mechanism of nelfinavir for anticancer activity remains elusive since nelfinavir causes proteasome-dependent degradation of several proteins. HSP90 is another proposed molecular target of nelfinavir, of which the inhibition leads to a decrease in the levels of its client proteins including HER2, AKT and CDKs through proteasome-dependent degradation. Other proposed anticancer mechanisms of nelfinavir include induction of endoplasmic reticulum (ER) stress and autophagy in cancer cells and inhibition of angiogenesis through down-regulation of hypoxia-inducible factor-1α (HIF-1α) [32,42,43]. Since nelfinavir can potentially interact with multiple proteins in cells, its anticancer activity might be a consequence of simultaneous inhibition of multiple pathways essential for cancer cell proliferation and survival ( Figure 2). Nelfinavir is now under more than 20 Phase I/II clinical trials for cancer (http://clinicaltrials.gov/). Although the anticancer mechanism of nelfinavir remains to be completely elucidated, promising anticancer activities have been reported from the clinical studies [44][45][46]. Digoxin Digoxin is a cardiac glycoside isolated from foxglove [47]. 
It has a long history of use in the treatment of various heart conditions including heart failure and arrhythmia. Digoxin is known as a potent inhibitor of Na + /K + -ATPase pump in cell membrane [48]. Na + /K + -ATPase regulates sodium ion gradient across the cell membrane to efflux intracellular Ca 2+ ions. Inhibition of Na + /K + -ATPase by digoxin causes an increase in intracellular Ca 2+ concentration in myocardiocytes and pacemaker cells, thereby lengthening cardiac action potential [49]. From early 1980s, a few cohort studies with a small group of breast cancer patients have shown that the use of digoxin decreased the breast cancer recurrence and aggressiveness [50,51]. These observations suggested a potential anticancer activity of digoxin against breast cancer. It was believed that digoxin, as a phytoestrogen, could interfere with estrogen receptor (ER) signaling in cancer cells, thereby suppressing the growth of breast cancer [52,53]. Two decades later, however, conflicting results were reported. Haux et al. showed that the population who were taking digitoxin, another cardiac glycoside, had a higher incidence of cancer compared to the control population [54]. In addition, Ahern et al. [55] and Biggar et al. [56] reported that the use of digoxin significantly increased the breast cancer incidence among women in Denmark. Among the digoxin users, there was the higher risk for developing ER-positive breast cancers than ER-negative breast cancers [56]. These data suggested that digoxin, in certain conditions, might act as an estrogen-like molecule rather than an anti-estrogen in women, thus increasing ER-positive breast cancer risk. In a recently study, Platz et al. conducted two-stage multidisciplinary studies to identify possible anti-prostate cancer drugs from JHDL [57]. The authors screened JHDL and identified digoxin as one of the most potent inhibitors of prostate cancer cell proliferation. A subsequent large-scale cohort study with long-term follow-up demonstrated that digoxin significantly reduced the incidence of prostate cancer by 25% among men [57]. Moreover, men who had used digoxin for longer than 10 years showed 46% lower incidence of prostate cancer, suggesting a potential anti-prostate cancer activity of digoxin. This encouraging observation led to the recent Phase 2 clinical trial for recurrent prostate cancer. How digoxin showed opposite effects between breast cancer and prostate cancer remains unclear. The fact that digoxin increased the risk of only estrogen sensitive cancers including breast and uterus cancers, but not ovary or cervix cancer, suggests that the tumor promoting mechanism is mediated through its estrogenic effect [58]. Paradoxically, estrogens suppress androgen levels and inhibit prostate cancer growth [59]. Hedelin et al. reported that intake of dietary phytoestrogens significantly reduced prostate cancer risk among the population in Sweden [60]. Moreover, digoxin and other cardiac glycosides decreased secretion of prostate specific antigen (PSA) in androgen receptor (AR)-dependent prostate cancer cells [61]. These observations strongly suggest that estrogenic effect of digoxin is beneficial for the treatment of androgen-dependent cancers, such as prostate cancer. In addition to the estrogenic effect, other anticancer mechanisms of digoxin have been also proposed, such as inhibition of Na + /K + -ATPase and HIF-1α synthesis [62,63]. Proposed anticancer mechanisms of digoxin are summarized in Figure 3. 
Digoxin is known to have a narrow therapeutic index (2 to 3), suggesting that doubling or tripling its recommended dose may cause toxicity [64]. The therapeutic serum level of digoxin for heart rate control is about 2 ng/ml (2.6 nM). However, IC50 of digoxin for prostate cancer cell proliferation was about 5-10 times the therapeutic serum level, suggesting a discrepancy between in vitro and in vivo anti-prostate cancer activity of digoxin [57]. Although the mechanism by which digoxin exerts anticancer activity in vivo with its therapeutic serum level remains unclear, it is intriguing to postulate that digoxin may accumulate in prostate tissue or that it may indirectly inhibit prostate cancer growth through other mechanisms such as inhibition of angiogenesis during the long-term, low-dose treatment. Nonetheless, it is clear that digoxin has a beneficial effect on patients with certain types of cancer and is currently undergoing several clinical trials for the treatment of cancer as a monotherapy or in combination with other chemotherapy drugs (http://clinicaltrials.gov/). Nitroxoline Nitroxoline is an old antibiotic which has been widely used in European, Asian and African countries from 1960s. It is particularly effective for the treatment of urinary tract infections (UTI) due to the drug's unique PK property. When administered orally, nitroxoline is rapidly absorbed into the plasma and is subsequently excreted into urine [65]. It has a long retention time in urine, thus making it ideal for UTI treatment. Nitroxoline is known to be able to chelate divalent metal ions such as Mg 2+ and Mn 2+ , which is appreciated as a possible mechanism for its antibacterial activity [66]. Digoxin is a phytoestrogen which inhibits AR signaling pathway by preventing AR binding to AR-responsive element (ARE), leading to decrease in AR target genes such as PSA in prostate cancer cells. Digoxin is also known to inhibit HIF-1α synthesis, thereby reducing HIF-1α binding to its cognate element, hypoxia-responsive element (HRE), and suppressing the expression of HIF-1α target genes such as VEGF in cancer cells. Binding of cardiac glycosides to Na + /K + -ATPase is known to activate Src, epidermal growth factor receptor (EGFR) and extracellular signal-regulated kinase 1 and 2 (ERK1/2) phosphorylation, which leads to an accumulation of p21/CIP1 and induction of cell cycle arrest in cancer cells. Shim et al. first reported anticancer activity of nitroxoline in 2010 [67]. The authors conducted two distinct screens, a target-based (methionine aminopeptidase-2 or MetAP2 as a target) and cell-based (HUVEC) screens to identify novel anti-angiogenic agents from a diverse chemical compound library and JHDL, respectively. Nitroxoline was found to be a common hit from both screens [67]. As it was identified from the MetAP2 inhibitor screen, it is not surprising that nitroxoline potently inhibited MetAP2 activity in vitro (IC50 = 54 nM) and in endothelial cells. It is well established that inhibition of MetAP2 activity in endothelial cells causes an increase in p53 level and an activation of retinoblastoma protein (pRb) by decreasing its phosphorylation, leading to the inhibition of endothelial cell proliferation [68]. Similar to a known MetAP2 inhibitor TNP-470, nitroxoline increases the level of p53 and induces hypo-phosphorylation of pRb in HUVEC. In addition, nitroxoline also causes an increase in acetylation of p53 (K382), α-tubulin and histone H3, hallmarks of inhibition of human sirtuins 1 and 2. 
Subsequent in vitro and in vivo studies showed that nitroxoline inhibited angiogenesis and the growth of cancer xenograft in mouse models. Given that nitroxoline has a long retention time in urine, it was postulated that the drug might be particularly effective in urological cancancers such as bladder cancer. Nitroxoline was tested in an orthotopic bladder cancer model in mice and was administered orally for two weeks to assess its anticancer activity. Cancers from control group grew continuously, whereas the cancer growth was significantly delayed in nitroxoline treated group, suggesting a potential anticancer activity of nitroxoline against bladder cancer in vivo. From the translational perspective, the concentration of nitroxoline required for inhibition of endothelial cell proliferation (IC50 = 1.9 µM) was well below the maximal clinically achievable concentration (C max > 10 µM) in both human plasma and urine. Taking into account that antibacterial activity of nitroxoline was shown at greater than 10 µM and that daily nitroxoline dosage of 400-750 mg (for adult) was sufficient to show antibacterial activity in human, the current nitroxoline dosage regimen for UTI treatment is likely to be sufficient for blocking angiogenesis in vivo. Other recent studies also supported the anticancer activity of nitroxoline. Mirkovic et al. showed that nitroxoline inhibited cathepsin B activity and suppressed breast cancer cell invasion [69]. Cathepsin B plays a role in degradation of extracellular matrix (ECM) and is implicated in tumor cell migration, invasion and metastasis. However, the Ki values of nitroxoline for endopeptidase activity of cathepsin B were 154.4 µM (for dissociation of EI complex) and 39.5 µM (for dissociation of ESI complex), calling into question the relevance of this effect of nitroxoline to its anti-angiogenic activity. However, it cannot be ruled out that the anti-cathepsin B effect of nitroxoline contributes to its anticancer activity in vivo by suppressing tumor cell migration and invasion. The proposed mechanisms of anticancer activity of nitroxoline are summarized in Figure 4. In a separate study, Jiang et al. recently reported that nitroxoline showed strong anticancer activity against lymphoma, leukemia, pancreatic cancer and ovarian cancer cells [70]. Nitroxoline has been used in many European countries as UTI drug for over 50 years and no apparent human toxicity has been reported, making the drug an excellent candidate for anticancer drug repositioning. With the unique PK property and current dosage regimen, human clinical studies of nitroxoline for the treatment of cancer, especially, bladder cancer are warranted. Nitroxoline was shown to inhibit both MetAP2 and sirtuins (SIRT1 and 2) in human endothelial cells. Inhibition of MetAP2 by nitroxoline induced hypo-phosphorylation of retinoblastoma protein (pRb) and increased the level of p53. Inhibition of SIRT1 and 2 caused an increase in acetylation of p53 (K382) and α-tubulin (in the presence of histone deacetylase inhibitor), leading to an induction of endothelial cell senescence. A synergy in increasing the acetylation level of p53 (K382) and inducing senescence was observed when MetAP2 and SIRT1 were inhibited simultaneously, representing a mechanism of nitroxoline for its anti-angiogenic activity. Nitroxoline was also shown to bind and inhibit cathepsin B, an enzyme responsible for extracellular matrix (ECM) protein degradation in cancer cells, thereby blocking cell migration and invasion. 
Table 2 (excerpt). Mycophenolic acid (original use: immunosuppressant; proposed anticancer mechanisms: inhibition of type-1 inosine monophosphate dehydrogenase (IMPDH-1) and angiogenesis [73], inhibition of the c-Myc signaling network in endothelium [74]; clinical stage: Phase I). Disulfiram (original use: treatment for chronic alcoholism; proposed anticancer mechanisms: proteasome inhibition when complexed with metals [75], inhibition of DNA methyltransferase 1 (DNMT1) [76]; clinical stage: Phase II and III).

Concluding remarks

In this review, we provided an overview of drug repositioning for anticancer applications, with a particular emphasis on activity-based repositioning of non-cancer drugs. Several successful case studies, including those exemplified in this review, are summarized in Table 2. Many of these drugs are in Phase II studies for cancer therapy. Although drug repositioning should significantly reduce the time and cost associated with the drug development process, the benefits are largely limited to the stages between preclinical development and Phase II studies. Many challenges still exist after Phase II trials. Phase III studies involve a much larger number of patients than Phase I and II studies. Due to their size and relatively long duration, Phase III studies are the most expensive and time-consuming trials, and these hurdles have not changed over the years. Another challenge that should be considered for drug repositioning has to do with intellectual property (IP) protection of the repositioned drugs, especially for those drugs that are off patent. Off-patent drugs can be protected in part by method-of-use (MOU) patents, which contain one or more claims directed to a method of use. MOU patents are much weaker than composition-of-matter (COM) patents in terms of exclusionary rights. Nitroxoline, for example, used to be off patent, but is currently under MOU patent protection for anticancer applications since it was found to have anticancer properties [71]. Currently, an estimated 4,000 active pharmaceutical ingredients (APIs) have been approved for human use worldwide [10]. Approved drugs keep accumulating over the years; on average, 20 to 30 NMEs are approved each year by the US-FDA [2], further expanding the space for drug repositioning. Since more diverse and selective cancer drug targets are being discovered and developed, the approved drug collections will be particularly useful for quickly identifying clinically advanced anticancer drugs against those targets. A major problem with conventional cancer chemotherapy drugs (mainly DNA damaging agents) is their notorious side effects, which significantly reduce the quality of life of patients. As most non-cancer drugs have few or tolerable side effects in humans, repositioning of non-cancer drugs for anticancer therapy, as exemplified in this review, will be an excellent strategy for future anticancer drug development.
Pruning Isomorphic Structural Sub-problems in Configuration Configuring consists in simulating the realization of a complex product from a catalog of component parts, using known relations between types, and picking values for object attributes. This highly combinatorial problem in the field of constraint programming has been addressed with a variety of approaches since the foundation system R1(McDermott82). An inherent difficulty in solving configuration problems is the existence of many isomorphisms among interpretations. We describe a formalism independent approach to improve the detection of isomorphisms by configurators, which does not require to adapt the problem model. To achieve this, we exploit the properties of a characteristic subset of configuration problems, called the structural sub-problem, which canonical solutions can be produced or tested at a limited cost. In this paper we present an algorithm for testing the canonicity of configurations, that can be added as a symmetry breaking constraint to any configurator. The cost and efficiency of this canonicity test are given. Introduction Configuring consists in simulating the realization of a complex product from a catalog of component parts (e.g. processors, hard disks in a PC ), using known relations between types (motherboards can connect up to four processors), and instantiating object attributes (selecting the ram size, bus speed, . . . ). Constraints apply to configuration problems to define which products are valid, or well formed. For example in a PC, the processors on a motherboard all have the same type, the ram units have the same wait times, the total power of a power supply must exceed the total power demand of all the devices. Configuration applications deal with such constraints, that bind variables occurring in the form of variable object attributes deep within the object structure. The industrial need for configuration applications is widespread, and has triggered the development of many configuration applications, as well as generic configuration tools or configurators, built upon all available technologies. For instance, configuration is a leading application field for rule based expert systems. As an evolution of R1 [9], the XCON system [3] designed in 1989 for computer configuration at Digital Equipment involved 31000 components, and 17000 rules. The application of configuration is experimented or planned in many different industrial fields, electronic commerce (the CAWICOMS project [4]), software [19], computers [13], electric engine power supplies [7] and many others like vehicles, electronic devices, customer relation management (CRM) etc. The high variability rate of configuration knowledge (parts catalogs may vary by up to a third each year) makes configuration application maintenance a challenging task. Rule based systems like R1 or XCON lack modularity in that respect, which encouraged researchers to use variants of the CSP formalism (like DCSP [10,15,1], structural CSP [11], composite CSP [14]), constraint logic programming (CLP [6], CC [5], stable models [16]), or object oriented approaches [8,12]. One difficulty with configuration problems stems from the existence of many isomorphisms among interpretations. Isomorphisms naturally arise from the fact that many constraints are universally quantified (e.g. "for all motherboards, it holds that their connected processors have the exact same type"). This issue is technically discussed in several papers [8,18,17]. 
The most straightforward approach is to treat during the search all yet unused objects as interchangeable. This is a widely known technique in constraint programming, applied to configuration in [8,17] e.g.. However, this does not account for the isomorphisms arising during the search because substructures are themselves isomorphic (e.g. two exactly identical PCs with the same motherboards and processors are interchangeable). The work in [8], implemented within the ILOG 1 commercial configurators, suggests to replace some relations between objects with cardinality variables counting the number of connected elements for each type. This technique is very efficient and intuitively addresses many situations. For instance, to model a purse, it suffices to count how many coins of each type it contains, and it would be lost effort to model each coin as an isolated object. This solution has two drawbacks : it requires a change in the model on one hand, and the counted objects cannot themselves be configured. Hence the isomorphisms arising from the existence of isomorphic substructures cannot be handled this way. [18] applies a notion called "context dependant interchangeability" to configuration. This is more general than the two approaches seen before, but applies to the specific area of case adaptation. Also, since context dependant interchangeability detection is non polynomial, [18] only involves an approximation of the general concept. Furthermore, the underlying formalism, standard CSPs, is known as too restrictive for configuration in general. One step towards dealing with the isomorphisms emerging from structural equivalence in configurations is to isolate this "structure", and study its isomorphisms. This is the main goal pursued here : we propose a general approach for the elimination of structural isomorphisms in configuration problems. This generalizes already known methods (the interchangeability of "unused" objects, as well as the use of cardinality counters) while not requiring to adapt the configuration model. After describing what we call a configurations's structural subproblem, we define an algorithm to test the canonicity of its interpretations. This algorithm can be adapted to complement virtually any general purpose configuration tool, so as to prevent exploring many redundant search sub-spaces. This work greatly extends the possibilities of dealing with configuration isomorphisms, since it does not require a specific formalism. The complexity of the canonicity test and the compared complexity of the original problem versus the resulting version exploiting canonicity testing are studied. The paper is structured as follows : section 2 describes configuration problems, and the formalism used throughout the paper. Section 3 defines structural sub-problems, and their models called T-trees. In section 4, we describe T-tree isomorphisms and their canonical representatives. Section 5 presents an algorithm to test the canonicity of T-trees. Then section 6 lists complexity and combinatorial results. Finally, 6 concludes and opens various perspectives. Configuration problems, and structural sub-problems A configuration problem describes a generic product, in the form of declarative statements (rules or axioms) about product well-formedness. Valid configuration model instances are called configurations, are generally numerous, and involve objects and their relationships. There exist several kinds of relations : types : unary relations involved in taxonomies, with inheritance. 
They are central to configuration problems since part of the objective is to determine, or refine, the actual type of all objects present in the result (e.g. : the program starts with something known as a "Processor", and the user expects to obtain something like "Proc Brand Speed "). -other unary relations corresponding to Boolean object properties (e.g. : a main board has a built in scsi interface) -binary composition relations (e.g. : car wheels, the processor in a mainboard . . . ). An object cannot act as a component for more than one composite. -other relations : not necessarily binary, allowing for loose connections (e.g. : in a computer network, the relation between computers and printers) Configuration problems generally exhibit solutions having a prominent structural component, due to the presence of many composition relations. Many isomorphisms exist among the structural part of the solutions. We isolate configuration sub-problems called structural problems, that are built from the composition relations, the related types and the structural constraints alone. By structural constraints, we precisely refer to the basic constraints that define the structure : those declaring the types of the objects connected by each relation the constraints that specify the maximal cardinalities of the relations (the maximal number of connectable components) To ensure the completeness of several results at the end of the paper, we enforce two limitations to the kind of constraints that define structural problems : minimal cardinality constraints are not accounted for at that level (they remain in the global configuration model), and the target relation types are all mutually exclusive 2 . For simplicity, we abstract from any configuration formalism, and consider a totally ordered set O of objects (we normally use O = {1, 2, . . .}), a totally ordered set T C of type symbols (unary relations) and a totally ordered set R C of composition relation symbols (binary relations). We note ≺ O , ≺ TC and ≺ RC the corresponding total orders. Definition 1 (syntax). A structural problem, as illustrated in figure 1, is a tuple (t, T C , R C , C), where t ∈ T C is the root configuration type, and C is a set of structural constraints applied to the elements of T C and R C . For readability reasons and unless ambiguous, in the rest of the paper we use the term configuration to denote a model of a structural problem. Figure 2 lists a sample model of the structural problem detailed in figure 1. It is obvious from this example that object types can be inferred from the composition relations. We define the following : Definition 3 (root, composite, component). A configuration, solution of a structural problem (t, T C , R C , C), can be described by the set U of interpretations of all the elements of R C . If R U denotes the union of the relations in U (R U = rel∈U rel), and R t denotes its transitive closure, then we have : Isomorphisms From a practical standpoint, as soon as two objects of the same type appearing in a configuration are interchangeable, it is pointless to produce all the isomorphic solutions obtained by exchanging them. Two solutions that differ only by the permutation of interchangeable objects are redundant, and the second has no interest for the user. It would be particularly useful for a configurator to generate only one representative of each equivalence class. 
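The redundancy described above can be made concrete with a small brute-force check: two interpretations of the composition relations that differ only by a renaming (permutation) of the objects describe the same product. The sketch below is only illustrative; the relation symbols and objects are invented, it ignores types and cardinalities, and exhaustive permutation search is of course not the pruning mechanism developed in this paper.

```python
from itertools import permutations

# A configuration is modeled as {relation_symbol: {(composite, component), ...}},
# with objects identified by integers (relation names and objects are invented here).
Config = dict[str, set[tuple[int, int]]]

def isomorphic(u: Config, v: Config, objects: list[int]) -> bool:
    """Brute-force search for a permutation theta of the objects with theta(u) == v.

    Exponential in the number of objects; only meant to make the notion concrete.
    """
    for perm in permutations(objects):
        theta = dict(zip(objects, perm))
        mapped = {rel: {(theta[a], theta[b]) for (a, b) in pairs}
                  for rel, pairs in u.items()}
        if mapped == v:
            return True
    return False

# Toy PC-like product: root object 1 carries one processor and one ram unit;
# the second solution only renames objects 2 and 3, so it is redundant.
u = {"mb_proc": {(1, 2)}, "mb_ram": {(1, 3)}}
v = {"mb_proc": {(1, 3)}, "mb_ram": {(1, 2)}}
print(isomorphic(u, v, [1, 2, 3]))   # True: only one representative is worth keeping
```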
More interestingly, the capacity of skipping redundant interpretations also prunes the search space from many sub-spaces, and was shown a key issue in other areas of finite model search [2]. Definition 4. We note U (rel) the relation interpreting the relational symbol rel ∈ R C in U . Two configurations U and U ′ are isomorphic if and only if there exists a permutation θ over the set O, such that ∀r ∈ R c , θ(U )(r) = U ′ (r) Coding configurations, T-trees Because composition relations bind component objects to at most one composite object, configurations can naturally be represented by trees. For practical reasons, we make the hypothesis that two distinct relations cannot share both their component and composite types 4 . Then any configuration U is in one to one correspondence with an ordered tree where : illustrates this translation by an artificial example, which shows that object numbers are redundant. If we suppress them, we keep the possibility to produce a configuration tree isomorphic to the original via a breadth first traversal. We hence introduce T-trees, which capture part of the isomorphisms that exist among configurations : Definition 5 (T-tree). A T-tree is a finite and non empty ordered tree where nodes are labeled by types and children are ordered according to ≺ TC . We note (T, c 1 , . . . c k ) the T-tree with sub-trees c 1 , . . . c k and root label T . To translate a configuration tree in a T-tree, we simply replace the node labels by their parent edge labels. Several T-tree examples are listed by the figure 4. To perform the opposite operation, i.e. build a configuration tree from a T-tree, it suffices to generate node labels via a breadth first traversal (using consecutive integers, the root being labeled 0), then to relabel the edges. Proposition 1. Let A 1 be a configuration tree, C 1 the corresponding T-tree , and A 2 the configuration tree rebuilt from C 1 . Then A 1 and A 2 are isomorphic. The proof is straightforward. A permutation θ : O → O which asserts the isomorphism can be built by simply superposing A 1 and A 2 . Since every configuration bijectively maps to a configuration tree, this result legitimates the use of T-trees to represent configurations. This encoding captures many isomorphisms, because the references to members of the set O are removed, and the children ordering respects ≺ TC . A total order over T-trees Configuration trees and T-trees being trees, they are isomorphic, equal, superposable, under the same assumptions as standard trees. As a means of isolating a canonical representative of each equivalence class of Ttrees, we define a total order over T-trees. We note nct(T ) (number of component types) the number of types T i having T as composite type for a relation in R C . The types T i (1 ≤ i ≤ nct(T )) are numbered on each node according to ≺ TC . If C is a T-tree, we call T-list and we note T i (C) the list of its children having T i as a root label. |T i (C)| is the number of T-trees of the T-list T i (C). To simplify list expressions in the sequel, we use a i n 1 to denote the list a 1 , a 2 , ..., a n . Many ways exist to recursively compare trees, by using combined criteria (root label, children count, node count, etc.). For rigor, we propose a definition using two orders and ≪. Definition 7 (The relations , lex , ≪ and ≪ lex ). 
We define the following four relations : compares T-trees with roots of the same type T , lex is its lexicographic generalization to T-lists, ≪ compares two T-lists of same type T i , and ≪ lex is its lexicographic generalization to lists T i (C) nct(T ) 1 . These four order relations recursively define as follows : In other words, each T-tree is seen as if built from a root of type T and a list of T-lists of sub-trees. These two list levels justify having two lexicographic orders. (lines 1 and 2) lexicographically compares the lists of T-lists of two trees having the same root type. ≪ lexicographically compares T-lists (taking their length into account). Proposition 3. The relations , lex , ≪ and ≪ lex are total orders. Proof. As any lexicographic order defined from a total order is itself total, it remains to prove that the relations and ≪ are total orders. To demonstrate that a binary relation is a total order it suffices to show that any two elements from the set of reference can be compared, either one being less than or equal to the other. The proof is by induction on the height of T-trees. • else ∃j, ∀i < j, l i = l ′ i and either l j ≪ l ′ j or l ′ j ≪ l j . As a consequence, either C ≪ lex C ′ or C ′ ≪ lex C hence either C C ′ or C ′ C. In all cases, C C ′ or C ′ C. We call P (h) the property "any couple of T-trees C and C ′ of heigh less than h is such that C C ′ or C ′ C " and Q(h) the property "any couple of T-lists L and L ′ which T-trees are of height less than h is such that L ≪ L ′ or L ′ ≪ L ". We have shown that P (0) is true, and that ∀h, P (h) implies Q(h) and ∀h, Q(h) implies P (h + 1). We conclude that ∀h, P (h) and Q(h), and hence that the relations and ≪ are total orders, as are their lexicographic extensions. Definition 8 (Canonicity of a T-tree). A T-tree C is canonical iff it has no child or if ∀i, T i (C) is sorted by and ∀c ∈ T i (C), c itself is canonical. Proposition 4. A T-tree is the -minimal representative of its equivalence class (wrt. T-tree isomorphism) iff it is canonical. Proof. Let C and C' be two isomorphic and distinct T-trees. Consider the following prefix recursive traversal of a T-tree : examining a T-tree C, is examining its lists T i (C) in sequence. examining a list T i (C), is examining its length then, if the length is non zero, examining its T-trees in sequence. ⇐ We first show by induction that if, according to this traversal, two trees differ somewhere by the length of two T-lists, they are comparable accordingly. Compare C and C' by performing a simultaneous prefix traversal, and stop as soon as we meet at depth p two lists T i (S n ) and T i (S ′ n ) with distinct lengths, S n (resp. S ′ n ) being a sub-tree in C (resp. C'). Call S (resp. S ′ ) the parent T-tree of S n (resp. S ′ n ). Suppose that = L ′ and hence L ≪ L ′ . We thus proved that if two lists T i (S n ) and T i (S ′ n ) of depth p are such that T i (S n ) ≪ T i (S ′ n ) then the sub-trees S n and S ′ n of depth p which contain these lists are such that S n S ′ n and thus that the lists L and L ′ of depth p − 1 which contain S n and S ′ n are such that L ≪ L ′ . It follows that S and S ′ , which are of depth p − 1 and which contain L and L ′ are such that S S ′ and, by induction, that C C ′ . Suppose now that C is canonical (and thus that C' is not). Compare C and C' via a prefix traversal until we encounter two distinct sub-trees S n and S ′ n . 
As the list L′ which contains S′_n is a permutation of the list L which contains S_n, and since ∀j < n, S_j = S′_j, there exists m > n such that S_m = S′_n. As the list L is sorted according to ⪯, we have S_n ⪯ S_m and thus S_n ⪯ S′_n. It follows that C ⪯ C′. As the relation C ⪯ C′ holds ∀C′ ∈ Iso(C), C is ⪯-minimal over Iso(C). ⇒ Now suppose that C is ⪯-minimal over Iso(C), and assume, for contradiction, that C is not canonical. Traverse C as before, and stop as soon as two consecutive sub-trees S_n and S_{n+1} of a same T-list are met such that S_{n+1} ⪯ S_n and S_{n+1} ≠ S_n. This necessarily happens, since C is not canonical and therefore contains at least one non-sorted list of sub-trees. Consider the tree C′ resulting from the permutation σ which simply exchanges S_n and S_{n+1}. We have C′ ∈ Iso(C). As S_{n+1} ⪯ S_n and S_{n+1} ≠ S_n, we have σ(S_n) ⪯ S_n, and it follows that C′ ⪯ C with C′ ≠ C, which contradicts the ⪯-minimality hypothesis on C. C is thus canonical. Enumerating T-trees The rest of the study proposes, on the one hand, a procedure allowing for the explicit production of only the canonical T-trees and, on the other hand, an algorithm to test and filter out non-canonical T-trees. These two tools are meant to be integrated as components within general-purpose configurators, so as to avoid the exploration of solutions built on the basis of redundant solutions of the inner structural problem of a given configuration problem. We continue in the sequel to call "configurations" the solutions of a structural problem. To generate a configuration amounts to incrementally building a T-tree which satisfies all structural constraints. Definition 9 (Extension). We call extension of a T-tree C a T-tree C′ which results from adding nodes to C. We call unit extension an extension which results from adding a single terminal node. The search space of a (structural) configuration problem can be described by a state graph G = (V, E) where the nodes in V correspond to valid (solution) T-trees and the edge (t_1, t_2) ∈ E iff t_2 is a unit extension of t_1. The goal of a constructive search procedure is to find a path in G starting from the one-node tree (t) (recall that t is the type of the root object in the configuration) and reaching a T-tree which respects all the problem constraints (i.e. not only the constraints involved in the structural problem). Definition 10 (Canonical removal of a terminal node). To canonically remove a terminal node from a T-tree C not reduced to a single node consists in selecting its first non-empty T-list T_i(C) (the first according to ≺_TC), then selecting a T-tree C_j in this T-list: the first one which is not a leaf if one exists, or the last leaf otherwise. In the first case we recursively canonically remove one node of C_j; in the second case, we simply remove the last leaf from the list. Notice that since the state graph is directed, the canonical removal of a leaf is not an operation applicable to a graph node (only unit extensions apply); canonical removal is technically useful for the inductive proofs in the sequel. Proposition 5. The canonical removal of a terminal node in a T-tree C not reduced to a single node produces a T-tree C′ such that C′ ⪯ C. Proof. Let C_j be the j-th T-tree of a T-list and C′_j the tree resulting from the canonical removal of a node in C_j. The proof is by induction over the depth p of the root of C_j in C. Let L and L′ be the T-lists (of depth p − 1) containing C_j and C′_j: if C_j is a single node, it is removed from its T-list, thus L′ ≪ L.
else, if the canonical removal of a node of a T-tree C_j of depth p produces a T-tree C′_j such that C′_j ⪯ C_j, then (C_1, ..., C_{j−1}, C′_j, ...) ≪ (C_1, ..., C_{j−1}, C_j, ...) and thus L′ ≪ L. In both cases, L being the only T-list of C modified to obtain L′ (which transforms C into C′), the same rationale leads to C′ ⪯ C. Proposition 6. Let G be the state graph of a configuration problem. Its sub-graph G_c corresponding to the canonical T-trees only is connected. Proof. It amounts to proving that any canonical T-tree can be reached by a sequence of canonical unit extensions from the one-node tree (t) or, taken from the opposite side, that the canonicity of a T-tree is preserved by canonical removal. We proceed by induction over the height of T-trees. - Let r be the depth of the removed node. By definition of the canonical removal, the removal occurred at the end of its T-list, which hence remains sorted after the change, and the parent T-tree (of depth r − 1) remains canonical, since nothing else is modified in the process. - Now we show that, whatever the value of p, if the canonical removal of a node in a T-tree C of depth p preserves the canonicity of C, then the T-tree of depth p − 1 which contains C also remains canonical. By Proposition 5, the canonical removal of a node in a T-tree C produces a T-tree C′ such that C′ ⪯ C. Canonical removal operates by selecting the first T-tree in a T-list that contains more than one node. If C is not the last T-tree of its T-list, call C_right the T-tree immediately after C in the T-list. As C′ ⪯ C and C ⪯ C_right, we still have C′ ⪯ C_right. If C is not the first T-tree of its T-list, call C_left the T-tree immediately to the left of C in the T-list. As C is the leftmost T-tree containing more than one node, C_left contains a single node, with the same root label as C and C′. Since C contained more than one node, C′ contains at least one node, and C_left ⪯ C′. Consequently, the canonical removal of a node in a T-tree (of depth p) of a T-list (of depth p − 1) leaves the T-list sorted, and the T-tree of depth p − 1 which contains this T-list, which is the only one modified, thus remains canonical. We conclude that canonical removal preserves the canonicity of all the sub-T-trees, whatever their depth in the T-tree: by this operation, a T-tree remains canonical. The sub-graph G_c is thus connected. A practically very important corollary follows immediately: Corollary 1. A configuration generation procedure that filters out the interpretations containing a non-canonical structural configuration remains complete. Proof. According to Proposition 6, rejecting non-canonical T-trees does not prevent reaching all canonical T-trees, since each canonical T-tree can be reached by a sequence of canonical unit extensions from the one-node tree (t). It thus suffices to add a canonicity test to any complete T-tree enumeration procedure to obtain a procedure which remains complete (over the set of equivalence classes for T-tree isomorphism) while avoiding the enumeration of isomorphic (redundant) T-trees. Algorithms A test of canonicity straightforwardly follows from the definition of canonicity. It is defined by two functions, Canonical and Less, listed in pseudo-code in Figure 5. We note ct(T) the list of component types of T, sorted according to ≺_TC; by extension, as the labels of the nodes of a T-tree are types, we generalize this notation to ct(C) for a given T-tree C. Note that the function Less compares T-trees with the same root type.
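Since only the names and roles of the two functions are given above, the following Python sketch illustrates one possible concrete reading of Canonical and Less. It assumes that a T-tree is encoded as a pair (type, t_lists), where t_lists is the list of T-lists ordered by ≺_TC, and that ≪ orders T-lists first by length and then element-wise by ⪯; the exact tie-breaking used in the paper's pseudo-code may differ.

```python
# Hypothetical encoding: a T-tree is (type_label, t_lists), where t_lists[i] is the
# ordered list of sub-trees whose root type is the i-th component type of type_label.

def less(c1, c2):
    """⪯ between two T-trees with the same root type (sketch)."""
    t1, lists1 = c1
    t2, lists2 = c2
    assert t1 == t2, "Less only compares T-trees with the same root type"
    for l1, l2 in zip(lists1, lists2):       # lexicographic over the lists of T-lists
        r = _cmp_tlist(l1, l2)
        if r != 0:
            return r < 0
    return True                              # equal trees: c1 ⪯ c2

def _cmp_tlist(l1, l2):
    """≪ between two T-lists; assumed: compare lengths first, then element-wise ⪯."""
    if len(l1) != len(l2):
        return -1 if len(l1) < len(l2) else 1
    for a, b in zip(l1, l2):
        if not less(a, b):
            return 1
        if not less(b, a):
            return -1
    return 0

def canonical(c):
    """A T-tree is canonical iff every T-list is sorted by ⪯ and every child is canonical."""
    _, lists = c
    for tl in lists:
        for a, b in zip(tl, tl[1:]):
            if not less(a, b):               # adjacent pair out of order
                return False
        if any(not canonical(child) for child in tl):
            return False
    return True
```

With this encoding, canonical can be re-applied after each unit extension of the current T-tree, in the amortized fashion suggested in the Applications paragraph that follows.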
Complexity The worst-case complexity of the function Less is linear in n (Θ(n)), n being the number of nodes of the smallest T-tree: it is called at most once on each node. The function Canonical is of complexity Θ(n log n) in the worst case: it recursively calls itself for each sub-tree of its argument and tests that their T-lists are sorted via calls to Less. Applications The algorithm described in Figure 5 can be used as a constraint to filter out the non-canonical solutions of the structural sub-problem of a configuration problem, whatever enumeration procedure and data structures are used (as, for example, within the object-oriented approach described in [8]). It can be integrated so that the test of canonicity is amortized over the search, if the T-tree corresponding to the currently built configuration grows by unit extensions. In that case, the top part of the search made by Canonical, which operates on a part of the T-tree that did not change, may be saved. Counting T-trees In this section, we show the potentially very important benefit that results from the enumeration of only the canonical T-trees, compared with a standard exhaustive enumeration of all possible T-trees. To this end, we count the total number of T-trees and of canonical T-trees in a particular case of T-trees, those for which each type (the label of nodes) may have children of a single type. The corresponding configuration problem can be defined as follows: p + 1 object types T_0, T_1, ... and T_p that can be interconnected by the composition relations R(T_0, T_1), R(T_1, T_2), ... and R(T_{p−1}, T_p). T_0 is the root type and there exists exactly one object of this type. We may connect from 0 to k objects of type T_{i+1} to any object of type T_i. These T-trees are called k-connected. We note N_{p,k} (resp. M_{p,k}) the total number of k-connected T-trees (resp. canonical k-connected T-trees) of maximal height p. Number of k-connected T-trees of depth p, N_{p,k} A T-tree of maximal height p can be built by connecting from 0 to k T-trees of maximal height p − 1 to a root node. The number of ordered sequences of i elements (some of which may be identical) chosen among N_{p−1,k} trees is (N_{p−1,k})^i. N_{p,k} is thus recursively defined by: N_{0,k} = 1 (the tree containing only the root object, thus no object of type T_1), N_{1,k} = k + 1 (the configurations of 0 to k objects of type T_1 without further children) and, ∀p > 1, N_{p,k} = Σ_{i=0}^{k} (N_{p−1,k})^i = ((N_{p−1,k})^{k+1} − 1) / (N_{p−1,k} − 1).
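To make the size of the saving concrete, the recurrence for N_{p,k} can be evaluated directly. The count of canonical k-connected T-trees is not derived in the text above, so the M_{p,k} function in the sketch below rests on an assumed characterisation (a canonical node carries a sorted list, i.e. a multiset, of at most k sub-trees) and should be read as an illustration rather than as the paper's formula.

```python
from math import comb

def count_all(p, k):
    """N_{p,k}: k-connected T-trees of height at most p (recurrence from the text)."""
    n = 1                                        # N_{0,k} = 1
    for _ in range(p):
        n = sum(n ** i for i in range(k + 1))    # equals (n**(k+1) - 1) // (n - 1) for n > 1
    return n

def count_canonical(p, k):
    """M_{p,k} under the assumption that children form a multiset of size <= k."""
    m = 1
    for _ in range(p):
        m = sum(comb(m + i - 1, i) for i in range(k + 1))  # multisets of size i
    return m

if __name__ == "__main__":
    for p in range(4):
        print(p, count_all(p, 3), count_canonical(p, 3))
```

Under this assumption, already for k = 3 and height 2 there are 85 arbitrary k-connected T-trees against 35 canonical ones, and the gap widens quickly as p grows.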
The Effect of Metal Thickness on Si Wire to Plasmonic Slot Waveguide Mode Conversion We investigate mode converters for Si wire to plasmonic slot waveguides at 1550 nm telecom wavelength. The structures are based on a taper geometry. We provide optimal dimensions with more than 90% power transmission for a range of metal (Au) thicknesses between 30-250 nm. We provide details on how to differentiate between the total power and the power in the main mode of the plasmonic slot waveguide. Our analysis is based on the orthogonality of modes of the slot waveguide subject to a suitable inner product definition. Our results are relevant for lowering the insertion loss and the bit error rate of plasmonic modulators. I. INTRODUCTION The practice of using waveguides to transfer information between different points in space with high bandwidth photonic links is gaining popularity. Fiber optics have replaced electrical wiring for a range of distances from thousands of kilometers coast to coast long-haul connections, to meter scaled rack to rack communications in data centers. The burgeoning field of silicon (Si) photonics has made it possible to build integrated opto-electronic components for a more intimate and efficient coupling of electronics for data processing and optics for data transfer. Si photonic links are based on the Si wire waveguides which are typically fabricated on silicon on insulator (SOI) wafers [1]. In addition to waveguides, photonic links also require light sources, modulators for electrical to photonic and photodetectors for photonic to electrical conversion of information. Mach-Zehnder interferometers and resonant microring cavities have been used in Si modulators [2]. Both technologies have relatively large power consumption. In order to have energy efficient and small footprint modulators, recent designs incorporated metallic (i.e. plasmonic) and Si wire waveguides [3], [4]. The coupling rate from Si wire to plasmonic slot waveguides is an important parameter that contributes to the insertion loss and hence the bit error rate of the modulators [5]. Previous studies on Si wire to plasmonic slot waveguide transitions focused mostly on relatively thick, 180-250 nm Au layers [6], [7], [8], [9]. A recent study focused on layers of 20-30 nm thickness [10]. The propagation length of the modes in plasmonic slot waveguides changes as a function Department of the slot dimensions [11]. There is a trade-off between the amount of field enhancement due to the small size of the slots and the propagation distance of the ensuing modes. It would be advantageous to choose the necessary field enhancement based on the light propagation constraints of the application at hand. Field enhancement and metal layer thickness are two closely coupled variables. In this work, we cover a range of Au layer thicknesses between 30-250 nm and provide blueprints for coupling geometries that work at 1550 nm telecom wavelength range with more than 90% coupling efficiency. We use numerical simulations to verify our designs and we give details of our modeling techniques which use both simple power extraction as well as modal decomposition of scattered fields for estimating the transmission efficiency of the couplers. II. MATERIAL AND METHOD The mode coupler parameters that we employ are illustrated in Fig. 1. The aim is to convert the Si wire mode on the left to the mode of the plasmonic slot waveguide on the right, through the taper region at the center. As shown in Fig. 
1(b), the Si wire and the slot waveguide are centered with respect to one another in the ŷ direction. The ambient environment is SiO2. Such vertical alignment is possible with clean-room fabrication techniques, as discussed in [12]. We obtain the optical properties of Si from the Sellmeier-type dispersion formula quoted in [13]. The refractive index of glass is obtained from Eq. (20) in [14]. The permittivity of Au is obtained from the supplemental data provided in [15]. Since we will be working at a wavelength of λ = 1550 nm, we quote the relevant optical constants of Si, SiO2 and Au in Table I. We use the exp(+iωt) convention, thus Im(ε_Au) < 0. We use the finite element method implementation of the COMSOL package (v5.1) to solve for the waveguide modes of the Si wire and plasmonic slot waveguides as a function of the height of Si (h_Si) and Au (h_Au) for fixed values of Si width (w_Si) and slot width (w_slot). Typical mode profiles for E are plotted in Fig. 2. The Si wire and plasmonic slot waveguide modes are TE-like, with the E-field primarily in the x̂ direction [11], [1]. The plasmonic slot waveguide concentrates the fields in the slot region. The effective index (n_eff) of the modes is plotted in Fig. 3 as a function of waveguide height. At large heights, the plasmonic slot waveguide modes approach the 2D metal-insulator-metal modes [11], which have larger n_eff for smaller slot widths. The price to pay for the large n_eff is a reduced propagation length (L_p), as shown in Fig. 4. L_p refers to the mode intensity and is given by L_p = −1/(2 Im(k_z)), where k_z is the wave vector of the mode in the direction of propagation. For the Si wire waveguide, the mode approaches the 2D slab waveguide mode as the height increases. We use COMSOL to simulate the 3D geometry as depicted in Fig. 1. We surround the simulation volume by perfectly matched layers. We source the Si wire waveguide mode from the left and solve for the fields at λ = 1550 nm as we vary various geometric variables. We take advantage of the symmetry of the waveguide modes and of the geometry in the x−y plane by putting perfect electric conductor (PEC) and perfect magnetic conductor (PMC) boundary conditions at planes that vertically and horizontally bisect the waveguides, respectively [see Fig. 1(b)]. These boundary conditions enable us to reduce the number of unknowns 4-fold, speeding up the simulations. We record the tangential (E_x, E_y, H_x, H_y) fields along the straight section of the plasmonic slot waveguide, at different z cuts, starting from z = 0 as shown in Fig. 1(a). We calculate the time-averaged total power by numerically integrating the z component of the Poynting vector, (1/2) Re(E × H*), over the cross-section at different z cuts, as shown by the blue '×' symbols in Fig. 5. Fitting an exponential to these data points (red curve) gives us a propagation length L_Total = 5565 nm, which is smaller than L_p = 6105 nm for a slot waveguide of h_Au = w_slot = 30 nm. The fields along different z cuts of the slot waveguide include non-bound scattered fields as well. In order to determine the power in the bound mode, we make a modal expansion of the fields. The total electric and magnetic fields at different z cuts, E and H, can be expressed as E = αE_0 + Σ_n E_n (1) and H = αH_0 + Σ_n H_n (2), where E_0, H_0 are the waveguide modes of the slot waveguide as illustrated in Fig. 2(b) and E_n, H_n are a decomposition of the scattered fields. The coefficient α is what we are after.
We can obtain the power in the bound mode from the coefficient α through (3). In order to obtain the α coefficient, we make use of the orthogonality property of the bound modes, which can be proved through the use of the Lorentz theorem. We define an inner product between two sets of fields (E_1, H_1) and (E_2, H_2) similar to the one in [16]. With this definition, we can take the inner product of the fields from a fixed z cut in the 3D simulation, (1)-(2), with the bound modes calculated before, (E_0, H_0), and use the fact that the bound modes are orthogonal to the non-bound modes; this yields the expression (5) for α in terms of overlap integrals. We calculate α by numerically integrating the relevant fields in (5). We calculate the power in the bound modes from (3) and plot it as magenta '+' symbols in Fig. 5. As expected, the power in the bound mode of the slot waveguide is less than the total power in a given z cut. When we fit an exponential to the power in the bound modes (cyan curve), we get a decay length of L_Mode = 6283 nm, a result much closer to L_p. We estimate the power conversion efficiency of a given taper geometry by calculating the power at the z = 0 cut. We use the total power at z = 1100 nm and estimate the power at z = 0 by using the L_p value from the mode analysis, see the black dashed lines in Fig. 5. This is an approximate method, albeit a practical one, which gives a close estimate of the total power in the bound mode of the waveguide at z = 0 (slightly less than 88% in this example). III. RESULTS We used the reported values in [8] and [10] for mode conversion to plasmonic slot waveguides with 250 nm and 30 nm thick Au layers, respectively. We searched around these design points and varied h_Si and other dimensions to come up with the optimized parameters in Table II. The norm of the electric field on the central plane that cuts through the structure is plotted in Fig. 6. The h_Au = 30 nm case has a shorter taper, and the field intensities in the slot region are higher due to the 30 nm width of the slot. This case has a transmission factor of ∼88%. The h_Au = 250 nm case has a longer taper and a larger slot width of 250 nm, with a transmission factor of ∼95%. It is noteworthy that both sets have relatively thick Si layers, h_Si > h_Au. After we had the optimal values for two different Au thicknesses, we linearly interpolated all the geometry variables in between the two sets listed in Table II and calculated the transmission factor of the resulting mode converter structures corresponding to Au thicknesses ranging from 30 to 250 nm. The results are provided in Fig. 7. We measured the power at the z = 1100 nm cut and back-propagated it to z = 0 by multiplying the results with exp(1100/L_p), where the L_p values are calculated separately for each (h_Au, w_slot) pair, similar to Fig. 4. IV. DISCUSSION AND CONCLUSION We investigated optimal structures for coupling the mode of a Si wire waveguide to the mode of a plasmonic slot waveguide. We concentrated on the taper geometry and came up with designs that have over 90% power transfer efficiency (approaching 95% in some instances) for Au thicknesses ranging from 30-250 nm. The results that we quote can find applications in compact plasmonic modulator designs with minimal insertion loss and low bit error rates. Although we focused on Si wire waveguides in this study, the techniques that we present can easily be applied to Si nitride waveguides, which have been shown to be highly advantageous for non-linear applications [17].
Lastly, although our focus has been on taper structures, resonant stub-like elements are another route for designing mode converters as has been demonstrated in 2D structures [18].
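To make the modal-decomposition bookkeeping of Section II concrete, the sketch below shows how the bound-mode amplitude and the decay lengths could be extracted from exported field data. The exact inner product of [16] and the normalisation of Eqs. (3)-(5) are not spelled out above, so the unconjugated Lorentz-type overlap used here, the array names, and the grid handling are assumptions rather than the authors' implementation.

```python
import numpy as np

# Hypothetical inputs: complex transverse fields Ex, Ey, Hx, Hy sampled on an (x, y)
# grid at one z cut, and E0x, E0y, H0x, H0y for the bound slot-waveguide mode.

def overlap(e1x, e1y, h1x, h1y, e2x, e2y, h2x, h2y, dx, dy):
    """Assumed unconjugated Lorentz-type overlap: integral of (E1 x H2 + E2 x H1) . z over the cut."""
    integrand = (e1x * h2y - e1y * h2x) + (e2x * h1y - e2y * h1x)
    return np.sum(integrand) * dx * dy

def mode_amplitude(fields, mode, dx, dy):
    """alpha = <(E,H),(E0,H0)> / <(E0,H0),(E0,H0)>, using orthogonality of (E0,H0)
    to the non-bound (scattered) part of the field."""
    num = overlap(*fields, *mode, dx, dy)
    den = overlap(*mode, *mode, dx, dy)
    return num / den

def bound_mode_power(alpha, e0x, e0y, h0x, h0y, dx, dy):
    """P_mode = |alpha|^2 * (1/2) Re integral of (E0 x H0*) . z over the cut."""
    sz = 0.5 * np.real(e0x * np.conj(h0y) - e0y * np.conj(h0x))
    return np.abs(alpha) ** 2 * np.sum(sz) * dx * dy

def decay_length(z, power):
    """Fit P(z) = P(0) exp(-z/L) on a set of z cuts and return L."""
    slope, _ = np.polyfit(np.asarray(z), np.log(np.asarray(power)), 1)
    return -1.0 / slope
```

In this form, decay_length applied to the total integrated power and to the bound-mode power at successive z cuts would yield quantities analogous to L_Total and L_Mode in the text.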
Prognostic Nutritional Index Predicts Early Mortality in Diffuse Large B-cell Lymphoma Objective: The international prognostic index (IPI) and the revised IPI (R-IPI) are used to determine the prognosis in diffuse large B-cell lymphoma (DLBCL). However, these scoring systems are insufficient to identify very high-risk patients. Recently, the prognostic nutritional index (PNI) -calculated with lymphocyte count and albumin- has been used to determine the prognosis in DLBCL. This study aimed to evaluate the effect of PNI score on prognosis and survival in patients with high-risk DLBCL. Methods: Patients diagnosed with DLBCL and treated with rituximab, cyclophosphamide, doxorubicin, vincristine, and prednisolone were included. Pre-treatment IPI, R-IPI, and PNI scores and progression-free survival (PFS) and overall survival (OS) times were calculated. The cutoff value for PNI according to OS was determined by using the X-Tile program. Results: One hundred and ten patients were included, the median age was 63 years and the median follow-up period was 25 months. According to R-IPI, the median OS could not be reached for the very good risk group, and the median OS values were 83 and 17 months in the good and poor-risk groups, respectively (p=0.001). The cohort was divided into three groups according to the cut-off value for the PNI: patients with PNI <33 were classified as high-risk, 33-42 intermediate-risk, and ≥42 as low-risk. According to PNI, the median durations of PFS and OS were 2 months and 3 months in the high-risk group, 9 months and 19 months in the intermediate-risk group respectively, and in the low-risk group the median duration for PFS and OS could not be reached (p=0.001). Conclusions: The R-IPI is widely used to estimate the prognosis in DLBCL. But in our cohort, in the poor-risk patient group, the OS was 17 months according to R-IPI, while this period was 3 months according to PNI. This finding demonstrated that PNI might predict early mortality in DLBCL. INTRODUCTION Diffuse large B-cell lymphoma (DLBCL) is the most common type of lymphoma in adults and accounts for 28% of all non-Hodgkin lymphomas 1 . Treatment was mainly composed of anthracycline-based combined chemotherapy regimens such as cyclophosphamide, doxorubicin, vincristine, prednisolone (CHOP) in the recent past but with the addition of rituximab to this combination (R-CHOP), 20% improvement in treatment responses were achieved 2 . Today, R-CHOP remains the gold standard treatment regimen in DLBCL 3 . However, DLBCL is a heterogeneous disease and the response to treatment may differ. A minority of patients may be unresponsive and have a worse outcome. To identify this patient group, -before the rituximab era-international prognostic index (IPI) has been the primary prognostic tool to determine the prognosis of DLBCL which is still the most commonly used scoring system 4 . In the rituximab era, the revised-IPI (R-IPI) scoring system -that is plotted by the redistribution of the IPI elements-started to be widely used to make inferences about prognosis 5 . Yet IPI and R-IPI both could not strictly predict overall survival in high-risk patients, therefore a National Comprehensive Cancer Network IPI (NCCN-IPI) scoring system was created and the poorer survival in very high-risk patients was predicted better 6 . 
Prognostic nutritional index (PNI) is a different prognostic parameter which is calculated by the formula of serum albumin (g/L) + 5 x absolute lymphocyte count (10 9 /L) and has been substantially developed to determine preoperative immune nutrition status and the surgical risk in patients diagnosed as having gastrointestinal malignancies 7 . PNI has been also claimed to be a useful prognostic tool in hematological malignancies 8 . The aim of this study was to evaluate whether the PNI score, which was a marker of nutrition and immune system, could predict progression-free survival (PFS) and overall survival (OS) in patients with high-risk DLBCL. MATERIALS and METHODS Patients diagnosed with DLBCL by pathological examination in our hematology clinic between 2010-2021 and treated with R-CHOP regimen were included in the study. Pre-treatment data were collected from patients' files retrospectively. The laboratory values of serum albumin and absolute lymphocyte count were noted and PNI scores were calculated as described before 7 . According to the age, Eastern Cooperative Oncology Group (ECOG) performance score, Ann-Arbor stage of the disease, the presence of extranodal involvement, and serum lactate dehydrogenase (LDH) level at the time of diagnosis; the patients' IPI and NCCN-IPI scores were calculated 4,6 . Pregnant women, patients under the age of 18, patients diagnosed with primary central nervous system lymphoma, acquired immunodeficiency syndrome related lymphoma, or with Richter's transformation were excluded. Patients who had a history of solid organ malignancy or who were treated with a regimen other than R-CHOP were also excluded. PFS was defined as the time from diagnosis to progression or death, and OS was defined as the time from diagnosis to death from any cause. The primary endpoint was the prediction of OS and the secondary endpoint was the prediction of PFS. Statistical Analysis The SPSS (IBM Corp. Released 2013. IBM SPSS Statistics for Windows, Version 22.0. Armonk, NY: IBM Corp.) program was used for all statistical analyses and figure creation. Normality of the data was tested with the Kolmogorov-Smirnov test. If the normality requirement was satisfied for the descriptive statistics, the results were reported as mean +/-standard deviation, while if normality requirement was not satisfied the results were reported as median and range [minimum (min)maximum (max)]. Chi-square test was used for categorical variables and Student's t-test for continuous variables for comparison between groups. The median survival time was calculated with the Kaplan-Meier method and to estimate the difference of survival between groups, logrank test was performed. The X-tile 3.6.1 software (Yale University, New Haven, CT, USA) was used to determine the optimal cut-off value of PNI, which was identified from the min p-value according to the OS. All analyses were two-tailed and the type 1 error rate was determined as 5%. RESULTS Of the 148 patients diagnosed with DLBCL, 38 patients who did not receive rituximab and anthracycline-based chemotherapy and whose data were missing were excluded. Therefore, 110 patients were included in the study cohort and evaluated retrospectively. Fifty-three patients (48%) were female and the median age was 63 years (range between 23-88). The characteristics of the patients at the time of diagnosis are summarized in Table 1. The median follow-up time was 25 months (min: 1-max: 97 months). The mean PNI score was calculated as 44.78±0.95 (range: 10.56-79.74). 
The patients were classified as very good, good, and poor-risk groups according to R-IPI. While the median PFS could not be reached for the very good risk disease group, the median PFS values for the good and poor-risk groups were 52 and 9 months, respectively (p=0.001) ( Figure 1). Likewise, the median OS could not be reached for the very good risk group. The median OS values for the good and poor-risk disease groups were 83 and 17 months, respectively (p=0.001) (Figure 1). According to NCCN-IPI, the patients were classified Table 2). The cut-off value for PNI that made a difference in OS between groups was determined by using the X-Tile program and the cohort was divided into three groups as follows: patients with PNI<33 were classified as high-risk, 33-42 intermediate-risk, and ≥42 as low-risk (favorable). According to PNI, the median duration of PFS and OS were 2 months and 3 months in the high-risk group, 9 months and 19 months in the intermediate-risk group, respectively, and in the low-risk group the median duration of PFS and OS could not be reached (p=0.001) ( Figure 2). Comparison of parameters by dividing the entire cohort into 2 groups according to PNI as low-risk patients (PNI≥42) and intermediate plus high-risk patients (PNI<42) revealed that patients who were in the low-risk group had lower LDH and beta-2 microglobulin levels, better ECOG performance score, earlier stage, and fewer B symptoms. The age did not differ between these groups (Table 3). Univariate logistic regression analysis was performed to identify the risk factors for OS. IPI risk score, presence of B symptoms, poor performance score (ECOG>1), and low PNI were identified as poor risk factors for OS. High LDH was not associated with OS. Multivariate logistic regression analysis demonstrated a 3.27 times increased risk of death in patients with a low PNI score (p=0.002) ( Table 4). DISCUSSION Despite the improvements in survival with R-CHOP treatment in DLBCL, it remains an important cause of mortality among patients with hematological malignancies. While the incidence of DLBCL is growing, it is the 4 th leading cause of death from cancer in people aged 20-40 years 9 . R-CHOP regimen remains the gold standard treatment but there is an emerging need for tailored therapies for high-risk patients. For this purpose, the identification of high-risk patients at the time of diagnosis is gaining greater importance. The most widely accepted risk scoring system, IPI, predicts a 55 percent 4-year OS for high-risk patients 5 . It is obvious that this modeling cannot distinct the group of patients with an OS probability of less than 50%. Hence, NCCN-IPI was developed with similar parameters and it was observed that it showed a better 5-year survival by 33% in the highrisk group 6 . To ameliorate these scoring systems, many different parameters such as hyperfibrinogenemia and albumin levels at diagnosis were examined in clinical trials and their effect on the survival probabilities were calculated 10,11 . Elevated serum C-reactive protein and free light chain levels, which were the markers of inflammation, were found to be independent prognostic factors 12,13 . After the notification of the low absolute lymphocyte count for being an independent poor risk factor in DLBCL, the idea of using the PNI score which was primarily developed for solitary malignancies came up to predict survival in DCBCL 8,14 . In the present study, the predictive PNI score for OS was 42. 
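As a small illustration of how the score and the cut-offs reported above translate into practice, the following sketch computes PNI from the two laboratory values and assigns the risk group used in this study. The formula and the cut-offs (33 and 42) are taken from the text; the function and variable names are ours, and no rounding or additional unit conversion is assumed.

```python
def pni(albumin_g_per_l, lymphocytes_1e9_per_l):
    """Prognostic nutritional index as used in the study:
    serum albumin (g/L) + 5 x absolute lymphocyte count (10^9/L)."""
    return albumin_g_per_l + 5.0 * lymphocytes_1e9_per_l

def pni_risk_group(score):
    """Risk groups with the X-Tile derived cut-offs reported above."""
    if score < 33:
        return "high risk"
    if score < 42:
        return "intermediate risk"
    return "low risk"

# Example: albumin 30 g/L and 0.8 x 10^9/L lymphocytes -> PNI 34, intermediate risk.
print(pni_risk_group(pni(30, 0.8)))
```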
Similar median score values were reported in solitary malignancies and DLBCL in previous studies, and similar cut-offs were reported to be predictive for OS (PNI score: 40-45) 8,15 . Although almost all of the studies on this topic were carried out in the Far East, the limited data obtained in studies conducted in Western countries also showed resemblance 16 . While the PNI score was found to favor OS in most studies, there were also a few studies suggesting that it could not impact OS 17,18 . When the cut-off value for OS was determined by using the X-tile program for the PNI score, which was divided into two prognostic groups in previous studies, the third group with a very poor prognosis was differentiated with a PNI<33 value. The reason why such a group was not defined in other studies might be that the patients who experienced early mortality were not included. As the wide acceptance of R-IPI in determining the risk of DLBCL, our study demonstrated that the OS values were 17 months and 19 months in our high-risk patients according to R-IPI and NCCN-IPI, respectively, but OS was 3 months in the same patient population according to PNI with the cut-off value <33. This finding supports that PNI may predict early mortality in DLBCL better than both R-IPI and NCCN-IPI scores. In addition, our findings reveal that the PNI score also predicts the duration of PFS. Zhou et al. 19 stated that a low PNI score could indicate short event-free survival (EFS) in patients receiving R-CHOP. Interestingly, it has been shown that the PNI score cannot separate risk groups for OS and EFS in patients receiving CHOP, unlike patients receiving R-CHOP 19,20 . While the PNI score is successful in demonstrating the prognosis in combination treatments with rituximab, it cannot sufficiently differentiate the high-risk group in patients who do not receive immunotherapy. The reason for this may be the already expected worse outcomes in patients treated with CHOP. In this study, PNI was also found to have a positive or negative relationship with the known prognostic factors -as anticipated-such as Ann-Arbor stage, ECOG, LDH, IPI, presence of B symptoms, and beta-2 microglobulin. The absence of a direct relationship between PNI and age and the fact that the mean age did not differ significantly in the high-risk PNI group suggested that the poor prognosis in this patient group was due to the disease rather than the age-related fragility of the patients. Although the pathophysiology of the relationship between low PNI score and poor prognosis is not fully understood, hypoalbuminemia may be an explanatory factor as being an indicator of nutritional deficiency and inflammation 21 . In solitary organ malignancies, the inflammatory state is aggravated and the increase in tumor necrosis factor and IL-6 can deepen hypoalbuminemia. There is not enough knowledge yet on whether hypoalbuminemia is an outcome or a treatment target. Albumin has a greater effect on the calculation of the PNI score than the lymphocyte count. Lymphopenia has also been defined to be a poor prognostic factor in lymphoma 22,23 . Although lymphopenia in solitary malignancies is thought to be due to concomitant immune suppression, the relationship of lymphopenia with poor prognosis in lymphomas has not been fully elucidated. It is thought that early lymphocyte recovery after autologous stem cell transplantation gives better results in NHL and multiple myeloma, which is due to earlier immune restructuring 24 . 
In later studies it was reported that lymphocyte recovery depended on the number of NK lymphocytes 25 . Several limitations of our study deserve to be mentioned. Since being a retrospective analysis, we were not able to evaluate the NK lymphocyte count of patients. Except for the scoring systems, it was not possible to evaluate the risk profile of patients per genetic risk factors and the cell of origin (germinal or nongerminal center), even though immune histochemically, which could not be performed in all patients. Although it is known that cytogenetically double/triple hit lymphoma and active B type lymphoma have a worse prognosis, designing studies in the framework of these parameters requires expensive and specialized procedures 26,27 . CONCLUSIONS PNI is a simple, inexpensive, and easily applicable risk profiling system and is especially successful in predicting early mortality. Prospective studies are needed to better investigate the causes of hypoalbuminemia and lymphopenia, which are components of the PNI score, in terms of targeted therapy alternatives in reducing early mortality. Peer-review: Externally and internally peerreviewed.
Multiple Constraints for Ant Based Multicast Routing in Mobile Ad Hoc Networks 1 Problem statement: A Mobile Ad hoc Network (MANET) is one of the challenging environments for multicast. Since the associated overhead is more, the existing studies illustrate that tree-based and mesh-based on-demand protocols are not the best choice. The costs of the tree under multiple constraints are reduced by the several algorithms which are based on the Ant Colony Optimization (ACO) approach. The traffic-engineering multicast problem is treated as a single-purpose problem with several constraints with the help of these algorithms. The main disadvantage of this approach is the need of a predefined upper bound that can isolate good trees from the final solution. Approach: In order to solve the traffic engineering multicast problem which optimizes many objectives simultaneously this study offers a design on Ant Based Multicast Routing (AMR) algorithm for multicast routing in mobile ad hoc networks. Results: Apart from the existing constraints such as distance, delay and bandwidth, the algorithm calculates one more additional constraint in the cost metric which is the product of average-delay and the maximum depth of the multicast tree. Moreover it also attempts to reduce the combined cost metric. Conclusion: By simulation results, it is clear that our proposed algorithm surpasses all the previous algorithms by developing multicast trees with different sizes. INTRODUCTION A Mobile Ad Hoc Network (MANET) is a kind of wireless ad hoc network and is a self-configuring network of mobile routers (and associated hosts) connected by wireless links-the union of which forms an arbitrary topology. The routers are free to move randomly and organize themselves arbitrarily; thus, the network's wireless topology may change rapidly and unpredictably. Such a network may operate in a standalone fashion, or may be connected to the larger Internet. Mobile ad hoc networks became a popular subject for research as laptops and 802.11/Wi-Fi wireless networking became widespread in the mid to late 1990s. A Mobile Ad hoc Network (MANET) is a set of mobile nodes which communicate over radio and do not need any infrastructure [1] . This kind of networks are very flexible and suitable for several situations and applications, thus they allow the establishing of temporary communication without pre-installed infrastructure. The interfaces exhibit limited transmission range to facilitate communication between two nodes. Many intermediate nodes have been involved to relay communication traffic. Therefore, this kind of networks is also called mobile multi-hop ad-hoc networks. In order transmit data to a subset of destination nodes in a computer network multicast consists of simultaneous data transmission from a source node. Multicast routing algorithms are used in radio and TV transmission, on demand video and teleconference. End-to-end delay, minimum bandwidth resources and cost of the tree are the main QoS parameters which are included in the multicasting. Thus the traffic engineering multicast problem should be treated as a multi-objective problem. In this study, an Ant Based Multicast Routing (AMR) algorithm for multicast routing in mobile ad hoc networks has been proposed to solve the Traffic Engineering Multicast problem that optimizes several objectives simultaneously. 
This algorithm calculates one more additional constraint in the costs metric which is the product of average-delay and the maximum depth of the multicast tree and tries to minimize this combined cost metric. Related work: Some algorithms which have elements in common with our algorithm, such as multipath routing, data load spreading and proactive path maintenance have been analyzed. The basic idea behind ACO algorithms for routing is the use of mobile agents, called ants. Gunes et al. [1] proposed a new on-demand routing algorithm for mobile, multi-hop ad hoc networks. The algorithm is based on ant algorithms which are a class of swarm intelligence. Ant algorithms try to map the solution capability of ant colonies to mathematical and engineering problems. The Ant-Colony-Based Routing Algorithm (ARA) is highly adaptive, efficient and scalable. The main goal in the design of the algorithm was to reduce the overhead for routing. Pitakaso et al. [5] an ant-based algorithm for solving unconstrained multi-level lot-sizing problems called ant system for multi-level lot-sizing algorithm (ASMLLS). A hybrid approach which uses ant colony optimization in order to find a good lot-sizing sequence and a simple single stage lot-sizing rule is applied with modified setup costs. They have modified the setup costs depends on the position of the item in the lot-sizing sequence, on the items which have been lot-sized before and on two further parameters, which are tried to be improved by a systematic search. Baras et al. [6] proposed a novel approach to the routing problem in MANETs by using swarm intelligence inspired algorithms. The proposed algorithm uses Ant-like agents to discover and maintain paths in a MANET with dynamic topology. Kazuyuki Fujita [7] proposed an Ants-Routing with routing History (ARH) and Ants-Routing with routing history and no return rule (ARHnr), that can perform a robust routing by selecting stochastically the good route and learn quickly the route by using routing history. ARH and ARHne adapt reinforcement learning to the routing algorithm. Matsuo et al. [8] accelerated Ants-Routing which increase convergence speed and obtain good routing path is discussed. Experiment on dynamic network showed that accelerated Ants-Routing learns the optimum routing in terms of convergence speed and average packet latency. Marwaha et al. [9] overcome these shortcomings of ant-based routing and AODV by combining them to develop a hybrid routing scheme. The Ant-AODV hybrid routing protocol is able to reduce the end-to-end delay and route discovery latency by providing high connectivity as compared to AODV and ant-based routing schemes. Heissen Büttel et al. [10] addressed the problem of routing in large-scale Mobile Ad-Hoc Networks (MANETs), both in terms of number of nodes and coverage area. Our approach aims at abstracting from the dynamic, irregular topology of a MANET to obtain a topology with "logical routers" and "logical links", where logical router and logical links are just a collection of nodes and (multihop) paths between them, respectively. To "build" these logical routers, nodes geographically close to each other are grouped together. Logical links are established between selected logical routers. Sinha et al. [11] proposed the MCEDAR multicast routing algorithm for ad hoc networks. MCEDAR is an extension to the CEDAR architecture and provides the robustness of mesh based routing protocols and approximates the efficiency of tree based forwarding protocols. 
It decouples the control infrastructure from the actual data-forwarding infrastructure. The decoupling allows for a very minimalistic and low overhead control infrastructure while still enabling very efficient data forwarding. Devarapalli et al. [12] proposed a new multicast protocol for mobile ad hoc networks, called the Multicast routing protocol based on Zone Routing (MZR). MZR is a source-initiated on demand protocol, in which a multicast delivery tree is created using a concept called the zone routing mechanism. The protocol's reaction to topological changes can be restricted to a node's neighborhood instead of propagating it throughout the network. Vaishampayan et al. [13] proposed the Protocol for Unified Multicasting through Announcements (PUMA) in ad hoc networks, which establishes and maintains a shared mesh for each multicast group, without requiring a unicast routing protocol or the pre assignment of cores to groups. PUMA achieves a high data delivery ratio with very limited control overhead, which is almost constant for a wide range of network conditions. Jetcheva et al. [14] proposed the design and initial evaluation of the Adaptive Demand Driven Multicast Routing protocol (ADMR), a new on demand ad hoc network multicast routing protocol that attempts to reduce as much as possible any non on demand components within the protocol. Castro et al. [15] proposed a scalable applicationlevel multicast infrastructure. Scribe supports large numbers of groups, with a potentially large number of members per group. Scribe is built on top of Pastry, a generic peer-to-peer object location and routing substrate overlayed on the Internet and leverages Pastry's reliability, self-organization and locality properties. Pastry is used to create and manage groups and to build efficient multicast trees for the dissemination of messages to each group. Scribe provides best-effort reliability guarantees and we outline how an application can extend Scribe to provide stronger reliability. Zhang et al. [16] proposed a hybrid multicast scheme in p2p networks. Borg is motivated by the asymmetry in routing in structured p2p networks. The overlay path taken in routing a message from node A to node B is likely to be distinct and therefore has a different routing delay from the path taken in routing a message from node B to node A. Borg exploits this asymmetry by building the upper part of a multicast tree using a hybrid of forward-path forwarding and reverse-path forwarding and leverages the reversepath multicast scheme for its low link stress by building the lower part of the multicast tree using reverse-path forwarding. The boundary nodes of the upper and lower levels are defined by the nodes' distance from the root in terms of the number of overlay hops. Nasipuri et al. [17] proposed a particular on-demand protocol, called Dynamic Source Routing and show how intelligent use of multipath techniques can reduce the frequency of query floods. They develop an analytic modeling framework to determine the relative frequency of query floods for various techniques. Ant Colony Optimization (ACO): Ant colony optimization is a probabilistic technique for solving computational problems which can be reduced to find the good paths through graphs. Ants are used as the agents and the routing is on basis of the food searching behavior of the real ants. These agents are divided into forward and backward ants. The sender to the neighbor nodes broadcasts the forward ants. 
The backward ants utilize the useful information like end-to-end delay, number of hops gathered by the forward ants on their trip from source to the destination. Ant Colony Optimization (ACO) is a paradigm for designing meta heuristic algorithms for combinatorial optimization problems [18] . The first algorithm which can be classified within this framework was presented in 1991. The essential trait of ACO algorithms is the combination of a priori information about the structure of a promising solution with a posteriori information about the structure of previously obtained good solutions. The characteristic of ACO algorithms is their explicit use of elements of previous solutions. In fact, they drive a constructive low-level solution, as GRASP does, but including it in a population framework and randomizing the construction in a Monte Carlo way. A Monte Carlo combination of different solution elements is also suggested by Genetic Algorithms, but in the case of ACO the probability distribution is explicitly defined by previously obtained solution components. An ACO algorithm includes two more mechanisms: Trail evaporation and, optionally, daemon actions. Trail evaporation decreases all trail values over time, in order to avoid unlimited accumulation of trails over some component. Daemon actions can be used to implement centralized actions which cannot be performed by single ants, such as the invocation of a local optimization procedure, or the update of global information to be used to decide whether to bias the search process from a non-local perspective. It has been experimentally observed that ants in a colony can converge on moving over the shortest among different paths connecting their nest to a source of food. The main catalyst of this colony-level shortest path behavior is the use of a volatile chemical substance called pheromone: Ants moving between the nest and a food source deposit pheromone and preferentially move in the direction of areas of higher pheromone intensity. Shorter paths can be completed quicker and more frequently by the ants and will therefore be marked with higher pheromone intensity. These paths will therefore attract more ants, which will in turn increase the pheromone level, until there is convergence of the majority of the ants onto the shortest path. The local intensity of the pheromone field, which is the overall result of the repeated and concurrent path sampling experiences of the ants, encodes a spatially distributed measure of goodness associated with each possible move. Ant-system-based QOS multicasting algorithm: Multicast algorithm: Step1: Backup-paths-set: For each destination node m i ∈M, Dijkstra K shortest path algorithm is used to compute the least-cost paths from s to m to construct backup-paths set. Let P i be paths set for destination node i: where, j i P is the jth path for destination node i. If the delay constraint is violated by some of the trees, then the cost is to be increased, so that it is likely to be rejected. Step 2: Tree formation: In this algorithm, a multicast tree T is represented as an array of m elements: Step 3: Path selection: When an ant moves from the node i to the next node j, the probability function of the ant choosing node j as the next node as follows: α and β are the relative importance of pheromone strength and the distance between nodes that affect an ant's judgment when choosing the next node to select. 
Step 4: Pheromone update: The pheromone trail associated to every edge is evaporated by reducing all pheromones by a constant factor: where, p (0,1) ∈ is the evaporation rate. Next, each ant retracts the path it has followed and deposits an amount of pheromone h ij ∆τ on each traversed connection: The pheromone on a connective path (i,j) left by the mth ant is the inverse of the total length traveled by the ant in a particular cycle. The formula is as follows: h ij Q / Lm τ = In the above formula: Q = A constant Lm = (C j -C i ) Where: C i = cost of sub multicast tree node i C j = cost of sub multicast tree node j To avoid the situation of C i = C j compute: Step 5: Stopping criterion: The stopping criterion of the algorithm could be specified by a maximum number of iterations or a specified CPU time limit. Problem formulation: The multicast tree will be determined on a particular set of nodes in which the delay can be measured between all nodes. A graph representation is considered as G = (V,E). V is the set of all vertices (end systems in the network) and E is the set of weighted, undirected edges between all nodes. Let us consider only networks in which all nodes are subscribers of the multicast group or one in which nonsubscribers can be ignored. Edges are assigned weights corresponding to the bandwidth and delay between the nodes they connect: • Cost of the tree (C) • Average end-to-end delay (d) • Maximum Depth (D) We will compute the normalized product: A multicast routing problem tries to find the multicast tree T that minimizes P. Bounding the maximum depth of the tree and therefore bounding the maximum hops, is a meaningful metric for networks in which Time To Live (TTL) is a parameter on messages. Reducing the number of hops between the root and the leaves, reduces the number of failure points along any given root to leaf path. Ant based multicast routing with multiple constraints: Our new algorithm uses a heuristic to form degree-bounded spanning trees by ensuring that all hosts connect to the source by connecting through a host that is closer to the root. This helps to reduce delay introduced by deviations from the optimal path. Our algorithm calculates the shortest path tree from the root using the ACO technique. The shortest path tree is then modified using the heuristic whereby it is stated that each node must be connected to a parent that is closer to the root than itself, that is if a's parent is b, then dM(b,root)≤dM(a,root). Visually, this is represented by concentric circles are drawn for each radius of a node from the root and all nodes are connected to a parent in a circle of equal to, or small radius than the ring they are a member of. Initially, all nodes in the shortest path tree will either be connected directly to the root, or have a path to the root that connects only through nodes closer to the root. Algorithm for constraint based tree construction: Form the cluster V i with cluster member v i and maximum out-degree N by optimizing the shortest path tree and such that each node has a parent closer to the root than itself. For each Cv, do 4. Add CL to set Al. 6. Add Cv to Al 10. end for 11. for each Rv do 12. Connect Rv to the closest vertex in set Al 14. Swap Rv with a sibling that is farther from the root than Rv The relationship of each node always connecting to the root through nodes closer to the root is kept in effect by correcting the tree one level at a time. 
For example, starting at the root (level 1), the closest B children at each vertex are kept as children, while the remaining children up to n-B-1are made children of the closed B children. The process is then repeated until all nodes in the tree have < = B children at all levels. Consider a scenario in Fig. 1. Here node Y would normally be the third child of node X and node Z would be made a grandchild of X. However, this would violate the organizational rule stating that each vertex be connected to a vertex that is closer to the root than itself. This is solved by swapping the tree position of Y and Z. Node Z is made a child of X and Y is made a grandchild X and a child of Z. Simulation model and parameters: The NS2 is used to simulate the proposed algorithm. In our simulation, the channel capacity of mobile hosts is set to the same value: 2 Mbps. The distributed coordination function (DCF) of IEEE 802.11 for wireless LANs as the MAC layer protocol is used. It has the functionality to notify the network layer about link breakage. In the simulation, mobile nodes move in a 600×600 m rectangular region for 50 sec simulation time. Initial locations and movements of the nodes are obtained using the Random Waypoint (RWP) model of NS2. I assume each node moves independently with the same average speed. All nodes have the same transmission range of 250 m. In this mobility model, a node randomly selects a destination from the physical terrain. It moves in the direction of the destination in a speed uniformly chosen between the minimal speed and maximal speed. After it reaches its destination, the node stays there for a pause time and then moves again. In the simulation, the maximal speed is 10 m sec −1 and pause time is 5 sec. The various no. of nodes are 25, 50, 75 and 100 is to investigate the performance influence of different topologies. The simulated traffic is Constant Bit Rate (CBR). For each scenario, ten runs with different random seeds were conducted and the results were averaged. The AMR protocol is compared with MAODV. The evaluation is mainly based on performance according to the following metrics: Delay X depth: It is the normalized product of end-toend-delay and average tree depth. Simulation results: Table 1 shows the simulation results of AMR and MAODV [19] for the above metrics. Table 1, it can be seen that AMR is better than MAODV in all the metrics. Figure 2 shows that the normalized product of average delay and depth of the tree is less when compared with MAODV. Figure 3 shows the average packet delivery fraction is more when compared to MAODV. Figure 4 and 5 shows that, the routing load and overhead are significantly less compared to the MAODV routing protocol. CONCLUSION The costs of the tree under multiple constraints are reduced by the several algorithms which are based on the Ant Colony Optimization (ACO) approach. The Traffic-Engineering Multicast problem is treated as a single-purpose problem with several constraints with the help of these algorithms. The main disadvantage of this approach is the need of a predefined upper bound that can isolate good trees from the final solution. In order to solve the traffic engineering multicast problem which optimizes many objectives simultaneously this study offers a design on Ant Based Multicast Routing (AMR) algorithm for multicast routing in mobile ad hoc networks. The algorithm calculates one more additional constraint in the cost metric which is the product of average-delay and the maximum depth of the multicast tree. 
Moreover it also attempts to reduce the combined cost metric. By simulation results, it is clear that our proposed algorithm surpasses all the previous algorithms by developing multicast trees with different sizes.
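The displayed equations of Steps 3 and 4 are only partly reproduced above, so the sketch below uses the standard ACO transition rule, p(i→j) proportional to tau_ij^alpha * eta_ij^beta, as a stand-in for the path-selection probability, together with the evaporation and the Q / L_m deposit described in Step 4. The heuristic matrix eta (for example an inverse distance or delay), the parameter values, and the guard for the C_i = C_j case are assumptions, not the paper's exact formulation.

```python
import random

def select_next_node(current, candidates, tau, eta, alpha=1.0, beta=2.0):
    """Assumed form of Step 3: p(i->j) proportional to tau[i][j]**alpha * eta[i][j]**beta."""
    weights = [tau[current][j] ** alpha * eta[current][j] ** beta for j in candidates]
    total = sum(weights)
    r, acc = random.uniform(0, total), 0.0
    for j, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return j
    return candidates[-1]

def update_pheromone(tau, ant_paths, costs, rho=0.1, q=1.0):
    """Step 4: evaporate every trail by (1 - rho), then deposit q / L_m on each edge
    traversed by ant m, with L_m taken as the sub-tree cost difference from the text."""
    for i in tau:
        for j in tau[i]:
            tau[i][j] *= (1.0 - rho)           # evaporation
    for path, l_m in zip(ant_paths, costs):
        deposit = q / l_m if l_m else q        # guard for the C_i == C_j (L_m = 0) case
        for i, j in zip(path, path[1:]):
            tau[i][j] += deposit
    return tau
```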
The feasibility of using Apple's ResearchKit for recruitment and data collection: Considerations for mental health research In 2015, Apple launched an open-source software framework called ResearchKit. ResearchKit provides an infrastructure for conducting remote, smartphone-based research trials through the means of Apple's App Store. Such trials may have several advantages over conventional trial methods including the removal of geographic barriers, frequent assessments of participants in real-life settings, and increased inclusion of seldom-heard communities. The aim of the current study was to explore the feasibility of participant recruitment and the potential for data collection in the non-clinical population in a smartphone-based trial using ResearchKit. As a case example, an app called eMovit, a behavioural activation (BA) app with the aim of helping users to build healthy habits was used. The study was conducted over a 9-month period. Any iPhone user with access to the App Stores of The Netherlands, Belgium, and Germany could download the app and participate in the study. During the study period, the eMovit app was disseminated amongst potential users via social media posts (Twitter, Facebook, LinkedIn), paid social media advertisements (Facebook), digital newsletters and newspaper articles, blogposts and other websites. In total, 1,788 individuals visited the eMovit landing page. A total of 144 visitors subsequently entered Apple's App Store through that landing page. The eMovit product page was viewed 10,327 times on the App Store. With 79 installs, eMovit showed a conversion rate of 0.76% from product view to install of the app. Of those 79 installs, 53 users indicated that they were interested to participate in the research study and 36 subsequently consented and completed the demographics and the participants quiz. Fifteen participants completed the first PHQ-8 assessment and one participant completed the second PHQ-8 assessment. We conclude that from a technological point of view, the means provided by ResearchKit are well suited to be integrated into the app process and thus facilitate conducting smartphone-based studies. However, this study shows that although participant recruitment is technically straightforward, only low recruitment rates were achieved with the dissemination strategies applied. We argue that smartphone-based trials (using ResearchKit) require a well-designed app dissemination process to attain a sufficient sample size. Guidelines for smartphone-based trial designs and recommendations on how to work with challenges of mHealth research will ensure the quality of these trials, facilitate researchers to do more testing of mental health apps and with that enlarge the evidence-base for mHealth.

Introduction In 2015 Apple launched an open-source software framework called ResearchKit, designed to facilitate medical and health research (1). ResearchKit aims to simplify app development for research purposes by providing a variety of customizable modules to, for example, create informed consent forms, participant reported outcome surveys, and real-time dynamic active tasks (e.g., gait, tapping, spatial memory). This allows for conducting remote, smartphone-based research trials solely through the means of Apple's App Store and tracking and studying the behaviour and wellbeing of individuals who engage with medical and health apps. Such App Store Trials (ASTs) can for instance be used for app-based feasibility or effectiveness studies. In ASTs, the intervention under investigation (the app) is hosted by and offered through an open app store, recruitment of participants occurs directly via this app store, and data (e.g., demographics of participants, outcome measures) are collected via the app. Anyone who installs the app on their device can function as a potential study participant. ASTs may have several advantages over conventional trial methods. Research through smartphone applications can, for example, remove geographic barriers, and allow for frequent assessments of participants in real-life settings. ASTs may facilitate study recruitment by reaching more and underrepresented or seldom-heard communities compared to conventional research. On the other hand, it is important to mention that ASTs exclude participants that do not own a smartphone or are unable to use a smartphone. ASTs can further support data collection processes, thereby potentially increasing the amount and quality of the data [e.g., through Ecological Momentary Assessment (2)]. ASTs can be conducted within any app store (e.g., Apple's App Store, Google Play), however, not every app store provider is currently offering the infrastructure, such as ResearchKit, to conduct ASTs. Numerous apps have been developed with the help of ResearchKit.
However, studies have primarily focused on physical conditions [e.g., (3-11)] and ResearchKit-based mental health apps have been developed far less frequently. An examples that uses ResearchKit for the development of a mental health app, is the study by Egger and colleagues (12), in which an app to collect videos of young children with the aims of detecting autism-related behaviours was designed. Egger et al. (12) investigated the acceptability and feasibility of conducting an AST with young children and their caregivers. The entire study procedure was designed with ResearchKit (i.e., e-Consent process, stimuli presentation, data collection) and, over the course of one year, 1,756 families participated in the study by uploading 4,441 videos and completing 5,618 caregiver-reported surveys. The research team concluded that research via iPhone-based means was acceptable for their target population. A similar conclusion was drawn by Boonstra and colleagues (13) who investigated the feasibility of using a smartphone app to measure the relationship between social connectivity and mental health. The majority of the 63 participants indicated that data collection (including two mental health questionnaires and an exit survey via the app, as well as passively collected data via activated Bluetooth) was acceptable and that they would participate in future studies of the investigated app. Further, Byrom et al. (14) tested ResearchKit for delivering a Paced Visual Addition Test and concluded that ResearchKit provided a straightforward approach to app development, that participant acceptance was good and that ResearchKit is a promising tool to enable cognitive testing on mobile devices. Thus, ResearchKit has shown promise in terms of facilitating the development of and research on (mental) health apps by promoting app-based trials. The field of mental mHealth displays an urgent need for such research, as most applications are currently unguided self-help applications that are directly, and often freely, available to the general public. Platforms such as ORCHA [https://appfinder. orcha.co.uk/] and One Mind PsyberGuide [https:// onemindpsyberguide.org] conduct reviews to ensure quality standards and to provide transparency on the quality of digital health applications. However, most applications are not evidence-based (15) and only few mental health apps have been subjected to effectiveness (16). This is very troublesome for patients in need of selecting self-help and unguided apps. The aim of the current study was to explore the feasibility of conducting mental health research in the non-clinical population via the means of an AST using ResearchKit. In particular, the objectives of this study were to investigate the feasibility of participant recruitment and data collection in this AST. Such feasibility testing is an important prerequisite for studying the effectiveness of app-based research focussing on mental health in the general population. As a test case example, we used an app called eMovit, a behavioural activation (BA) app with the aim of helping users to build healthy habits. Although BA is most commonly associated with the treatment of depression, BA interventions can be adapted to and useful for non-clinical populations as well (e.g., 17-19). eMovit stimulates the development of new and positive behaviours by letting users schedule activities and reminding them on the corresponding days and times. 
The eMovit app VU Amsterdam commissioned the development of an BA ResearchKit app and the IT developer Brightfish BV was selected for development. eMovit was designed through an iterative co-design process involving e-mental health experts, IT developers, target users, and pilot participants. eMovit was developed as an example of how to embed a research process within a mHealth application with the aim to study this integrated research process as well as the effectiveness of the app itself. The app was translated from Dutch in three languages -English, German, and the Flemish-Dutch dialect and released in the Apple App Stores of The Netherlands, Belgium, and Germany. eMovit is a BA intervention app with the aim to activate and build healthy habits of users. The app stimulates the development and maintenance of new and positive behaviours by integrating activity scheduling, reminder setting [e.g., (20, 21)], monitoring (e.g., 22) and rewarding mechanisms (gamification; e.g., 23). More specifically, users can choose from existing positive activities in the app ( Figure 1A), create and personalize their own activities, and choose the number of times that they would like to repeat the activity ( Figure 1B). Hence, users are free to schedule any number of planned behaviours or activities throughout the day or week, and choose the frequency of which they would like to be reminded of these planned behaviours. Users earn badges and trophies for carrying out these new, positive habits. Enrolment, consent and study participation Once participants downloaded the eMovit App, they could self-navigate through welcome information about the functionalities of the App. At the end of this tour, participants could choose whether they wanted to participate in the research study or not. By choosing to participate in research, participants were presented with information about the research, i.e., the study goal, study period, corresponding procedures, anonymity, and researcher contact information ( Figure 2A). Users were informed that they would receive two pop-up messages with an invitation to complete the Patient Health Questionnaire-8 (PHQ-8; 24): one at the beginning of the study and one at the end of the study period 3 weeks later. Furthermore, participants were informed that they would receive two questions about their mood and feelings of happiness at three random times each day. After reading the information, participants received a 3-item multiple choice participant quiz evaluating their understanding of study participation, more specifically the research aim, data being shared completely anonymously, and the ability to stop at any time during the study ( Figure 2B). In case participants gave the wrong answer, they were provided with the correct answer. After the quiz, participants were asked to confirm whether they had read and understood all the information, and to agree to participate in the research ("I agree" or "Cancel") ( Figure 2C). Study participation was switched on ("I agree") or off ("Cancel") accordingly. After participants gave informed consent, they were asked to provide demographics on age, gender, country of residence, employment status, and whether the participant was a twin or triplet. In case participants indicated to be younger than 18 years of age, the study participation was switched off in the app. In case participants indicated to be a twin or triplet, they were asked to complete a number of follow-up questions as part of a different study. 
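The mood and happiness questions described above were offered at three random times each day. As a purely illustrative sketch of such a prompt schedule, in Python rather than the app's Swift/ResearchKit implementation, and with an assumed 09:00-21:00 waking-hours window and a minimum spacing between prompts that the paper does not specify:

```python
import random
from datetime import date, datetime, time, timedelta

def draw_daily_prompts(day: date, n_prompts: int = 3,
                       start_hour: int = 9, end_hour: int = 21,
                       min_gap_minutes: int = 60) -> list[datetime]:
    """Draw n_prompts random notification times within an assumed
    waking-hours window, keeping a minimum gap between prompts."""
    window_start = datetime.combine(day, time(start_hour))
    window_minutes = (end_hour - start_hour) * 60
    while True:
        offsets = sorted(random.sample(range(window_minutes), n_prompts))
        gaps = [b - a for a, b in zip(offsets, offsets[1:])]
        if all(g >= min_gap_minutes for g in gaps):
            return [window_start + timedelta(minutes=m) for m in offsets]

if __name__ == "__main__":
    for t in draw_daily_prompts(date(2020, 3, 1)):
        print(t.strftime("%Y-%m-%d %H:%M"))
```

The window bounds and the gap constraint are assumptions made only for the sake of the example; the actual prompt delivery in eMovit was handled by the app's notification logic.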
After providing the requested information, participants were welcomed to the study and directed to the first questionnaire. The research tools (i.e., participant information, participant quiz, consent form, and questionnaires including notifications/reminders) were programmed using ResearchKit's freely available templates. Information on Covid-19 was included in the App as the onset of the pandemic fell in the study period (see "Procedure"). Procedure The study was conducted over a 9-month period (March 1, 2020-October 31, 2020). As of March 1, 2020, any iPhone user with access to the App Stores of The Netherlands, Belgium, and Germany could download the app and participate in the study. During the study period, the eMovit app was disseminated amongst potential users via social media posts (Twitter, Facebook, LinkedIn), paid social media advertisements (Facebook), digital newsletters and newspaper articles, blogposts and other websites. An app landing page was installed to provide potential users with more information [https://emovit.org/]. Participants were recruited online by means of three different strategies: (1) active dissemination using free dissemination strategies, (2) active dissemination using paid dissemination strategies, and (3) passive recruitment (word of mouth and the mere availability of the app in the App Store). Dissemination strategies were designed to either direct potential users to the eMovit landing page [https://emovit.org/] from where a link led to the App Store page [https://Apps.Apple.com/nl/App/emovit], or direct them immediately to the App Store. Using those two pathways was hypothesized to increase download rates as the eMovit landing page provided additional and engaging information. Dissemination strategies were executed by the study partners in Germany (Leuphana University Lüneburg), Belgium (Thomas More University of Applied Sciences), and The Netherlands (VU Amsterdam). Outcome measures and data collection We monitored recruitment using App Store Connect, traffic on the eMovit landing page through Matomo Analytics, and interactions with Facebook Ads through Facebook. Participant data cleaning and analysis App Store Connect, Matomo Analytics, and Facebook Ads provided descriptive data in a clean and accessible format via their user interfaces. This data was extracted from the services and reported as such. The data collected via ResearchKit was sourced from the platform, cleaned and prepared for analysis by a statistician (HML). Completion rates on participant demographics, mood, happiness, and symptoms of depression scales were collected. However, the outcome data on these measurements were not analysed and reported on in this publication as the focus of this study was the uptake of and engagement with eMovit. Descriptive statistics of the collected data were analysed using RStudio Version 1.3.1093 (25).
FIGURE 2 Screenshots of the research information and consent screens of the eMovit app, showing study information (A), a 3-item multiple choice participant quiz evaluating their understanding of study participation (B), and confirmation of consent (C). Note. The eMovit app was developed by Brightfish BV for VU Amsterdam. © 2022 VU Amsterdam. All rights reserved.
Results Feasibility of participant recruitment in an app store trial In terms of active recruitment, eighty-three unpaid dissemination activities were conducted (n = 64 by Leuphana University Lüneburg, n = 15 by Thomas More, n = 4 by VU Amsterdam).
These were formulated in lay language and designed to recruit individuals with varying areas of interest to reach the broader population (see Figure 3 for examples of a Twitter tweet in English and German). Furthermore, paid activities were also set up by means of three 2-week Facebook ad campaigns (Table 2). The first part of the first campaign (Phase 1a) directed ad visitors to the eMovit landing page, after which they could continue to the App Store. In total, 1,788 individuals visited the eMovit landing page, of which 1,170 visitors were generated through specific dissemination campaigns. A total of 144 visitors subsequently entered Apple's App Store through that landing page. The first Facebook campaign (Phase 1a) resulted in no actual app installs; therefore, the two-step approach via the landing page was changed to lead potential participants directly to the App Store.
FIGURE 3 Examples of social media posts (Twitter) to disseminate eMovit.
Feasibility of data collection in an app store trial Data collection of dissemination and recruitment activities Between March 1, 2020 and October 31, 2020, the eMovit product page was viewed 10,327 times in the App Store, with viewer peaks between July 8-14, July 29-August 11, and September 16-30, 2020. The time periods with the highest numbers of views overlap with the paid Facebook ad campaigns (see Table 2). The participant flow is presented in Figure 4. All in all, eMovit showed a conversion rate of 0.76% (from impressions to app units). Between March 1 and October 31, 2020, App Store Connect reported 12 crashes and 36 deletions of the app, with no more than 2 deletions per day. Nearly half of the eMovit installs were tracked back to
FIGURE 4 Participant flow. * visitors = app views; ** app referral = visitors referred by another app; web referral = visitors referred by a website.
Data collection from research participants In total, 53 users indicated that they were interested to participate in the research study by clicking the button "Yes, I want to participate", which led to the informed consent procedure. Of those, 36 consented and completed the demographics and the participants quiz and could be included in the study. Table 3 shows the demographics of the included participants. Fifteen participants completed the first PHQ-8 assessment and only 1 participant completed the second PHQ-8 assessment. A total of 2,180 assessments of current mood and happiness states were offered and 203 of these were completed (response rate of 9.3%; total of n = 16). Table 4 shows an overview of the completed measures.
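For clarity, the recruitment and data-collection funnel can be recomputed from the counts reported above. The short Python sketch below reproduces the 0.76% view-to-install conversion and the 9.3% prompt response rate; the step-to-step percentages are added here purely for illustration.

```python
# Recruitment and data-collection funnel, using the counts reported above.
funnel = [
    ("App Store product page views",            10_327),
    ("App installs",                             79),
    ("Interested in participating",              53),
    ("Consented (quiz + demographics)",          36),
    ("Completed first PHQ-8",                    15),
    ("Completed second PHQ-8",                   1),
]

def conversion(numerator: int, denominator: int) -> float:
    """Percentage conversion between two funnel steps."""
    return 100.0 * numerator / denominator

for (label, count), (_, previous) in zip(funnel[1:], funnel):
    print(f"{label}: {count} ({conversion(count, previous):.2f}% of previous step)")

# Overall view-to-install conversion and the mood/happiness response rate
print(f"View -> install: {conversion(79, 10_327):.2f}%")                  # ~0.76%
print(f"Mood/happiness prompts completed: {conversion(203, 2_180):.1f}%")  # ~9.3%
```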
Marketing strategies to promote their app included the development and launch of an app landing page, a partnership with Apple for a launch video and media outreach, press releases and outreach to journalists, co-promotion efforts with asthma advocacy groups, and active social media promotion (Twitter, Facebook). Chan and colleagues (26) explain the high download number by a combination of media publicity and the ease of app download, and the subsequent decrease in numbers from download to informed consent by the "rigor of the consent process" (26, p. 360). Zens and colleagues (4) reached a download to informed consent conversion rate of 57.60% (953 initial downloads) for their ResearchKit app Back on Track, which is an outstandingly high conversion rate compared to other ASTs. Reasons for this can be multitude and are not discussed by the authors, however, they do highlight the importance of local language versions of the app to facilitate recruitment and retention rates. Zens and colleagues (4) then report a subsequent low download to active participation conversion of 11.2%, which is comparable to this study with only 18.98% of installs completing the first assessment. Other studies show similar patterns with download to active participation conversion rates around 10% [e.g., (6,27)]. Chan et al. (26) argue that mobile health developers must understand and incorporate the psychosocial and behavioural needs of mobile users to counteract the steady decrease of user rates from download to informed consent to continued participation (including survey completion) in ASTs. Zens and colleagues (4) support this by suggesting the incorporation of, for example, instant feedback mechanisms, gamification approaches, or the provision of relevant treatment information in mobile health apps. The low recruitment numbers and conversion rates in the current study could be due to a variety of reasons. Initial download rates may have been influenced by users' perceived need for a BA app such as eMovit and/or the attractiveness of eMovit to potential users. Because eMovit is a lifestyle app rather than a medical app, the user's gains associated with using the app are more difficult to convey and it is more likely that potential users will not feel the need to engage with such a lifestyle app. Anticipated quality of the app before download could also be a key factor in the users' decision to install the app, this might likely be influenced by the low number of app ratings in the App Store. Additionally, Apple ResearchKit precludes adoption by Android users hereby inherently lowering potential download rates to iOS users only. The low conversion rate from download to informed consent to study participation might have been influenced by the app content (e.g., perceived user-friendliness of the app, the perceived quality of the app), the users' expectations related to continuously using the app and participating in the study, or users not being willing to provide their data for research (due to privacy reasons or a lack of interest in research). It is beyond the methodology of this study to make a conclusive statement about the reasons for the low recruitment numbers and the sudden drop in participant engagement after the initial download. However, this study offers some considerations and lessons learnt for future mental mHealth research. Considerations for future (mental) mHealth research (1) The technical incorporation and use of ResearchKit. 
ResearchKit provided the research team with the tools to set up a high-quality trial infrastructure. Processes which can be time-consuming or difficult to organize in traditional (online) research, were optimized and easier to implement through ResearchKit. Zens et al. (3) describe ResearchKit as an "easy-to-use framework and powerful tool to create medical studies" (p. 1). This is in line with the current study where the technological incorporation of research componentsinformed econsent and data collection toolsby ResearchKit was perceived as fairly straightforward. Templates for creating questionnaires and participant information pages were provided, however, those templates restricted the level of design options for, e.g., number and allocation of text boxes, number of possible words per text boxes. Study information was a prominent component of the study flow and it was possible to easily navigate through the provided information. The corresponding research quiz, a preparation for the e-consent based on the study information, might raise the user's attention towards critical study information in a playful way and enables the researcher to check whether the participant understood the provided study information. The econsent template, which could easily be incorporated in the flow of the study provides a strong tool to simplify time-consuming paper work for both, the participant and the researchers. We conclude that, from a technological point of view, ResearchKit provides a robust and feasible mean to facilitate the conduct of ASTs. (2) App Store Trials using ResearchKit require a well-designed app dissemination process. Due to the technological advantages ResearchKit offers, the traditional participant recruitment process is replaced by disseminating an app. The app is launched in the App Store and every iPhone user with access to Apple's App Store can download the app and partake in the study. In order to recruit participants, the research team needs to bring the attention of potential app users to the app. The launch of the app in the App Store is thereby solely the minimal dissemination strategy. Isolated examples [e.g., (5,11,26)] show that this can be enough to boost download rates, however, the launch itself generally does not make an app sufficiently popular and visible. Recruitment via app stores is (in most cases) not a self-runner; what we need is effective dissemination. Dissemination is defined as a targeted approach of distributing information to a specific audience (28). Thereby, dissemination strategies need to be tailored to the purposes of the study and the target population. Those parameters define the scope (specific vs. broad, e.g., population with specific health need vs. general population) and might influence the success of the dissemination campaigns. Dissemination processes benefit from careful planning which can be facilitated by using a dissemination framework as underlying guidance. An overview of existing dissemination frameworks can be found online [https:// dissemination-implementation.org/viewAll_di.aspx]. (3) The issue with sampling from the "social media population". While recruiting through social media has many advantages such as low costs and wide reach, it is important to take a critical look at the representativeness of samples recruited through social media channels. Recruitment from social media often follows the so-called "river" sampling, as did the social media sampling in this study. 
River sampling is a non-probability sampling approach named as such because researchers using the traffic flow of a web page (here Twitter and Facebook) and "catching some users floating by" (29, p. 137). For reasons such as unequal access to the Internet, differences in users' preferences for social media use, or age differences in social media use across the population, river sampling is likely to lead to coverage bias. Therefore, it is not possible to build a probability model linking the "river" sample to the general population without knowing the demographic distribution of users of a service and the frequency of use of the service (28). In other words, without a well-known and defined sampling frame, no representative sample can be recruited. In addition, the researcher has little control over the reach of recruitment strategies because they Bührmann et al. 10.3389/fdgth.2022.978749 Frontiers in Digital Health depend on the algorithms of the various social media services (30). Without knowing the algorithms of information distribution in the medium, it will not be possible to design recruitment strategies to reach every potential participant, and even if the algorithm is known, it is probably almost impossible to reach every potential participant of the social media service. This again provides unequal opportunity to participate in the study and therefore a biased representation of, what we defined as, general population. Lastly, self-selection of respondents into the sample is a potential risk to the representativeness of the sample. We conclude that not only do conversion rates through social media recruitment appear to be low across studies, but social media sampling also carries a high risk of sampling bias and therefore should be used with caution as it could confound the science. (4) The sheer evidence of need does not guarantee engagement of users. Evidence suggests that Internet-delivered BA is efficacious in the treatment of depression and in increasing general wellbeing (31, 32). Due to its parsimoniousness nature, BA is a suitable candidate to be delivered through a mobile application. eMovit aimed to reach the general population and was therefore designed in a focused manner, reducing functionalities to the very basics, using simplicity as a strategy to remove barriers and reach many potential users. Despite those enabling pre-conditions, the fact that poor mental health rates are high amongst the population (33), and smartphone-based interventions are promising in decreasing the user's threshold in taking advantage of psychotherapeutic (preventative) treatments, engagement of users in eMovit was low. eMovit is not an isolated case: While it is estimated that around 20.000 mental health apps exist (34), a recent analysis found that most user engagement with mental health apps is focused on only two apps (i.e., Calm and Headspace; 35). Problems with the use of eMovit and other mental health apps could be due to several interacting issues, including the appeal of the app, awareness of the app's existence, and potential users' psychosocial and behavioral barriers to actively engaging with the app (e.g., habits). At its best, a well-designed, user-friendly, and useful app leads to positive feedback, and satisfied users attract and motivate other users. Understanding what factors drive users to engage with apps is critical to harnessing the potential of mental health apps for individuals effectively. 
This understanding could be achieved by involving the end user in the app development process, thereby addressing implementation issues from the outset. (5) We need guidelines for conducting and reporting App Store (effectiveness) Trials. Planning and conducting mental mHealth trials remains novel, despite the clear need for effectiveness testing in mental mHealth and the fact that ASTs provide the infrastructure for large-scale research trials. Additionally, the existing literature also shows a need for standardized and comprehensive reporting of ASTs, including detailed descriptions of dissemination/recruitment strategies, to increase transparency, reproducibility and understanding of those trials. Guidance for the evaluation of eHealth exists (e.g., eHealth methodology guide; 36), however, mHealth research encounters unique challenges. Guidelines on AST trial design and reporting as well as recommendations on how to work with challenges of mHealth research might help researchers to do more testing of mental health apps, ensure the quality of these trials, and with that enlarge the evidence-base for mHealth. In turn, the end user would benefit from more effective treatment options and assistance in selecting an appropriate and evidence-based application. Study limitations This AST was based on ResearchKit, which precludes participation from potential participants without an iPhone. Currently, iOS has a market share of 39.17% in Germany, 43.66% in Belgium, and 42.21% in the Netherlands (37). This, coupled with the fact that it is nearly impossible to know the sample frame in social media recruitment, calls into question the representativeness of the sample in this study. The eMovit app, and consequently the implemented dissemination strategies, targeted a general population. Therefore, findings might not generalize to clinical samples since recruitment strategies and conversion rates differ. In addition, we cannot be sure how the COVID-19 pandemic influenced the results since it did cause a rise in mental health problems (38) but also an increased exposure to digital tools (e.g., teleworking), and limitations in which BA activities could be planned (e.g., restrictions in social contact, closing of certain entertainment venues, etc.). It was also beyond the scope of this study to explore why our recruitment results were low, and thus it remains an open question which adjustments would be necessary to increase participant engagement. Conclusion This study highlights important lessons learnt for future ASTs and mental mHealth research and practice. Apple's ResearchKit provides the means for setting up an infrastructure to conduct ASTs within the field of mental health. With the integration of ResearchKit, eMovit was a valuable tool to facilitate several research processes such as informed e-consent procedures and data collection. It could be easily adapted to different defined populations and therefore used as an add-on in trials to support research processes that are time consuming and costly in traditionally conducted trials. While ResearchKit and the eMovit app hold promise for app-based (effectiveness) research, conversion rates remain low and mHealth research would benefit from structured guidance for setting up ASTs, including well-planned considerations for app dissemination and engagement. Recruitment of participants from online and social media platforms needs to be treated carefully as it might bias the sampling results and result in weak science.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by The Scientific and Ethical Review Board.
Funding
This study was funded by VU University, GGZ inGeest and A-UMC, VUMC.
254252840
s2orc/train
v2
2022-12-06T14:26:26.147Z
2020-06-09T00:00:00.000Z
A weakly Compressible, Diffuse-Interface Model for Two-Phase Flows We present a novel mathematical model of two-phase interfacial flows. It is based on the Entropically Damped Artificial Compressibility (EDAC) model, coupled with a diffuse-interface (DI) variant of the so-called one-fluid formulation for interface capturing. The proposed EDAC-DI model conserves mass and momentum. We find appropriate values of the model parameters, in particular the numerical interface width, the interface mobility and the speed of sound. The EDAC-DI governing equations are of the mixed parabolic–hyperbolic type. For such models, the local spatial schemes along with an explicit time integration provide a convenient numerical handling together with straightforward and efficient parallelisation of the solution algorithm. The weakly-compressible approach to flow modelling, although computationally advantageous, introduces some difficulties that are not present in the truly incompressible approaches to interfacial flows. These issues are covered in detail. We propose a robust numerical solution methodology which significantly limits spurious deformations of the interface and provides oscillation-free behaviour of the flow fields. The EDAC-DI solver is verified quantitatively in the case of a single, steady water droplet immersed in gas. The pressure jump across the interface is in good agreement with the theoretical prediction. Then, a study of binary droplets coalescence and break-up in two chosen collision regimes is performed. The topological changes are solved correctly without numerical side effects. The computational cost incurred by the stiffness of the governing equations (due to the finite speed of sound and the interface diffusion term) can be overcome by a massively parallel execution of the solver. We achieved an attractively short computation time when our EDAC-DI code is executed on a single, desktop-type Graphics Processing Unit. Introduction The usual way of modelling the low speed and incompressible flows, called later on as the truly incompressible approach, is based on the law of momentum conservation and the assumption that the speed of sound c s , in comparison to the convective velocity scale, is high enough to be considered as infinite. The assumption c s = ∞ implies the elliptic character of the system. The pressure field is a solution of the Poisson-type equation and is no longer treated as a thermodynamic quantity but rather as a source of momentum which enforces a kinematic constraint on the velocity field, i.e. ∇ ⋅ = 0 . The advantages of this traditional approach are: (1) only one velocity scale is present and (2) the density is indeed constant. The second feature is especially advantageous in the case of two-phase flows where the densities of particular fluids differ-the density varies only at the fluidfluid interface, while it remains constant in the bulk of each phase. There are, however, disadvantages in the numerical context, both in single and multiphase flows: the algorithmic complexity stemming from the operator splitting and the necessity of performing some iterative subprocesses (due to elliptic nature of the governing equations). It is well known that the truly incompressible models do not take a significant advantage of the parallelisation of the computational process, especially when graphics processing units (GPU) are used as the computing devices. 
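For reference, the elliptic pressure problem alluded to above can be written in its standard projection-method form (a textbook sketch, not an equation taken from this paper):

$$
\nabla\cdot\mathbf{u} = 0, \qquad
\nabla^2 p^{\,n+1} = \frac{\rho_0}{\Delta t}\,\nabla\cdot\mathbf{u}^{*},
$$

where $\mathbf{u}^{*}$ is an intermediate, non-solenoidal velocity obtained from the momentum equation without the new pressure gradient. Solving such a globally coupled Poisson equation at every time step is precisely the elliptic, iterative part that limits parallel scalability, which is the point taken up next.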
Only fewfold speed-ups (in comparison to a single desktop CPU) are reported in the literature and the main reason is the elliptic type of the equations. The situation changes significantly if one assumes a finite speed of sound. The governing equations are then of the mixed parabolic-hyperbolic type. They may even become purely parabolic as is the case of the Entropically Damped Artificial Compressibility (EDAC) model for single-phase flows, see Sect. 2. This is advantageous for the algorithmic and numerical handling. Unfortunately, due to disparity of the velocity scales of the convective transport and the pressure waves propagation, which is required to approach the incompressible flow limit, the governing equations are stiff. The resulting time step restrictions can be counterbalanced by efficient parallelisation of the computations. On the other hand, the acoustic (compressible) effects cause some problems when applied to two-phase flows; these issues are comprehensively addressed in this work. In the present paper we propose a novel mathematical model for the simulation of twophase interfacial flows. The flow is described with the assumption of finite speed of sound by means of the EDAC model introduced by Clausen (2013). In our opinion (as argued in Kajzer and Pozorski 2018a) a better-suited name of the approach would be EDWC where WC stands for "weakly compressible". Concepts similar to EDAC were also proposed in Borok et al. (2007), Ohwada and Asinari (2010) and Toutant (2017). An interesting discussion and comparison of these approaches, called also the general pressure equation methods, has recently been presented in Shi and Lin (2020) and Dupuy et al. (2020). It has to be noted that there exist alternative, well established WC approaches to the modelling of incompressible flows. The Lattice Boltzmann Method (LBM) (Succi 2001) which solves the mesoscopic level equations was proven to be an accurate and efficient tool for the simulation of single-and multiphase flows, see e.g. Schoenherr et al. (2011), Moqaddam et al. (2016). In the LBM, the Boltzmann equation in a discretised phase space is solved and the discretisation is done using (most often) uniform grids, or lattices, and a finite set of the propagation velocities. It can be shown that the velocity and density fields, computed from the resolved probability density functions, on the macroscopic level satisfy the mass and momentum conservation equations in the weakly compressible regime. One can also solve the macroscopic equations directly using the finite difference or finite volume methods as recently presented in Bigay et al. (2017) and Vittoz et al. (2019). The hyperbolic character of the mass conservation equation requires the upwind-type schemes for the spatial discretisation which can suffer from the accuracy deficiencies and numerical diffusion in the case of low-Mach number flows; however, some improvements in this topic have been proposed (Thornber et al. 2008). Also, when very low Mach numbers need to be imposed, the accuracy could be impaired due to the appearance of short pressure waves. Since the present work is devoted to the twophase flows we also mention the Smoothed Particle Hydrodynamics (SPH) approach (Monaghan 2012). This method is Lagrangian and meshfree; therefore, it is well-suited for the description of the fluid-fluid interfaces and free surfaces, see for example Olejnik and Pozorski (2020). SPH is most often (due to simple implementation) used with the assumption of weak compressibility. 
On the other hand, SPH is a time-consuming method in comparison to mesh-based approaches and, unfortunately, is only first-order accurate. For the description of the two-phase systems we use here the so-called one-fluid formulation. A comprehensive classification of the two-phase flow models together with a short description of the one-fluid approaches can be found in Tryggvason et al. (2011) and Mirjalili et al. (2017). The one-fluid formulation needs either explicit tracking of the fluid-fluid interface represented by Lagrangian markers or implicit interface capturing using an additional field which is governed by model-specific equations. The class of interface capturing approaches contains the Level-Set (LS) methods, the Volume of Fluid (VoF) method and the Phase Field Models (PFM). The relationship between LS, VoF and PFM was discussed by Wacławczyk (2017). In our opinion, the main feature that distinguishes PFM from LS and VoF is that the PFM approach takes into account, at least approximately, the physics underlying the interface dynamics (including the topological changes) while LS and VoF are based on purely geometrical considerations. In this work we consider the diffuse-interface (DI) model that originated in the work of Sun and Beckermann (2007) and was rendered mass-conservative by Chiu and Lin (2011). The latter model variant (after a re-arrangement of the coefficients) is also referred to in the recent literature as the conservative Allen-Cahn (CAC) equation, see e.g. Fakhari et al. (2017), and indeed can be considered as a member of the PFM family. For the first time the EDAC model with DI technique was proposed in a basic variant, along with rudimentary numerics, in our earlier work (Kajzer and Pozorski 2018b). Here, a considerably generalised model is presented together with a comprehensive account on the discretisation schemes that are developed specifically for the EDAC-DI equations. Recently, a WC approach coupled with the original model of Chiu and Lin was reported by Matsushita and Aoki (2019). In their approach, the flow solver was based on the method of characteristics and the DI model was complemented with the standard LS equation (its solution was used to improve the surface tension computation). This makes the numerical methodology quite sophisticated as compared to our EDAC-DI model which is relatively simpler to solve. This paper is organised as follows. We first put forward the EDAC-DI model and find the expressions for model coefficients. Next, we present the proper numerical methods for solving the governing equations. Then, the results of three-dimensional simulations of steady droplet and binary droplets collisions, including the coalescence and break-up regimes, are presented. Finally, we briefly discuss the computational efficiency of the flow solver implemented for the execution on GPU. As we only show a few examples of twophase flows, the present work should be perceived as a "proof of concept" of the EDAC-DI approach. The EDAC Flow Model Due to its favourable features (Kajzer and Pozorski 2018a), we use the EDAC model as the flow solver. For its detailed derivation for single phase flows the reader is referred to Clausen (2013). Here, for the sake of brevity, we only recall the EDAC equations as needed for further considerations: where p is the pressure, is the velocity, 0 and 0 are the (constant) fluid density and kinematic viscosity, respectively, and is the vector of body forces. Equation (1) is the usual momentum conservation equation, i.e. 
the Navier-Stokes equation of the single phase flow, and Eq. (2) establishes the EDAC model. It is derived from the entropy balance equation (hence the phrase "entropically damped" in the model name). One assumes that the speed of sound c_s ≫ |u|_max, i.e. the Mach numbers Ma = |u|_max/c_s are low. Nevertheless, the incompressibility constraint ∇·u = 0 no longer holds. Therefore, despite the very acronym of EDAC, its affiliation to the family of artificial compressibility (AC) methods is, in our opinion, misleading. In the classical AC approach of Chorin (1997) the concept has been used for steady flows and, when the computations converge, the resulting velocity field is divergence-free. For unsteady flows one introduces an artificial time and iterates the discretised momentum and pressure equations at each physical time step until ∇·u = 0 is satisfied. Clearly, this is not the case for the EDAC approach. On the other hand, the density is (artificially) assumed constant, which is physically consistent (asymptotically) at Ma → 0. Clausen's main idea consists in the introduction of a physically justified (derived from the temperature diffusion) term ν₀∇²p. Therefore, the EDAC model is of purely parabolic type. This is important from the numerical point of view since centred spatial discretisation schemes without addition of numerical or artificial diffusion can be used if the grid is sufficiently fine. Indeed, this was proven by Kajzer and Pozorski (2018a). The pressure diffusive term significantly reduces the noise in the velocity divergence field, bringing the flow closer to the incompressible limit. As for the implementations of EDAC to date, apart from some laminar and turbulent flow cases reported in the original paper of Clausen (2013) and the direct numerical simulation of wall-bounded turbulence (Kajzer and Pozorski 2018a), the model has recently been studied in the context of Large Eddy Simulation (Delorme et al. 2017). To be able to handle two-phase flows in the weakly-compressible approximation, we propose a more general form of the EDAC model, permitting variable density and viscosity (due to material properties of the phases): These equations will be used in the following to obtain the final one-fluid formulation of the two-phase interfacial flow model. Again, the only difference in comparison to the truly incompressible approach is that the divergence-free condition, ∇·u = 0, is replaced by the EDAC pressure equation, Eq. (5). Importantly, the eigenstructure of the system remains unchanged, i.e. information propagates with the maximum speed of c_s + |u|_max. As it was mentioned in the Introduction, one can argue whether the EDAC model belongs to AC or WC methods in the case of single-phase flow since the density variations are then neglected. In Eqs. (3)-(5), the mass conservation is also explicitly taken into account, admitting local density variations that stem both from the two-phase nature of the system and from the compressibility effects. Therefore, it is a true member of the WC family. Importantly, the pressure is explicitly evolved in the considered model, so there is no need for any equation of state. The Diffuse-Interface Model The PFM approach is often associated with the Cahn-Hilliard (CH) equation, see e.g. Magaletti et al. (2013). The CH model has a strong physical foundation: it expresses a conservation law and does not involve any geometrical information, like the direction normal to the interface.
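For readability, the single-phase EDAC system recalled above as Eqs. (1)-(2), and the variable-density form referred to as Eqs. (3)-(5), can be sketched as follows. This is a reconstruction based on Clausen (2013) and on the definitions given in the text (u velocity, p pressure, ρ₀ and ν₀ constant density and kinematic viscosity, f body force), not a verbatim copy of the paper's equations:

$$
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\frac{1}{\rho_0}\nabla p + \nu_0\nabla^2\mathbf{u} + \mathbf{f},
\qquad
\frac{\partial p}{\partial t} + \mathbf{u}\cdot\nabla p
  + \rho_0 c_s^2\,\nabla\cdot\mathbf{u} = \nu_0\nabla^2 p ,
$$

and, permitting variable density and viscosity,

$$
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\mathbf{u}) = 0,\qquad
\frac{\partial (\rho\mathbf{u})}{\partial t} + \nabla\cdot(\rho\mathbf{u}\otimes\mathbf{u})
  = -\nabla p + \nabla\cdot\boldsymbol{\tau} + \mathbf{f},\qquad
\frac{\partial p}{\partial t} + \mathbf{u}\cdot\nabla p
  + \rho\, c_s^2\,\nabla\cdot\mathbf{u} = \nabla\cdot(\nu\,\nabla p),
$$

where ρ, ν and the viscous stress τ are built from the local (phase-dependent) material properties, consistent with the one-fluid formulation introduced below.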
On the other hand, the CH equation is a nonlinear 4th-order partial differential equation (PDE), so it is difficult to solve numerically. Explicit schemes suffer from very short time steps dictated by the high-order spatial derivative of the so-called order parameter, and therefore implicit schemes have to be used. Moreover, issues concerning a spurious shift of the solution arise and spurious shrinkage of the droplets or bubbles occurs. An alternative that is easier to handle numerically is the Allen-Cahn equation, which is a 2nd-order PDE. The serious drawback of that model is the lack of mass conservation. Therefore we have decided to put forward another approach. The very starting point is the PFM-type equation proposed by Sun and Beckermann (2007). This equation involves only 1st and 2nd spatial derivatives but requires the vectors normal to the interface. However, the model does not conserve mass. This deficiency was remedied by Chiu and Lin (2011) who introduced the conservative PFM given by the following 2nd-order PDE: In the above equation φ : ℝᵈ ↦ [0, 1] is the order parameter (which we will also call the phase indicator function); it takes values 0 and 1 in the regions occupied by the separate fluids. The interface is understood here as a finite-width transitional band where 0 < φ < 1; its actual position is determined by the isoline (or isosurface in 3D) φ = 1/2. The direction normal to the interface is defined by a unit vector n = ∇φ/|∇φ|. If we neglect the advection term, the dynamics is similar to the Burgers equation: the compressive term φ(1 − φ) tends to create a "shock" and the diffusive term smoothes it. The steady-state solution of Eq. (6) in one spatial dimension is given by Chiu and Lin (2011): where x is the local coordinate in the direction perpendicular to the interface located at x̃. The constant ε has the dimension of length and controls the width of the interfacial region. Obviously, real-world values of ε of the order of 10⁻⁹ m can not be used in any feasible numerical solution of macroscopic flow problems. Therefore, one sets the value of ε proportionally to the spatial resolution Δx. The width of the interface has a significant influence on the stability and accuracy of the solution since it controls the smoothness of the density and pressure gradients across the interface. On the other hand, setting a large value of ε reduces the effective resolution of the simulation. The second model parameter in Eq. (6) is γ. It obviously has the dimension of velocity and controls (in the absence of advection) the rate of approaching a steady state. From the literature it is, however, not clear what are the proper values of γ from the physical point of view. Let us rewrite Eq. (6) in the form which is also referred to as the conservative Allen-Cahn equation (Fakhari et al. 2017): where W = 4ε and M̂ = γW/4. It is now clear that M̂ is a diffusion coefficient (it has the dimension of m²/s), sometimes referred to as the mobility. This is somehow improper since the dimension of mobility is, in fact, m³s/kg (Magaletti et al. 2013). To retrieve the dimensional consistency let us recall the Cahn-Hilliard equation (Magaletti et al. 2013): where the physical quantities are the fluid mobility M and the surface tension coefficient σ. Clearly, the coefficient 3 Let us now focus on the choice of the mobility value, which is not straightforward. Physically-sound values of the mobility M are of the order of 10⁻¹⁷ m³s/kg.
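The conservative phase-field equation referred to above as Eq. (6), and its one-dimensional equilibrium profile (Eq. (7)), have the following well-known form, reconstructed here from Chiu and Lin (2011); the symbol names φ, γ, ε and n follow the definitions used in the surrounding text rather than a verbatim copy of the paper's notation:

$$
\frac{\partial \phi}{\partial t} + \nabla\cdot(\mathbf{u}\,\phi)
  = \gamma\,\nabla\cdot\big(\varepsilon\,\nabla\phi - \phi(1-\phi)\,\mathbf{n}\big),
\qquad
\phi(x) = \frac{1}{2}\left[1 + \tanh\!\left(\frac{x-\tilde{x}}{2\varepsilon}\right)\right],
$$

with n = ∇φ/|∇φ| and x the coordinate normal to the interface located at x̃. The hyperbolic-tangent profile makes explicit how ε sets the nominal interface thickness.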
Although they reflect the microscopic nature of the interfacial phenomena, such low values can not be handled numerically in an otherwise macroscopic model and, alike for , higher values have to be set. Importantly, there exist a value of mobility which is optimal when the CH equation is used for the two-phase flow simulation and we will follow the work of Magaletti et al. (2013). Although their analysis was performed for the CH equation, Eq. (9), one can expect that the result gives at least reasonable approximation of a proper mobility value in the CAC equation since it reveals the same dynamics. The dimensionless mobility is * = ∕(UL 2 ) , where U and L are characteristic velocity and length scales. Magaletti et al. (2013) recommended * = 3Cn 2 with Cn = ∕L being the Cahn number. Therefore, we get: Using the above relation in Eq. (10) and recalling that = 4 , we get: This way, we obtain a physically-sound and no longer adjustable estimation of the parameter in Eq. (6) as = 12U . It should be emphasised that this value is much higher than in the original work of Chiu and Lin and in the recent paper of Mirjalili et al. (2019) who set = U . The probable reason is that the DI model considered there was coupled to truly incompressible flow solvers and high values of led to prohibitively short time steps when explicit time integration was used. Moreover, high values of quickly lead to severe unphysical deformation of the interface if improper spatial discretisation is applied, as we will show later. To obtain our final model we will use Eq. (12) and for convenience we will drop the overbar from the symbol . Let us make an additional comment on the original conservative level set (CLS) method of Olsson and Kreiss (2005) since it is quite popular and, in a sense, related to the DI model we consider here. The CLS equations are: where is the artificial time of the so-called re-initialisation stage, Eq. (14). This model can be considered as a splitting of the advection and DI operators in Eq. (6) by setting = t and taking one re-initialisation step. Although the normal vectors used in the CLS are fixed during the re-initialisation process ( 0 denotes the normal vectors obtained from distribution at the most recent physical time step), the number of steps in the artificial time with given acts in a similar way like the mobility: the long-time integration of the re-initialisation equation corresponds to high mobility value. Therefore, insufficient re-initialisation time after each physical time instant can lead to non-physical results and requirement of special numerical treatment, as exemplified by artificial rupture of the gas film in the simulation of droplets collisions, cf. Amani et al. (2019). Olsson et al. (2007) solved this issue by proposing another CLS model in which the diffusive term is projected on the direction normal to the interface, resulting in a formulation based on purely geometrical considerations. Another remark should be made here: using an iterative subprocess in artifical time between subsequent physical time steps (the level-set methods, also some algebraic VoF approaches, see e.g. So et al. 2011) rises questions about the accuracy since the distribution of the density is modified while the distribution of momentum remains unchanged. The EDAC-DI Model The main idea of the one-fluid formulations is that the local fluid properties are computed (directly or after some mollification as in sharp interface methods, e.g. 
VoF) from the phase indicator function: where 0 , 1 and 0 , 1 are the densities and viscosities of the phases, respectively. Without the loss of generality we can assume 1 > 0 . For brevity, we will also use the notation [ ] = 1 − 0 for the jump of the density across the interface. Provided that the densities differ, we can write the trivial inverse relation of and to be used next: 3 Using the above relation, from Eq. (6) we can obtain an "interfacial" continuity equation: where = ∇ ∕|∇ | . Obviously, considering Eq. (7), the stationary solution is: The reasons to solve the equation for rather than the one for are threefold: (1) one avoids the multiplication of possible dispersive numerical errors by an arbitrarily large factor [ ] ; (2) there are less arithmetic operations to be performed in the solution algorithm; (3) the interpretation of the governing equations is straightforward. It should be noted that the formulation (17) is less general than solving Eqs. (6) and (15) since it does not permit the fluids density ratio 0 ∕ 1 = 1 which, on the other hand, is less common in practice. We emphasise that the actual interface is now defined in a natural way as the isoline (isosurface) corresponding to = ⟨ ⟩ with ⟨ ⟩ = ( 1 + 0 )∕2 being the mean of the material densities. Apart from the interfacial region, Eq. (17) should reduce to the usual mass conservation law. However, due to the weak-compressibility effects it is possible that ∇ ≠ also apart from the interface. The DI terms and the surface tension acting in the "spurious" interfacial region (i.e. locations where ∇ ≠ is caused by compressibility) could further non-physically steepen the density gradient (or, in case of low surface tension, excessively smear the compressible density variations). Also, for the sake of physical consistency, the EDAC pressure diffusion should not act in the interfacial region: there is no reason to artificially smooth the pressure jump across the interface. Moreover, if the pressure diffusion is not switched off on the interface, a constant shift of mean pressure appears, which is fed by the surface tension. A question arises whether applying the EDAC pressure diffusion is justified at all. The answer is positive since in the well-resolved flow regions it will reduce the action of the limiters used in the numerical approximation. Summarising, one needs to distinguish between the interfacial region and the density fluctuations in the bulk of the phases stemming from compressibility. Therefore, we propose the following EDAC-DI model, cf. Eqs. (17), (4) and (5), as the final formulation: where st is the surface tension force (we do not take the body forces into account for brevity) and is a switch that takes the value of 1 in the interfacial region and 0 apart from it. The density variations due to compressibility are proportional to Ma 2 . The same holds for the phase indicator function, i.e. ∼ Ma 2 and variations of such magnitude appear where ≈ 0 and ≈ 1 . Consequently, unlike in Eq. (6), the constraint 0 ≤ ≤ 1 no longer holds in the WC implementation. Before we explicitly define the switch , let us introduce auxiliary, clipped fields and ̃: The switch is defined as follows: where is a threshold to determine whether a given location is attributed to the interfacial region. Keeping in mind the discussion following Eqs. (19)- (21), we set = Ma 2 . The coefficient has to be chosen carefully. If it is too low the surface tension and the DI terms can possibly act also in the bulk of the particular phases. 
On the other hand, too high value of will artificially narrow the region where the surface tension and DI terms should act, leading to oscillations of the pressure field and excessive diffusion of the density. We have found that = 1 was a proper and rather safe choice in all the cases considered here; however, one has to remember that it is dependent on the numerical method used (in particular the numerical diffusion). The switching procedure can be summarised as follows: (1) get rid of the compressible overshoots (i.e. regions of mass accumulation) where ∼ 1 and the undershoots (mass rarefactions) where ∼ 0 ; (2) remove (by setting ̃ ) the undershoots where ∼ 1 and the overshoots where ∼ 0 ; (3) mark the interface as the locations where <̃< 1 − . Notice that this procedure is performed explicitly and cell-wise so neither the auxiliary fields nor the switch have to be stored. It is clear that in the limit Ma → 0 , when compressible density variations vanish, no switch is needed and the DI terms can be applied in the whole domain keeping the physical consistency. This would result in the parabolic type of the mass conservation equation in the whole domain allowing us to use centred spatial schemes (without any upwinding or artificial diffusion). The dispersive errors would be damped by the DI diffusive term since the cell Peclet numbers Pe = U x∕(12U ) ≪ 1 , as ∼ x . On the other hand, in conjunction with the truly incompressible solvers whose efficiency is based on the absence of sound waves, an implicit time integration has to be applied to the DI equation, Eq. (19), due to the stiff diffusive term, see Sect. 4 for details. The divergence-free velocity field is obviously not our case and a more sophisticated numerical technique than centred schemes for the spatial discretisation has to be used (Sect. 3.1). The surface tension force is defined by Tryggvason et al. (2011): where is the identity tensor, S is the surface Dirac delta function related to the interface and is the surface tension coefficient (assumed constant in this work). The local dynamic viscosity, in order to avoid negative values, is computed according to Eq. (15) using ̃ as the order parameter; the same is done when computing the EDAC pressure diffusion coefficient. We emphasize that the clipped values ̃ are used only for the evaluation of local viscosity coefficients; the density value taken in the computation of other terms is unchanged, so the mass and momentum are conserved. The normal vectors used for the computation of the DI compressive term and the surface tension could also be determined using ̃ , but since the overall behaviour of the solution is sensitive to the numerical errors in the computation of gradients, the use of clipped field ̃ should be avoided ( or are to be used). In the flow cases that we consider in this work the maximal velocity of the flow varies significantly during the simulation (e.g., due to the capillary waves created by the topological changes of the interface). Choosing the reference velocity scale U is therefore problematic. Setting U = | | max based on theoretical considerations before the simulation can lead to, e.g., overestimation of the speed of sound and the mobility. Therefore, we adaptively set the reference velocity as the maximum speed in the domain with 10% safety factor, i.e. U(t) = 1.1 × | (t)| max ; the speed of sound and the mobility are also set adaptively using U(t). 
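As a concrete illustration of the cell-wise switching procedure and the adaptive parameter choice described above, the following Python sketch clips the phase indicator, flags interfacial cells with the threshold δ = Ma², and sets the reference velocity, speed of sound and DI velocity scale adaptively (U = 1.1 |u|_max, c_s = U/Ma, γ = 12U, ε = 0.75 Δx). The NumPy formulation, the symbol names and the simplification of the clipping into a single step are ours; this is a sketch of the described procedure, not the authors' GPU code.

```python
import numpy as np

def interface_switch(rho, rho0, rho1, Ma):
    """Cell-wise switch: 1 in the interfacial region, 0 in the bulk.
    Simplified version of the clipping/thresholding procedure described
    in the text, with threshold delta = Ma**2."""
    phi = (rho - rho0) / (rho1 - rho0)      # phase indicator from density
    phi_clipped = np.clip(phi, 0.0, 1.0)    # remove compressible over/undershoots
    delta = Ma**2
    return ((phi_clipped > delta) & (phi_clipped < 1.0 - delta)).astype(rho.dtype)

def adaptive_parameters(u, v, w, Ma=0.03, dx=1.0):
    """Adaptive reference velocity, speed of sound and DI coefficients."""
    U = 1.1 * np.sqrt(u**2 + v**2 + w**2).max()   # 10% safety factor on |u|_max
    c_s = U / Ma                                   # keeps Ma approximately fixed
    gamma = 12.0 * U                               # DI velocity scale, gamma = 12 U
    eps = 0.75 * dx                                # numerical interface width
    return U, c_s, gamma, eps
```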
Setting the reference quantities adaptively in this way allows us to keep Ma almost fixed during the simulation and to assure a proper value of the mobility without significant overestimation. This is important since very low Mach numbers should be avoided due to excessive numerical diffusion and the creation of short pressure waves of high magnitude; the role of the Mach number was discussed earlier. Additionally, this allows one to achieve higher computational efficiency since the time step is also set adaptively.

Summary of the Model Parameters

The model parameters that have to be set are: the Mach number Ma, the numerical interface width ε, and the mobility coefficient γ. The choice of γ was already discussed and we will always use γ = 12U. During the investigation we found that ε = 0.75Δx is a reasonable choice with respect to the accuracy and resolving power; this results in the interface being resolved using ∼ 8 grid cells. The choice of the Mach number is mainly dictated by the computation of the surface tension: to avoid significant pressure oscillations in the vicinity of the interface one has to take into account the locations where |∇χ| > 10⁻³/Δx. Setting Ma = 0.03, we obtain the switch s = 1 at the positions where the phase indicator variations satisfy Δχ > 9 × 10⁻⁴ (= Ma²).

Numerical Method

To numerically solve Eqs. (19)-(21) we use the method of lines: the equations are rewritten in semi-discrete form with a prior discretisation of the spatial operators, resulting in a system of ordinary differential equations. Let us present the spatial discretisation first.

Spatial Discretisation

For the single-phase flow the EDAC equations are of purely parabolic type and, therefore, one can use a centred spatial discretisation without introducing artificial or numerical viscosity if the grid is sufficiently fine, as was proven by Kajzer and Pozorski (2018a). The case of the EDAC-DI model is different: for physical consistency we switch off the DI terms (including the diffusion) of the mass conservation equation in the bulk of the phases, and the EDAC pressure diffusion is switched off in the interfacial region. This results in a locally hyperbolic character of the equations and, in the absence of diffusion, the centred schemes would lead to unphysical oscillations in the solution. Moreover, later in this work we consider the simulation of colliding liquid droplets immersed in gas. Even though the droplet diameters are very small (less than 1 mm), the resulting Reynolds numbers are of the order of 10³. The situation gets worse at time instants just before the collision, when the velocity rises locally in the region of the gas film significantly above the initial velocity of the droplets and the film has to be resolved using only a few grid cells. We were not able to perform a stable simulation in this case using the central schemes for the discretisation of the hyperbolic part of the governing equations. Therefore, we have chosen to use the scheme proposed by Kurganov and Tadmor (KT) (2000), which is a simple, Riemann-solver-free, upwind-type finite volume scheme for convection-dominated problems. Below we present the details. The discretisation method is shown here for the case of two spatial dimensions (2D). The extension to three dimensions (3D) is straightforward, except for some terms which are also specified in the 3D case. We will use uniform Cartesian grids with the cell edge length denoted by Δx. All variables are arranged in a collocated manner and are stored at the cell centres. Let u = (u, v) and n = (n_x, n_y).
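To illustrate the structure of the KT flux before the 2D details below, here is a one-dimensional sketch for a periodic scalar conservation law. The function and variable names are ours; the minmod limiter is used only for brevity, whereas in the actual scheme the reconstruction is done with the multidimensional MLP procedure described later, and the local speed includes the speed of sound, a = c_s + max(|u_L|, |u_R|).

```python
import numpy as np

def minmod(a, b):
    """A simple slope limiter; stands in here for the MLP limiting used in the paper."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def kt_rhs_1d(q, flux, local_speed, dx):
    """Semi-discrete right-hand side -(F_{i+1/2} - F_{i-1/2})/dx with the KT flux."""
    # MUSCL reconstruction of the left/right states at face i+1/2.
    slope = minmod((q - np.roll(q, 1)) / dx, (np.roll(q, -1) - q) / dx)
    qL = q + 0.5 * dx * slope                 # state extrapolated from cell i
    qR = np.roll(q - 0.5 * dx * slope, -1)    # state extrapolated from cell i+1
    a = np.maximum(local_speed(qL), local_speed(qR))
    # Central-upwind (KT) flux: centred average plus local Lax-Friedrichs dissipation.
    F = 0.5 * (flux(qL) + flux(qR)) - 0.5 * a * (qR - qL)
    return -(F - np.roll(F, 1)) / dx
```

For linear advection one would pass, e.g., flux = lambda q: c*q and local_speed = lambda q: abs(c) + c_s to mimic the coupling with the acoustics.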
For clarity we rewrite the governing equations as follows: Above, we already replaced the surface tension, Eq. (25), by the Continuum Surface Stress (CSS) model (Tryggvason et al. 2011) and we use s ≈ |∇ |∕[ ] . The CSS model advantage is that it conserves the total momentum which is important in the flow cases dominated by inertia. The main drawback of CSS (with respect to the popular CSF approach) is the appearance of stronger numerical side effect, the so-called parasitic currents, see Tryggvason et al. (2011). We analyse this phenomenon in Sect. 4. We emphasise that the density gradients and normal vectors used for the computation of the surface tension are marked by a hat symbol. They need to be distinguished from the ones used in the DI equation (without hat symbol) since they are discretised in different ways. In the following we will use the approximation of local kinematic viscosity at the cell faces given by: The local dynamic viscosity and the switch are computed in the same way. The viscous flux in the momentum equation is discretised using centred differences and for the off-diagonal terms the arithmetic mean of cell-centred values is used: The density and pressure diffusive terms are discretised by analogy with Eq. (30). The CSS term in Eq. (27) is discretised using the following approximations: 1 3 and The normal vectors are computed as follows: Namely, the density gradients on the cell faces are computed as the average of the cell centred values while the gradient norms are computed as the average value of norms and not the norm of the average gradient; please note that � |∇ | ≠ |∇ | . We have found that this discretisation of the CSS model allows to avoid spurious topological changes of the interface during coalescence process and results in the lowest magnitude of the parasitic currents among the other combinations of schemes. This observation corresponds, to some extent, to the results of Jamet et al. (2002). The KT scheme (Kurganov and Tadmor 2000) for the mass and momentum convective fluxes reads, respectively (we present only the x-direction): where a i+ 1 2 ,j = c s + max(|u L i+ 1 2 ,j |, |u R i+ 1 2 ,j |) and the reconstructions of the left and right states of a generic variable f (here, , u or p) are done by the MUSCL procedure: where x f i,j is the x-component of the gradient of f computed using a chosen limiting procedure. The above discretisation of the momentum convective term is, however, improper in our application: the density gradient in the interfacial region, that stems from the material properties and not from the flow, should not add diffusion to the momentum flux. Therefore, using the elementary identity x (fg) = f x g + g x f and the mean value of the left and right states, we can fix this problem by setting: while keeping consistency at the locations in the bulk of the phases. The pressure flux is also computed using the KT scheme with the MUSCL reconstruction. Let us now present the discretisation of the EDAC pressure equation. The required left and right states of the velocity components and the pressure are already computed for the discretisation of the momentum fluxes. The acoustic and convective terms read: It should be emphasised here that we do not apply the switch for the numerical diffusion, although we turn off the EDAC pressure diffusion term in the interfacial region. This is required to control the pressure oscillations during the topological changes of the interface. We now discuss the discretisation of the DI terms of Eq. (26). 
The discretisation of the momentum and mass fluxes consists in approximation of the flow fields (ρ, u and p) at the cell faces and then the numerical fluxes are computed using these face values. Such a procedure can be also applied using centred interpolation of variables, leading to the so-called skew-symmetric or split forms of the nonlinear fluxes, see, e.g., Pirozzoli (2010), Kennedy and Gruber (2008). This approach, however, should not be applied to the DI compressive term. Instead, we should use the so-called divergence form: compute the compressive fluxes at the cell centres and then average them on the cell faces. Namely, setting and we compute while the following split form should be avoided: When using the above form, the normal vector at the cell face could be computed using a multidimensional stencil, like it is done for the computation of the off-diagonal terms of (38) the viscous stress tensor, or by averaging the neighbouring cell centred values. Unfortunately, independently of the way the normal vectors are computed, the split forms lead to a spurious deformation with tendency to "squaring"-a circular interface becomes a square. It was found that computation of the normal vectors at the cell centres also needs special attention as far as shape preservation of the advected structures is considered. The first option is to discretise the density gradients in the dimension-by-dimension (DBD) manner using the discrete operators introduced above: However, this leads to a spurious deformation of the interface, see Fig. 1. These deformations are more significant with increasing and decreasing . We found that in 2D cases it is much better to use a genuinely multidimensional (MDIM) stencil and compute the The non-isotropic behaviour of the numerical solutions of PDEs with the DBD spatial discretisation strategy was described by Shukla and Giri (2014) who recommended the use of genuinely MDIM schemes with isotropic truncation errors. Here, we also recall the proper formula for the discretisation of the normal vectors in 3D since it is not a straightforward analogy to the 2D case: where, e.g., and the interpolation on the cell face is done using 2nd-order averaging formula. Notice that, in practice, since we are interested in the cell centred values, the above formulation is simplified to a form similar to 2D case. For example, to compute the derivative in the x-direction one will not use cells marked by i at all-only i ± 1 will be involved. Morover, the 3D formula is, fortunately, simpler than its straightforward extension of 2D case and does not require the full 27-cell-stencil nor the use of the 2D Simpson rule on the cell faces. In the present paper we apply these formulae to the computation of the DI normal vectors and the gradients required by the Multidimensional Limiting Process (MLP) (Hubbard 1999) that we use for the flux approximation. The MLP procedure in 2D is described in "Appendix". The necessity of using the multidimensional reconstruction will be proven in Sect. 3.3. Temporal Discretisation The time advancement of the solution is done by means of the 2nd-order, strong stability preserving Runge-Kutta (SSPRK2) method, see e.g. Gottlieb and Shu (1998). Our choice is dictated by the rich variety of physical phenomena governed by the EDAC-DI equations and we look for overall robustness of the numerical methodology. 
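A minimal sketch of one SSPRK2 step, written for a generic semi-discretised right-hand side rhs (our notation); this is the standard Shu-Osher form of the method.

```python
def ssprk2_step(q, rhs, dt):
    """Second-order, strong-stability-preserving Runge-Kutta step (Heun/SSPRK2)."""
    q1 = q + dt * rhs(q)                         # forward Euler stage
    return 0.5 * q + 0.5 * (q1 + dt * rhs(q1))   # convex combination of Euler steps
```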
An additional advantage is that this method requires only two registers per unknown variable, which is important in GPU computing (due to the relatively low global memory available). The stability of explicit time integration schemes for the EDAC-DI equations in d spatial dimensions is ascertained when Δt ≤ min( Cr Δx/(c_s + |u|_max), C_diff Δx²/(2d D) ), where the generalised diffusion coefficient is D = max(ν₀, ν₁) (stemming from the momentum and pressure equations) or D = γε (stemming from the DI equation). The maximum allowable time steps also depend on the applied type of spatial discretisation. When the surface tension is taken into account, another time step restriction established by Brackbill et al. (1997) applies. It stems from the CFL condition based on the estimated speed of capillary waves, c_σ² = σk/(ρ₀ + ρ₁), where k is the highest resolved wavenumber; for the 2nd-order, three-point central scheme we have k = π/Δx. However, as pointed out earlier, we set the speed of sound and the mobility adaptively by tracking the maximal velocity of the flow, which also takes the capillary waves into account. Therefore, the restriction on the time step given by Eq. (50) can be neglected. An interesting discussion of the time step constraints stemming from the surface tension is presented by Denner and van Wachem (2015). Let us compare the relevant time step restrictions. In this work we consider low viscosities; therefore we apply the CFL condition and the DI diffusion-related time step limit, written in terms of the proportionality coefficient c = ε/Δx. For the parameters used, the ratio of these time steps is r ≈ 1.05 Cr/C_diff. This means that one should apply time integration methods for which the maximum allowable values of Cr and C_diff satisfy Cr < C_diff. This is the case for most of the Runge-Kutta methods, while, e.g., the Adams-Bashforth methods should be avoided. On the other hand, when using a large c, the diffusive time step constraint will always be more severe than the CFL one. In such a case one could consider the use of, e.g., the Runge-Kutta-Chebyshev methods, which have a large stability region along the real axis. When the advection-diffusion equation is considered and the maximal Δt_CFL and Δt_DI are of similar magnitude, as in our case, the maximal allowable Cr is a function of the Peclet number, Cr = Cr(Pe). In the presence of advection, the maximal allowable C_diff is also lower than in the pure diffusion case. Time integration methods with Cr less sensitive to the diffusion coefficient are proposed by Torrilhon and Jeltsch (2007); however, the improvement is achieved at the cost of additional storage. We have found that, in the considered range of the model parameters, stable computations with SSPRK2 are possible when Cr ≤ 0.5 and C_diff ≤ 1.2, resulting in r < 1/2, so it is still the speed of sound that limits the time step.

Assessment of the Numerical Method for the DI Equation

First, we analyse the influence of the chosen discrete forms of the compressive part of the DI flux and of the normal vector approximations on shape preservation. The test consists in solving Eq. (12) without advection in a periodic square domain [0, 1] × [0, 1]. The initial condition is a circular interface of diameter D = 0.5, set by adapting Eq. (7) to two dimensions. The resolution is set such that D/Δx = 50. The time step was set according to the remarks in the previous subsection assuming U = √2, and the computations are performed until t = 10. The results are shown in Fig. 1. It is clear that the split form, Eq.
(43), gives very bad results independently of the way we compute the normal vectors, while the divergence form, Eq. (42), performs much better. Importantly, if an improper discretisation is used, it is observed that the spurious deformations become stronger with decreasing ε and increasing γ (not shown). On the other hand, using a large ε leads to a loss of the effective spatial resolution and should be avoided. Fortunately, the divergence form with a multidimensional scheme for the discretisation of the normal vectors offers an enormous improvement over the other forms and allows us to use an arbitrarily high value of γ without significant non-physical deformation of the interface. To investigate the proper scheme for the advection term we tested, along with the already mentioned MLP reconstruction, three popular one-dimensional limited slope reconstructions: superbee (SB), monotonised-central (MC) with limiting constant 1.3, and the optimised MUSCL (OM) reconstruction of Leng et al. (2012). We consider two cases: advection in the direction aligned with the grid lines and in the direction 45° oblique to the grid lines. In the first case U = 1 and in the second case U = √2. To mimic the situation of coupling the DI equation with the EDAC equations we take the speed of sound stemming from Ma = 0.03 in the diffusive (upwind) part of the KT fluxes. The results for the one-dimensional limiters are shown in Figs. 2 and 3. Clearly, the SB and MC limiters perform very poorly: even after a short time a severe deformation is observed. The OM scheme performs better but still introduces a significant distortion. The MLP scheme outperforms all the other schemes; however, a slight deformation is visible in the case of the grid-aligned velocity field. We used the least compressive variant of MLP with the limiting parameter set to 1, see "Appendix". One can eliminate the spurious deformation by making the limiter a bit compressive by setting this parameter to 1.1 but, on the other hand, we noticed stability issues when this was applied to the EDAC equations. Therefore, in the following we will always use the value 1. We note that similar results are obtained in the truly incompressible formulation; obviously, some tuning of the limiting parameter is then required. To obtain meaningful quantitative information we simulated the so-called Rider-Kothe benchmark case (Rider and Kothe 1998). The case is well known and we do not repeat the details here, see e.g. Olsson and Kreiss (2005). The domain is again a unit square. The off-centre circular interface of diameter D = 0.3 is rotated (in the counterclockwise direction) in a shearing velocity field. The velocity field is reversed after t = T = 1 and the solution should revert back to the initial state at t = 2T. The test was performed at four spatial resolutions using grids built of N = 80, 160, 320 and 640 cells in each direction, resulting in D/Δx = 24, 48, 96 and 192 and Cn = 1/32, 1/64, 1/128 and 1/256, respectively. The results are shown in Fig. 4. In Table 1 we report the outcome of the quantitative analysis. We consider the standard L1 error of the solution and the L_A error, which measures the area and shape deviation of the contour corresponding to χ = 0.5, see Olsson and Kreiss (2005), where H is the step function and χ_exact is the analytical solution, which is also imposed as the initial condition (the errors are computed at t = 2T).
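For completeness, the two error measures can be evaluated as in the sketch below; the names chi and chi_exact are ours, and the precise normalisation used in the paper's Table 1 may differ.

```python
import numpy as np

def rider_kothe_errors(chi, chi_exact, dx):
    """L1 error of the field and the area/shape error of the chi = 0.5 contour."""
    l1 = np.sum(np.abs(chi - chi_exact)) * dx**2
    H = lambda f: (f > 0.5).astype(float)              # step function of (chi - 0.5)
    l_area = np.sum(np.abs(H(chi) - H(chi_exact))) * dx**2
    return l1, l_area
```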
For the low resolutions the order of convergence is higher than 2 since the solutions differ qualitatively. When the resolution is sufficient to prevent the spurious topological change, the order of convergence is lower than 2, which is expected since we keep the interface width constant with respect to the grid spacing Δx.

Fig. 2 The results of the translation test using different limiters for the advection term (the abbreviations are expanded in the text) in the case of the grid-aligned velocity field. The line styles and colors are coded in the same way as in Fig. 1.

Summarising, the considered DI model with the numerical discretisation detailed in Sect. 3 is accurate in terms of shape and volume preservation but requires relatively fine grids. Avoiding the spurious interface deformations is critical when the surface tension plays a significant role in the flow.

Assessment of the Numerical Method for Variable Density EDAC Equations

The numerical methodology proposed in this work differs significantly from the one we used previously for the DNS of channel flow with the EDAC model (Kajzer and Pozorski 2018a). One can expect significant numerical diffusion from the discretisation presented in Sect. 3.1. For comparison purposes we took the well-known case of the Taylor-Green vortex (TGV) at Re = 1600. As the reference we take the DNS data (Jammy et al. 2016) obtained by solving the compressible Navier-Stokes equations with Ma = 0.1, on a grid built of N = 512 cells in each direction. We solved Eqs. (3)-(5) using the numerical method presented above.

Fig. 3 The results of the translation test using different limiters for the advection term (the abbreviations are expanded in the text) in the case of the grid-oblique velocity field. The line styles and colors are coded in the same way as in Fig. 1.

Fig. 4 The Rider-Kothe test: results obtained by solving Eq. (12). The contours corresponding to χ = 0.05, 0.5 and 0.95 are shown. The black dashed lines denote the DI solution at t = T and t = 2T; the gray solid line is the reference solution, where for t = 2T we used the initial condition and at t = T (only the contour χ = 1/2 is shown) we used the solution of the pure advection equation obtained on a 1000 × 1000 grid with the 5th-order monotonicity-preserving (MP5) scheme (Suresh and Huynh 1997).

In Fig. 5 we compare the time evolution of the kinetic energy obtained at two spatial resolutions, N = 192 and 384, at Ma = 0.1 and Ma = 0.03. The speed of sound was set adaptively during the simulation according to the remarks in Sect. 2.3 and the update was done every 100 time steps. Clearly, the numerical diffusion is significant and depends on the Mach number. Using N = 384 we did not achieve satisfactory agreement with the DNS reference data computed with N = 256 (not shown). It should be emphasized that the DNS of interfacial flows should assure a spatial resolution high enough to resolve the interface on a length scale equal to or lower than the Kolmogorov scale l_K. In our case this would result in l_K > 8Δx, leading to a tremendous computational effort; on the other hand, the turbulence would then be sufficiently resolved, even by a strongly diffusive scheme. However, it seems that one should consider the use of adaptive mesh refinement (AMR) and a hybrid spatial discretisation (blending low- and high-order schemes) to enhance the resolving power, since high numerical diffusion is not needed except in the interfacial region.

Fig. 5 The evolution of the kinetic energy in the TGV case.
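For reference, here is a sketch of the standard TGV initialisation and of the kinetic-energy diagnostic reported in Fig. 5. The box size, reference pressure and normalisation below are generic textbook choices, not necessarily those used by the authors.

```python
import numpy as np

def tgv_initial_condition(N, L=2.0 * np.pi, V0=1.0, rho0=1.0, p0=100.0):
    """Classical Taylor-Green vortex on an N^3 periodic box of size L."""
    x = (np.arange(N) + 0.5) * L / N
    X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
    u = V0 * np.sin(X) * np.cos(Y) * np.cos(Z)
    v = -V0 * np.cos(X) * np.sin(Y) * np.cos(Z)
    w = np.zeros_like(u)
    p = p0 + rho0 * V0**2 / 16.0 * (np.cos(2 * X) + np.cos(2 * Y)) * (np.cos(2 * Z) + 2.0)
    return u, v, w, p

def kinetic_energy(rho, u, v, w, dx):
    """Volume-integrated kinetic energy monitored over time."""
    return 0.5 * np.sum(rho * (u**2 + v**2 + w**2)) * dx**3
```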
Results

We have chosen two benchmark cases (Sects. 4.1 and 4.3) to validate the new model and numerics in their basic aspects. We study the coalescence and break-up of liquid droplets immersed in gas in two collision regimes. The proper behaviour of the system in those cases is not straightforward to obtain in many simulation strategies, see e.g. Moqaddam et al. (2016) and Amani et al. (2019). It should be emphasized that both coalescence and break-up are always under-resolved, independently of the model applied. The physically sound thickness of the gas film between the droplets just before collision and of the thin filaments created before the break-up are significantly smaller than the available grid spacing. Moreover, the maximal resolved speed of capillary waves is limited by the grid spacing, see Sect. 3. Additionally (Sect. 4.2), we conducted the simulation of a single steady droplet to validate the model against the Young-Laplace law and to estimate the numerical errors emanating as the so-called parasitic currents. In all the simulations the maximal velocity was updated every 100 time steps, which corresponds to a physical time interval of the order of 10⁻⁵ s.

Head-on Collision of Water Droplets in Air

We first simulate the head-on collision of two equal-diameter (D = 0.5 mm) water droplets immersed in air. The computational domain is a periodic box with edge length equal to 3D. Initially, the droplets' centres are separated by 1.5D. We have chosen the Weber number (based on the initial droplet velocity U₀) We = ρ₁D(2U₀)²/σ = 10. In this case no secondary break-up should occur upon coalescence (Ashgriz and Poo 1990). Additionally, at this relatively low Weber number, the thin liquid lamella will not appear and therefore the simulation is sufficiently resolved (in the context of interfacial structures) for the whole analysed time period. The densities of water and air are taken as ρ₁ = 10³ kg/m³ and ρ₀ = 1.226 kg/m³, respectively. The viscosities are μ₁ = 1.137 × 10⁻³ kg/(m s) and μ₀ = 1.78 × 10⁻⁵ kg/(m s). The surface tension coefficient is σ = 0.0728 N/m. The droplets' initial velocity (aligned with the x axis) is U₀ = 0.60 m/s. The resulting Reynolds number is Re = ρ₁D(2U₀)/μ₁ = 530. The simulation is conducted until t = 1.5 ms. We set the spatial resolution to 192³, so the cell size is Δx = 7.8125 × 10⁻⁶ m. Initially the droplets are resolved using 64 grid cells per diameter. The grid spacing results in the maximal capillary wave speed c_σ = 5.4 m/s. In Fig. 6 we present an overview of the simulation. After the collision and merging, the resulting droplet expands until the surface tension balances the inertia. We did not observe any spurious break-up at the maximum expansion stage (see the right panel of Fig. 10). Then, the "cylindrical" droplet collapses and expands along the initial velocity direction. Importantly, the evolution of the interface corresponds well to the experimental results of Ashgriz and Poo (1990) (although our simulation is done at a lower Weber number). In Fig. 7 we present the interface shape at t = 0.608 ms. Clearly, the axial symmetry is very well preserved, even in the regions of high curvature. This confirms that the spatial discretisation schemes work correctly, cf. Figs. 1 and 2. The very moment when the droplets coalesce is shown in Figs. 8 and 9. Most importantly, the density distribution is physically correct, i.e. no gas is trapped inside during the collision.
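As a quick consistency check of the set-up above, the quoted non-dimensional groups and the cell size follow directly from the listed material properties; the snippet below simply repeats the numbers from the text (variable names are ours).

```python
rho1, rho0 = 1.0e3, 1.226        # water / air densities [kg/m^3]
mu1 = 1.137e-3                   # water viscosity [kg/(m s)]
sigma = 0.0728                   # surface tension coefficient [N/m]
D, U0 = 0.5e-3, 0.60             # droplet diameter [m], initial droplet speed [m/s]

We = rho1 * D * (2 * U0)**2 / sigma   # ~ 9.9, i.e. We = 10 as quoted
Re = rho1 * D * (2 * U0) / mu1        # ~ 528, i.e. Re = 530 as quoted
dx = 3 * D / 192                      # ~ 7.81e-6 m, the quoted cell size
```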
To see the effect of the choice of , as compared to the recommended value of 12U (see Sect. 2.2), we also tested = 6U which indeed results in improper behaviour: tiny air bubbles become entrapped in the resulting droplet and a region of spurious rarefaction of liquid phase appears (not shown). Although the spatial discretisation we use is highly diffusive, some pressure oscillations are still present in the narrow region between the droplets just before collision. On the other hand, they quickly disappear after the coalescence is completed. The switch acts properly: the surface tension and the DI terms are active only in the interfacial zone. The absolute value of the divergence of the velocity field is high in the gas phase at the locations neighbouring the interface. This is an unwanted effect of applying which sharply switches off the surface tension. On the positive side, the variations of ∇ ⋅ are only local. A larger weakly-compressible structure is visible in the under-resolved gas film escaping from between the droplets (see Fig. 9: third row, middle column). The values of the velocity divergence are quite high but one has to remember that the variations of the density due to the compressibility effects scale as ∼ t ∇ ⋅ and in our simulations t ∼ 10 −8 s which results in ∕ ∼ 1% as shown later. In Fig. 10 the pressure field cross-sections are shown at later time instants when the interface is strongly curved. The pressure is smooth and, as expected, it achieves extremal values near the interface where the curvature is the highest. Finally, in Fig. 11 we report some quantitative results. In the left panel the history of the maximal velocity in the flow is presented. Three peaks are visible: the first one corresponds to gap draining (the gas film ejection from the region between the droplets); the second and third ones are due to the capillary action caused by the highly curved interface, compare with Figs. 6 and 10. In the right panel of Fig. 11 we show the history of normalised extreme density values and the volume bounded by the interface. The results are presented for t > 2 × 10 −6 s for clarity. From that moment on, the discontinuous initial condition for the velocity which imposes a significant compressible effect (the gas phase density locally decreases by ∼ 3% ) is relaxed. At later times, the minimum density of gas decreases locally (in the vicinity of the interface) by not more than 1.6% . The density of liquid phase is less prone to the cumulation and its maximal deviation from 1 does not exceed 0.1% . The volume of phases bounded by the surface = ⟨ ⟩ is well preserved and varies by ± 0.5% during the simulation. Steady Water Droplet in Air Usually, the test of steady droplet is performed in a domain bounded by the solid walls with no-slip boundary conditions but here we use the periodic domain. This choice makes, in fact, the domain infinite so (due to the parasitic currents) there is no reason to perform On the other hand, shorter simulations can also bring some meaningful outcome. We kept the settings from the previous case and considered three spatial resolutions: 128 3 , 192 3 , 256 3 which corresponds to ∼ 43 , 64, and ∼ 85 grid cells per droplet diameter. The simulations were performed until t = 2 × 10 −4 s . The reference velocity scale U was fixed during the simulations and its value was set to be (approximately) the initial droplets speed from the test of head-on collision, i.e. U = 0.6 m∕s. In Fig. 
12 we show the time-averaged pressure profiles normalised by the pressure jump from the Young-Laplace law, p_YL = 4σ/D, which in our case is equal to 582.4 Pa. For the time averaging, the data were taken at 100 equally spaced time instants; however, some acoustic effects are still visible at the highest resolution. The pressure levels inside the droplet agree well with the theoretical prediction; the maximal deviations are less than 1% and decrease with increasing spatial resolution. On the other hand, sharp local pressure peaks appear in the liquid phase in the vicinity of the interface. This effect is caused by the switch and is more significant at higher resolution, when the numerical diffusion is weaker. In Fig. 13 we present the history of the maximal velocity of the spurious currents, expressed by the capillary number Ca = μ₁|u|_max/σ, and the total kinetic energy k. The maximal velocity of the spurious currents does not reveal a monotonic behaviour with increasing spatial resolution (at least in the considered range of Δx). This is explained as the interplay of the numerical error of the CSS discretisation and the numerical diffusion of momentum. The maximal velocity attains a constant level quite quickly but, unfortunately, it is higher for higher spatial resolution. Interestingly, the case D/Δx ≈ 43 is quite different from the two others: it results in the highest velocity which, additionally, does not "saturate" during the simulation. It seems that in this case the numerical errors stemming from the discretisation of the surface tension are high when related to the amount of numerical diffusion. On the other hand, the higher the spatial resolution, the lower the kinetic energy. This means that, although the velocity magnitudes can grow with increasing resolution, the region affected by the spurious currents becomes smaller. We have also found that the magnitude of the parasitic currents depends significantly on the interface width (the value of ε). The deficiencies reported in this section stem from the sharp switching between the interfacial and bulk regions, which was confirmed by setting the switch threshold to 0.1 Ma² (in the case of the steady droplet this also assures the modelling consistency). As an alternative to CSS one could consider the use of the conservative and well-balanced surface tension model proposed recently by Abu-Al-Saud et al. (2018), which is, however, much more complicated than CSS and is not straightforward to apply in our model. An investigation of the influence of this threshold value and of the application of other surface tension models is warranted.

Off-Centre Collision of Hydrocarbon Droplets in Nitrogen

In this section we present a more dynamic case of the binary collision of hydrocarbon droplets immersed in nitrogen. The main purpose of this simulation is to verify whether the droplet break-up process is simulated correctly. We exactly reproduce the conditions of the experiment of Qian and Law (1997) (case "o" there). The domain size is now 8D × 3D × 3D. The directions of the initial velocities of the droplets are parallel and the off-centre parameter is 0.68D. The densities of the liquid and gas phases are ρ₁ = 758 kg/m³ and ρ₀ = 1.138 kg/m³, respectively. The viscosities are μ₁ = 2.128 × 10⁻³ kg/(m s) and μ₀ = 1.787 × 10⁻⁵ kg/(m s). The surface tension coefficient is σ = 0.026 N/m. The droplet diameters are D = 0.38 mm. The Weber number is equal to 61 and the Reynolds number is 314. The initial velocity of the droplets is U₀ ≈ 1.17 m/s. The simulation is conducted until t = 2.3 ms.
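The quoted values for the steady droplet and for the present collision case can be cross-checked in the same way (numbers copied from the text; the small difference in Re is consistent with the rounding of U₀, and the variable names are ours).

```python
# Steady water droplet (Sect. 4.2): Young-Laplace pressure jump.
p_YL = 4 * 0.0728 / 0.5e-3            # = 582.4 Pa, as quoted

# Off-centre hydrocarbon droplet collision (Sect. 4.3).
rho1, mu1, sigma, D, U0 = 758.0, 2.128e-3, 0.026, 0.38e-3, 1.17
We = rho1 * D * (2 * U0)**2 / sigma   # ~ 61, as quoted
Re = rho1 * D * (2 * U0) / mu1        # ~ 317, close to the quoted 314
```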
We set the spatial resolution to 640 × 240 × 240 , so the cell size is x = 4.75 × 10 −6 m . Initially the droplets are resolved using 80 grid cells per diameter. The grid spacing results in the maximal capillary waves speed c ≈ 4.76 m∕s. In Fig. 14 we present an overview of the simulation. Till t = 1.84 ms our results agree very well with the experiment, however, due to high numerical diffusion the process is slowed down. The interface evolution is proper since it is not sensitive to the Reynolds number, as pointed by Ashgriz and Poo (1990). At time instants 1.84 ms < t < 2.1 ms , the agreement with the experimental data, where the filament breaks into four droplets, is not good. This is the effect of insufficient spatial resolution. At the final analysed time the interface topology matches well the experimental result when three small droplets are present. In Fig. 15 the detailed view on the appearance of the first break-up event (at t ≈ 1.4 ms ) is provided. Although the interface curvature is high at this location, the density field remains sharp after the the break-up is completed and no mass diffusion is visible; this artifact would occur if too low value of the coefficient were set. Finally, in Fig. 16 we show the history of the extremal density values and the volume bounded by the interface. The maximal and minimal density behave similarly to the previous case of the head-on collision. Since the deformation is much stronger, some loss of volume bounded by the interface occurs during the stretching phase after collision. However, the volume is well preserved overall and less than 1% volume is lost. To summarise the results presented in this section, we point out that: (1) the topological changes of the interface are modelled correctly, no spurious gas entrapment inside the liquid phase nor significant density diffusion are observed; (2) the density variations stemming from the weak compressibility do not exceed 1% which is a commonly accepted criterion to treat the flow as incompressible; (3) the volume bounded by the interface is well preserved; (4) the computed pressure jump across the interface The off-centre collision of droplets. The history of the normalised extremal density values together with the volume V bounded by the surface = ⟨ ⟩ . V 0 denotes the initial volume of the droplets is in good agreement with the theoretical prediction; (5) modelling of the surface tension with CSS in the presence of the switch causes a locally oscillatory behaviour of the solution; (6) the model parameters' values (in particular ) we used did not allow to obtain the convergence to vanishing velocity field in the case of the steady droplet. Computational Efficiency The weakly-compressible flow modelling is inefficient unless the computations can be massively parallelised. This is the only possibility to overcome the severe time step restrictions stemming from the acoustic effects. In the sequential execution one should rather use truly incompressible models. For this reason, all the 3D computations presented in this work were performed using our in-house EDAC-DI code, dedicated for the execution on GPU. The code was written using NVIDIA-CUDA C API. As a computing device we used a typical desktop computer equipped with a single NVIDIA GTX 1080 TI GPU. Let us summarise the total memory requirements for the presented algorithm. To store the flow variables, 5 arrays at two time levels are required. To store the normal vectors and gradients of the variables, 18 arrays have to be used. 
This gives in total 28 arrays. It is still quite economical when compared to the LBM, which for the widely used D3Q19 variant requires 2 × 19 + 4 = 42 arrays for the single phase flow (!). It has to be emphasized that if one uses the traditional, one-dimensional MUSCL reconstruction (using, e.g., the OM scheme which performed also quite well for short-time simulations), the storage requirements of the EDAC-DI solver significantly decrease to 13 arrays, so more than twice. Obviously, one does not need to store the MDIM-limited gradients of primitive variables resulting from the MLP reconstruction, but then they have to be computed twice for each grid cell which significantly increases the execution time since it is the most demanding part of the overall algorithm. In Table 2 we report the wall-clock time of the two simulations of colliding droplets together with the memory use. In our opinion the computational effort is very attractive keeping in mind that the simulations were performed using a desktop computer. Summary and Future Work In this paper we have presented a computational approach for two-phase interfacial flows using a novel mathematical formulation based on the diffuse-interface technique in a weakly compressible framework. Proper values of the model coefficients are found; in particular, the mobility is determined by applying an analogy to the Cahn-Hilliard equation. A robust and efficient numerical method to solve the model equations is proposed. The solution of the DI equation is almost free from the numerical interface distortions. The model has been validated against the steady and colliding liquid droplets immersed in gas, including the topological changes. The outcome of the simulations agrees well with the theoretical and experimental results. The additional advantages of the presented EDAC-DI model are: (1) the use of straightforward and simple numerical solution procedure, (2) very efficient parallelisation of the computational process, and (3) low memory requirements. The need of high spatial resolution, required by the diffuse interface modelling, may be considered as a disadvantage with respect to sharp interface methods such as VoF. Yet, the DI model can deal with the interfacial phenomena without the use of artificial, modelspecific treatments, usually required by the sharp interface methods. In our opinion, the proposed approach to interfacial flows is an attractive alternative not only to other, well established weakly compressible models, such as those used in LBM or SPH, but also to the traditional truly incompressible approach. As stated in the Introduction, the presented paper provides a "proof of concept" of the EDAC-DI model. More work still has to be done and we list the next steps that, in our opinion, make a natural follow-up. From the modelling point of view, the most important issue is the formal analysis and numerical quantitative assessment of the choice of parameter , alike done by Magaletti et al. (2013). The applied CSS model of the surface tension and the chosen implementation of the switch introduce numerical issues and possible alternatives should be investigated. To broaden the range of EDAC-DI applications, an effort on imposing other boundary conditions has to be done, with focus on the wetting of solid surfaces. 
From the numerical point of view, the following aspects seem to be important: (1) the investigation on more efficient numerical methods, in particular the time integration, that would allow a wider range of the DI model parameters to be used; (2) the use of hybrid schemes to reduce the numerical dissipation apart from the interface region; (3) AMR may be considered to apply our model to more complex flow cases, keeping reasonable computational effort; (4) since AMR is not well suited for the GPUs, the brute-force approach by multi-GPU implementation could possibly appear as a reasonable alternative.
208061630
s2orc/train
v2
2019-11-17T14:02:51.396Z
2019-11-15T00:00:00.000Z
Late adopters of the electronic health record should move now Internationally, the last decade has seen the rapid adoption of electronic health records (EHRs) in hospitals and ambulatory care; EHRs are now an accepted enabler of a high-performing health system. 1 However, the uptake and extent of use of this technology varies substantially. At the country level, Estonia and Sweden are among those nations with mature, interoperable EHRs with high patient access. 2 3 In contrast, Switzerland and the UK have only patchy adoption in secondary care, 4 5 and New Zealand, an early exemplar of primary care digitisation, 6 has not yet integrated this information nationally, nor that of hospitals, at scale. Within countries also, there is variation. Even in jurisdictions with high overall rates of adoption, some providers are sophisticated 'super-users' of EHRs, whereas others use only their rudimentary functionalities. 7-9 The adoption and full employment of an EHR reflects multiple factors, not the least of which are the financial and non-financial costs of procuring and implementing these platforms. 10 Federal-level investment, including policy development, use of legislative levers, and support with resources or subsidies, undoubtedly affects the speed of adoption. 11 However, even within a maximally supportive environment, there are those who remain 'EHR-wary', citing both uncertain benefit and risk of harm (particularly to clinicians). In this viewpoint, we argue that these EHR concerns may be overstated, irrelevant and/or mitigable, and should neither be used to justify delays in adoption nor full use.
We maintain that late adopters and 'under-users'-be they countries, hospitals or individual clinicians-should embrace this technology, and would benefit from prioritising its adoption and comprehensive use. We acknowledge that the digital patient record itself-that is, the collation of health encounters with multiple providers across space and time for a given healthcare consumer-has not yet yielded the predicted benefits to healthcare quality or patient outcomes in many countries. 12 That said, in this case the marketing of the 'Electronic Health Record' may be more to blame than the technology itself, as the literal 'record' is potentially both the least beneficial and the most problematic aspect of the technology. If the EHR were a human: the record can be thought of as the 'spine', to which multiple health technology functionalities or 'limbs' attach, and the 'brain' is the data and knowledge obtained from the collation of records from multiple individuals and encounters. Late adopters and under-users should expect benefits to healthcare processes and patients, not principally from the spine, but from the use of the brain and limbs. Many early EHRs evolved from databases designed for billing or scheduling, to which patient information was appended, and the initial limbs were simplistic innovations required to circumvent issues related to their interconnected table structure. 13 However, later limbs were developed purposively and with a clinical lens, and these are largely supported by robust empirical evidence of benefit. E-messaging, clinical decision support, patient portals, and health information exchange have all been shown to improve quality of care and patient outcomes, and all have the capacity to contribute more in time. [14][15][16][17][18] Similarly, the importance of the data brain of the EHR should not be underestimated. Collecting and collating individual-level health and social data enables exploration into healthcare quality and operational inefficiencies, and can inform best management for an individual at the point of care. It can also generate subgroup understandings of health need, utilisation, quality of care Viewpoint and outcomes, which are critical for policy making. These data represent assets of both local and national significance. When weighing up the risk and benefit of EHRs, significant concerns include their impact on clinician satisfaction, their association with clinician burn-out, 19 and their potential to decrease face time spent with the patient. 20 Clinician burn-out is real and has system impact, and it is possible that EHR vendors and developers have neither sufficiently responded to their clinical users, nor understood the importance of the user experience. 21 It is also likely that some of the dissatisfaction and burn-out reflects fatigue with ever-changing EHRs. Like many innovations, the EHR technology is not yet stable, and steady incremental revision (driven in part by evolving requirements) has meant that disruption due to the EHR for early adopters has been frequent and repeated, as opposed to a one-time transformation. However, we suggest late adopters may in fact benefit from the pitfalls already identified by others, in their position to make choices around vendors, products and limbs (the strengths and weaknesses of the various options have become clearer), and to provide supportive environments to their staff. 
Major vendors have made many improvements in their software, 13 and smaller developers are rapidly filling in gaps enabled by Application Programming Interfaces (APIs) and web services. Research has provided signposting to the EHR limbs that offer the greatest potential for gains in quality of care (including the reduction of inequities). 22 The implementation of these functionalities, and those that remove pain points for clinicians in current workflows, should be prioritised, as should design features that improve the user experience and make high quality care 'more effortless'. 23 We also have information around strategies to mitigate clinician burn-out, including identifying those at risk or having problems, and providing training and support. 24 Additionally, scribes, and recent advances in natural language processing, voice recognition and sensing, provide new ways of data capture and user interaction that can support providers. 24 The fundamental reasons to adopt an EHR still hold-access to tools to improve healthcare quality, and to enable population-level understanding. We suggest that late adopters of the EHR, and those who are limited users only, should not further delayremaining on the sidelines is not an effective strategy. Economic analyses show that there are major potential health system savings to be made, 25 and in the context of finite resources, it may hurt financially not to fully use the technology available. For larger health systems, creating a supportive policy environment may take years, as will implementation of an interoperable system at scale. Finally, further postponement of EHR adoption and full use will also delay the benefit patients may gain from better quality of care and exploration of health data. (Although notably, these factors may be of lesser importance to later adopters than, for example, financial drivers 26 ). The EHR represents a technology that is here to stay, and is a critical enabler for organisations wanting to become high-performing health systems. However, in contrast to where it came from-usually billing or administration-a contemporary EHR should have patients and the healthcare team at the very centre, with its primary function being to support and drive the delivery of high value and high quality care. Contributors JR-S had the idea for the article, wrote the initial draft, was responsible for edits and is the guarantor of the final article. KR and DB contributed to drafts and the decision to submit for publication. Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors. Competing interests KR reports personal fees from Orion Health, and personal fees from Precision Driven Health, outside the submitted work; JR-S reports personal fees from Precision Driven Health, outside the submitted work; DB reports grants and personal fees from EarlySense, personal fees from CDI (Negev), Ltd, other from ValeraHealth, other from Clew, and grants from IBM Watson, outside the submitted work. Patient consent for publication Not required. Provenance and peer review Not commissioned; externally peer reviewed. Data availability statement There are no data in this work. 
16356650
s2orc/train
v2
2014-10-01T00:00:00.000Z
2005-10-06T00:00:00.000Z
Diffeomorphism Invariance and Local Lorentz Invariance We show that diffeomorphism invariance of the Maxwell and the Dirac-Hestenes equations implies the equivalence among different universe models such that if one has a linear connection with non-null torsion and/or curvature the others have also. On the other hand local Lorentz invariance implies the surprising equivalence among different universe models that have in general different G-connections with different curvature and torsion tensors. I. INTRODUCTION In this paper, by using the Clifford and spin-Clifford bundle formalism we present a thoughtful analysis on the concepts concerning diffeomorphism invariance and local Lorentz invariance of Maxwell and Dirac-Hestenes equations. Diffeomorphism invariance implies the equivalence among different universe models such that if one has non-null torsion and curvature, the others also possess similar characters. Local Lorentz invariance implies the astounding equivalence between different universe models that have in general different G-connections with different curvature and torsion tensors. This article is organized as follows: after presenting some algebraic preliminaries in Section 2, in Section 3 the invariance of the Maxwell Lagrangian and of Dirac-Hestenes equation, under diffeomorphisms, is investigated from the extensor field formalism viewpoint. Lorentz transformations and the Lienard-Wiechert formulae are derived in this context. In Section 4 active local Lorentz mappings are introduced, regarding their action on electromagnetic fields. The covariant derivative acting on vector and spinor fields is briefly revisited in the light of the Clifford and spin-Clifford bundle context in Section 5. Next, using that formalism we present in Section 6 the Dirac-Hestenes equation in Riemann-Cartan spacetimes and recall that, in general, in theories of that kind the spin generates torsion. Indeed, it is always emphasized that in a theory where, besides the spinor field, also the tetrad fields and the connection are dynamical variables, the torsion is not zero, because its source is the spin associated with the spinor field. However, in Section 7 we show that to suppose the Dirac-Hestenes Lagrangian is invariant under active rotational gauge transformations implies in an equivalence between torsion free and non-torsion free G-connections, and also that we may also have equivalence between spacetimes with null and non-null curvatures. At each point e ∈ M, we denote respectively by T e M and T * e M, the tangent and cotangent spaces. Reference frames are time-like vector fields (pointing to the future) in the world manifold M. If η ∈ sec T 0 2 M is the metric of Minkowski spacetime, there exists in M a global chart (M, ϕ) with coordinate functions {x µ } (said to be in the Einstein-Lorentz-Poincaré gauge) such that for the section {e µ = ∂ ∂x µ } of the orthonormal frame bundle P SO e 1,3 (M, η) we have The pair (T e M , η| e ) ≃ R 1,3 is called Minkowski vector space. The existence of global coordinates in the Einstein-Lorentz-Poincaré gauge permits to identify all tangent (and cotangent) spaces for all e ∈ M . [17] Special Relativity (SR) refers to theories that have the Poincaré group as a symmetry group [18]. This theory asserts that there is a class of physically equivalent reference frames, the inertial ones [19]. The Clifford algebra associated with R 1,3 is denoted by R 1,3 ≃ H(2) and is called the spacetime algebra. 
The Dirac algebra is C ⊗ R 1,3 ≃ R 4,1 ≃ C(4), the Clifford algebra associated with a 5-dimensional vector space endowed with a scalar product of signature (4, 1) [20]. Note that given a general Riemann-Cartan spacetime we also have that Cℓ(T e M, g| e ) = R 1,3 . Also, if g ∈ sec T 0 2 M is the metric associated with the cotangent bundle, we have Cℓ(T * e M, g| e ) = R 1,3 . Fields in the Clifford algebra formalism [21] can be taken as sections of the Clifford bundle of multivectors, denoted by Cℓ(M,g) = ∪ e Cℓ(T e M, g| e ) or as sections of the Clifford bundle of multiforms, denoted by Cℓ(M, g) = ∪ e Cℓ(T * e M, g| e ), which we shall use in what follows, because it is more convenient for our purposes. By Cℓ 0 (M, g) we denote the even subalgebra of Cℓ(M, g). [22] Note that is the Pauli algebra. Then, a Clifford field of multiforms will be considered as a section where T * M = ⊕ 4 p=0 p T * M denotes the exterior algebra of multiforms. The symbol ֒→ means that T * M is embedded in Cℓ(M, g). Let L be an arbitrary proper orthochronous Lorentz transformation. A reference frame is also an inertial reference frame. In the Clifford algebra formalism we can write Eq.(3) as where R ∈ secSpin e 3,1 (M ), i.e., for any e ∈ M, R(e)R(e) = 1, R(e) ∈ Spin e 3,1 ≃ SL(2, C). For a general Riemann-Cartan spacetime (M, g, ∇, τ g , ↑) we denote by {e a } ∈ sec PSO e 1,3 (M ) an orthonormal frame and by {γ a } ∈ sec P SO e 1,3 (M ) the respective orthonormal coframe. The Dirac operator acting on sections of Cℓ(M, g) is the invariant differential operator which maps Clifford fields in Clifford fields, given by When ∇ is the Levi-Civita connection of g, we have where δ g is the Hodge coderivative operator. Thus, in this case we can write Recall that coordinates functions for U ⊂ M are mappings x µ : M ⊃ U → R. These mappings can be considered as sections, x µ ∈ sec 0 T * U ֒→ Cℓ(U, g). In the case of a Minkowski spacetime a special set of coordinates naturally adapted to an inertial frame are the ones in the Einstein-Lorentz-Poincaré gauge [25]. They are global coordinate functions such that In this case the Dirac operator can be written as ∂ = γ µ ∇ eµ = γ µ ∂ µ . III. MAXWELL THEORY AND DIFFEOMORPHISM INVARIANCE Classical Maxwell theory on a Lorentzian spacetime deals with an electromagnetic field F ∈ sec 2 T * M ֒→ sec Cℓ(M, g) generated by a current J ∈ sec 1 T * M ֒→ sec Cℓ(M, g), and the motion of probe charges modelled by triples [26] (m i , q i , σ i ) in the field F. The field F satisfies the equations Eqs. (10) can be written in a general Lorentzian spacetime, taking into account Eq.(8), as Neglecting radiation reaction, the motion of an arbitrary probe charge of mass (m, q, σ) is given by where v ≡g(σ * , ), with σ * the tangent vector field to the worldline σ : R →M of the charged particle. Eqs. (11) and (12) are intrinsic, i.e., they do not depend on any reference frame and/or coordinates used by observers living on different reference frames. Note that the concept of observer is different from that of a reference frame. An observer is modelled by an integral line of a reference frame [1,2]. Indeed, a reference frame e 0 can be viewed as the four-velocity field of a family of test observers whose worldlines are the integral curves of e 0 , each one can be parametrized by the proper time τ e0 , defined up to an additive constant on each curve. 
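For reference, in this Clifford-bundle setting the homogeneous and inhomogeneous Maxwell equations, and their combination through the Dirac operator, are usually stated as below. This is a standard formulation; the sign convention for the codifferential may differ from the one adopted by the authors.

```latex
% Maxwell equations for F in Cl(M,g); J is the current 1-form.
% With the Dirac operator written as \partial = d - \delta_g, both equations
% combine into a single Clifford-bundle equation.
\[
\begin{aligned}
  dF &= 0, \qquad \delta_g F = -J, \\
  \partial F &= (d - \delta_g)\,F = J .
\end{aligned}
\]
```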
Now, Eqs.(10) can be derived from the following Lagrangian density As it is well known, every Lagrangian density written in terms of differential forms is invariant under arbitrary diffeomorphisms[27] h : M → M . Under this diffeomorphism the fields, currents and connection transform under the pullback mapping, i.e., where TM denotes the tensor bundle. The models (M,g, ∇, τ g , ↑, A, F, J) and (M,g ′ , h * ∇, τ g ′ , ↑, h * A, h * F, h * J) are said to be equivalent in the sense that if Eqs.(10) are satisfied with well defined initial and boundary conditions then h * F satisfy the equations with well defined transformed initial and boundary conditions. However, take into account that the equivalence is realized via the introduction of different universe models that are also declared to be equivalent. The first formulae in Eq. (17) is clearly diffeomorphically invariant since it is a well known result that dh * = h * d. The second equation is also diffeomorphically invariant because the pullback mapping can be represented by an invertible dislocated extensor field [6] h −1 : T * M → T * M such that its exterior power extension satisfies h − −1 X = h * X, for any X ∈ sec T * M ֒→ sec Cℓ(M, g) and moreover we can easily show that [9] where ⋆ g and ⋆ g ′ denote the Hodge star operators associated with g and g ′ . In this way the equation The active formulation of the Principle of Relativity implies that if the set of geometrical objects (J, F, (m, e, σ)) living on Minkowski spacetime (M, η, ∇, τ η, ↑) satisfies Eqs. (11) and (12), with physically realizable initial and boundary conditions, then any other set (J,F , (m, e,σ)), with where l a Lorentz mapping, e → le, and l * denotes the pullback mapping, will satisfy ∂F =J (21) and with also physically realizable initial and boundary conditions. It is trivial to see, e.g., the validity of Eq. (21), for indeed since in this case, Note moreover that l is conveniently defined in terms of coordinate transformations by where (L µ ν ) ∈ SO e 1,3 is a Lorentz transformation. Observe that the coordinate functions x ′µ satisfy These coordinate functions are, of course, naturally adapted coordinates in the Einstein-Lorentz-Poincaré gauge to the reference frame γ ′ 0 . Now consider a velocity boost in the γ 1 -direction. We write [28] Consider, moreover, the frames γ 0 , γ ′ 0 and γ ′′ 0 , and the orthonormal sets We have in details, Now, consider a charge at rest at the origin of the γ 0 frame. Its field is By definition, for any u, w ∈ sec T M ,F e ( u| e , w| e ) = F | e ( l * u| le , l * w| le ), from where we getF The electric and magnetic parts of the pullback fields in the γ 0 frame arē and using Eq.(24) we finally haveĒ (x(e)) = qγ where v = (−v, 0, 0) and Eqs.(39) give the field of a charge q moving in the negative x 1 -direction, as can be calculated directly from the Lienard-Wiechert potential formulae. We can also write for the field F, and we have for the electric and magnetic fields in the γ ′ 0 frame, with R ′ (e) = γ 2 (x ′1 (e) + vx ′0 (e)) 2 + (x ′2 (e)) 2 + (x ′3 (e)) 2 . We see the γ ′ 0 observers perceive (of course, through measurements) the field F as the field of a charged particle moving with constant velocity in the negative x ′1 -direction, which is intuitively obvious. Note that γ 0 observers perceive the fieldF in the same way that their colleagues at γ ′ 0 realize F . Finally the observers (at rest) in the frame γ ′′ 0 realize the fieldF as the field of a particle at rest in that frame. 
All these results are classical[30], although not explained in general with rigor. In definitive, the observers at rest in γ 0 can write The relations of all these fields are well-defined and have precise physical meaning. IV. ACTIVE LOCAL LORENTZ ROTATIONS OF THE ELECTROMAGNETIC FIELD Action (13) is also invariant under local (i.e., spacetime point dependent) Lorentz transformations. This statement is trivial once we use the Clifford bundle formalism. Indeed, taking into account that we see that if we perform an active Lorentz transformation where R ∈ sec Spin e 1,3 (M ) ֒→ sec Cℓ 0 (M, g), since τ g = γ 5 which commutes with even sections of the Clifford bundle, we have What is the meaning of the field R F ? A trivial calculation, as shown originally by Hestenes [11], reveals that in the case where R is a constant Lorentz transformation in Minkowski spacetime, the components of R F in the γ 0 inertial frame field are the components of F as seen in the γ ′ 0 inertial frame. But the important question, that is the source of much confusion in the literature arises: is R F a solution of Maxwell equations with a transformed source term RJR −1 ? The answer in the Clifford bundle Cℓ(M, η) formalism is in general negative. Indeed, if in general This can be easily seen in the Clifford bundle formalism, since in general, because, of course, in general, γ µ R = Rγ µ . After recalling the concept of generalized gauge covariant derivatives (G-connections) in the context of Dirac theory we shall investigate if it is possible in some sense to generalize Maxwell equation in order to have local Lorentz invariance. V. COVARIANT DERIVATIVE IN THE CLIFFORD BUNDLE Let {e a }, {e ′ a } ∈ sec P SO e 1,3 (M ) two orthonormal frames and {θ a }, {θ ′a } ∈ sec P SO e 1,3 (M ) the respective dual bases satisfying Let be R ∈ Spin e 1,3 (M ) ֒→ sec Cℓ 0 (M, g), i.e., RR = 1 such that It is well-known that the covariant derivative ∇ X of a Clifford multiform A ∈ sec T * M ֒→ sec Cℓ(M, g) in the direction of the vector field X ∈ sec T M in the gauge determined {e a } ∈ P SO e 1,3 (M ) is given by where ∂ X is the Pfaff derivative of form fields, defined by and where ω X ∈ sec 2 T * M ֒→ sec Cℓ(M, g) is a 2 T * M -valued connection calculated at X in the given gauge. We define from where we find that: From the fact that it follows the expressions A. Covariant Derivative of Spinor Fields The covariant derivative of the representative of a Dirac-Hestenes spinor field is a kind of gauge covariant derivative. Let us explain what we mean by this wording. Let ∇ s ea be the spinor covariant derivative that acting on sections of the left spin-Clifford bundle, i.e., on χ ∈ sec Cℓ Spin e 1,3 (M, g) [5]. The representative ∇ [5]. Suppose that two different spinor fields χ ∈ sec Cℓ Spin e 1,3 (M, g) and Φ ∈ sec Cℓ Spin e 1,3 (M, g) have in the spin frames Ξ and Ξ ′ the same representative χ ∈ sec Cℓ(M, g). Then in each spin frame the representative of the spin covariant derivative is given by where ω ′ X and ω X are related as in Eq.(59) and X ∈ sec T M. Now, let ψ Ξ ∈ sec Cℓ(M, g) and ψ Ξ ′ = ψ Ξ0 R −1 ∈ sec Cℓ(M, g) be the representatives of Ψ ∈ sec Cℓ Spin e 1,3 (M, g) in two different spin frames Ξ and Ξ ′ . We have, as it is easy to verify: which shows that the representative of the spinor covariant derivative in the Clifford bundle is a kind of gauge covariant derivative. Remark 1 From now on we call ∇ (s) X simply the spinor derivative. 
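To make the last statement explicit, the standard expressions used in this literature are (quoted as a sketch; factors and signs follow the conventions of [5]):

$$
\nabla_{X}A=\partial_{X}A+\tfrac{1}{2}\,[\omega_{X},A],
\qquad
\nabla^{(s)}_{X}\psi_{\Xi}=\partial_{X}\psi_{\Xi}+\tfrac{1}{2}\,\omega_{X}\,\psi_{\Xi},
$$

and under a change of spin frame, ψ_Ξ ↦ ψ_Ξ′ = ψ_Ξ R⁻¹ together with the corresponding change ω_X ↦ ω′_X of Eq. (59), one verifies

$$
\nabla'^{(s)}_{X}\psi_{\Xi'}=\big(\nabla^{(s)}_{X}\psi_{\Xi}\big)R^{-1},
$$

which is precisely the sense in which ∇^(s) behaves as a gauge covariant derivative on the representatives.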
In each gauge, if A ∈ sec Cℓ(M, g) and ψ Ξ ∈ sec Cℓ(M, g) is the representative of Ψ ∈ sec Cℓ Spin e 1,3 (M, g) we have [5] ∇ (s) where {x µ } are the coordinate function of a local chart (U, ϕ) of the maximal atlas of M and ∂ (s) the representative of the spin-Dirac operator in the Clifford bundle is given by: with θ a = h a µ dx µ . The variational principle used with the Lagrangian density (Eq.(67)) gives after some algebra [12] where is called the torsion covector. Note that in a Lorentzian manifold T = 0 and we obtain the Dirac-Hestenes equation on a Lorentzian manifold. We observe moreover that the matrix representation of Eq.(69) coincides with an equation first proposed by Hehl and Datta [13]. Eq.(69) is manifestly covariant under a passive gauge transformation as it is trivial to verify taking into account Eq.(64). We also recall that spinors transforms as scalars under diffeomorphism [15] and thus it is easy to verify that the Dirac-Hestenes equation is invariant under diffeomorphisms. We observe yet that, if we tried to get the equation of motion related to a Dirac-Hestenes spinor field on a Riemann-Cartan spacetime, directly from the equation on Minkowski spacetime by using the principle of minimal coupling, we would miss the term 1 2 T ψθ 2 θ 1 appearing in Eq.(69). Is this a bad result? According to [13] the answer is yes, because there, a supposed complete theory, where the {θ a } and the {ω ea } are dynamical fields, the spinor field generates torsion. To put more spice on this issue, let us next analyze what active Lorentz invariance would imply. VII. MEANING OF ACTIVE LORENTZ INVARIANCE OF THE DIRAC-HESTENES LAGRANGIAN In the proposed gauge theories of the gravitational field, it is said that the Lagrangians and the corresponding equations of motion of physical fields must be invariant under arbitrary active local Lorentz rotations. In this section we briefly investigate how to mathematically implement such an hypothesis and what is its meaning for the case of a Dirac-Hestenes spinor field on a Riemann-Cartan spacetime. The Lagrangian we shall investigate is the one given by Eq.(67), which we now write with all indices indicating the representative gauge (i.e., spin coframe) Observe that the Dirac-Hestenes Lagrangian has been written in a fixed (passive gauge) individualized by a spin coframe Ξ and we already know that it is invariant under passive gauge transformations ψ Ξ → ψ Ξ ′ = ψ Ξ R −1 (RR = 1), R ∈ sec Spin e 1,3 (M ) ֒→ sec Cℓ(M, g) once the 'connection' 2-form ω V transforms as given in Eq.(59), i.e., Under an active rotation (gauge) transformation the fields transform in new fields given by Now, according to the mathematical ideas behind gauge theories, we must search for a new connection ∇ ′s such that the Lagrangian results invariant. This will be the case if connections ∇ s and ∇ ′s are representatives of a G-connection as introduced in [12], i.e. [32], or Also, taking into account the structure of a representative of a spinor covariant derivative in the Clifford bundle we may verify that in order for Eq.(74) to be satisfied we need that the Pfaff derivative transforms as and that the connection transforms as Under these conditions we have: and we get Write now, Recall that ω rs n = η ra ω anb η sb = ω r nb η sb , ω r nk = ω rs n η sk . 
Then, from Eqs.(76), (79) and (80) we get Now, we recall that the components of the torsion tensors T and T ′ related to the (tensorial) connections ∇ and ∇ ′ in the orthonormal basis {e r ⊗ θ n ∧ θ k } are given by where [e n , e k ] = c r nk e r . Let us suppose that we start with a torsion free connection ∇. This means that c r nk = ω r nk − ω r kn . Then and we see that T ′ = 0 only for very particular gauge transformations. We then conclude that to suppose the Dirac-Hestenes Lagrangian is invariant under active rotational gauge transformations implies in an equivalence between torsion free and non-torsion free connections. Note also that we may have equivalence between spacetimes with null and non-null curvatures, as it is easily to verify. It is always emphasized that in a theory where besides ψ, also the the tetrad fields θ a and the connection ω are dynamical variables, the torsion is not zero, because its source is the spin of the ψ field. Well, this is true in particular gauges, because as showed above it seems that it is always possible to find gauges where the torsion is null. A. The Case of the Local Lorentz Invariance of the Electromagnetic Field Equations If we are prepared to accept as equivalent spacetimes with different curvatures and torsion tensors then we can modify Maxwell equations in such a way that they are formally invariant under local Lorentz transformations. We start with Maxwell theory on a general Riemann-Cartan spacetime (M, g, ∇, τ g , ↑), where we propose that Maxwell equation is given by where F ∈ sec 2 T * M ֒→ sec Cℓ(M, g) and J ∈ sec 1 T * M ֒→ sec Cℓ(M, g) and ∂ = θ a ∇ ea = d − δ is the Dirac operator in a particular (fiducial gauge) where the spacetime model is a Lorentzian one. Next we propose that F and all RF R −1 are gauge equivalent (in different but equivalent spacetime models (M, g, ∇, τ g , ↑) and (M, g, ∇ ′ , τ g , ↑) R and that the Dirac operator in (M, g, and R ∈ sec Spin e 1,3 (M ) ֒→ sec Cℓ(M, g). As we can easily verify with the formulas of the last section we have and we may say that distinct electromagnetic fields are also classified as distinct equivalence classes, where F and R F represent the same field in different gauges. Note finally that formally we may say that under a change of gauge model the Dirac operator transforms as Such an equation has been used by other authors in the past, but there, its clear mathematical meaning is lacking. In Appendix we present a context in which an equation like Eq.(85) makes its appearance. Under an active Lorentz transformation, generated by R ∈ Spin e 1,3 ⊂ R 0 1,3 the position vector is mapped as x → x ′ . The correct interpretation is that x ′ is the position vector of a point event e ′ = e. We have x(e ′ ) = e ′ − e 0 = Rx(e)R. (A.89) Thus an active rotation is really a diffeomorphism (with a fixed point, namely e 0 ) in M . The action of R on a Clifford field, say, an electromagnetic field F(x(e)) ∈ 2 R 1,3 ֒→ R 1,3 must be interpreted as a mapping F(x(e)) −→ F ′ (x ′ (e ′ )) = RF(x(e))R. . Under these conditions, if F(x) is the field generated at x by a charge at rest in the e 0 frame at positionx, then F ′ (x ′ ) = RF(x)R is the field generated by a charge at rest in the e ′ 0 frame at the pointx ′ . A trivial calculation shows that e 0 frame observers perceive (of course through measurements) the field F ′ (x ′ ) as the field generated by a charge moving in the positive x-direction with velocity v = (v, 0, 0). 
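As an explicit example of the maps used in this Appendix (a sketch, with the boost rotor written in the standard spacetime-algebra form, reversion denoted by a tilde, and tanh α = v):

$$
R=e^{\frac{\alpha}{2}\,e_{1}e_{0}}=\cosh\tfrac{\alpha}{2}+e_{1}e_{0}\,\sinh\tfrac{\alpha}{2},
\qquad
R\,e_{0}\,\tilde{R}=\cosh\alpha\,e_{0}+\sinh\alpha\,e_{1},
\qquad
R\,e_{1}\,\tilde{R}=\sinh\alpha\,e_{0}+\cosh\alpha\,e_{1},
$$

with e₂ and e₃ unchanged. Applying x ↦ x′ = R x R̃ to the position vector and F ↦ F′ = R F R̃ to the field then reproduces the statement above: the e₀ observers see F′ as the field of a charge moving with velocity v in the positive x-direction.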
In the formalism used in this section the Dirac operator is represented by the vector derivative ∂_x, such that ∂_x x^µ = e^µ, with e^µ • e_ν = δ^µ_ν (Eq. (A.93)). Hestenes [11] claims that any Lorentz transformation R sends [33] the operator ∂_x into an operator ∂′_{x′} (Eq. (A.94)). We now know the mathematical meaning of the operator ∂′_{x′} satisfying Eq. (A.94): it has been given by our theory.
201057910
s2orc/train
v2
2019-08-19T13:20:08.763Z
2019-10-01T00:00:00.000Z
3D Reconstruction and Alignment by Consumer RGB-D Sensors and Fiducial Planar Markers for Patient Positioning in Radiation Therapy BACKGROUND AND OBJECTIVE: Patient positioning is a crucial step in radiation therapy, for which non-invasive methods have been developed based on surface reconstruction using optical 3D imaging. However, most solutions need expensive specialized hardware and a careful calibration procedure that must be repeated over time. This paper proposes a fast and cheap patient positioning method based on inexpensive consumer-level RGB-D sensors. METHODS: The proposed method relies on a 3D reconstruction approach that fuses, in real time, artificial and natural visual landmarks recorded from a hand-held RGB-D sensor. The video sequence is transformed into a set of keyframes with known poses, which are later refined to obtain a realistic 3D reconstruction of the patient. The use of artificial landmarks allows our method to automatically align the reconstruction to a reference one, without the need to calibrate the system with respect to the linear accelerator coordinate system. RESULTS: The experiments conducted show that our method obtains a median of 1 cm in translational error and 1 degree of rotational error with respect to the reference pose. Additionally, the proposed method shows as visual output the overlaid poses (from the reference and the current scene) and an error map that can be used to correct the patient's current pose to match the reference pose. CONCLUSIONS: A novel approach to obtain 3D body reconstructions for patient positioning without requiring expensive hardware or dedicated graphic cards is proposed. The method can be used to align in real time the patient's current pose to the reference pose, which is a relevant step in radiation therapy. Introduction The process of radiation therapy has two main phases. The first phase is planning, where some type of computed tomography (CT) is done to obtain a volumetric reconstruction of the patient's body. This helps to find the exact position of the tumor that needs to be treated in the patient's body. Treatment is done in the second phase, which typically happens in several sessions. At each session, the patient needs to be positioned in the same pose in which the reference CT scan was done, and the exact position of the tumor is calculated by aligning the CT scan with the patient's body. The most common way to perform this is by taking X-ray image(s) and aligning the obtained 2D information with the reference 3D model. This normally needs to be done manually by a specialist. There are other options, such as applying cone beam CT (CBCT) to get a 3D model in the treatment session and using that instead of the 2D imaging information. A drawback of these approaches is exposing the healthy tissue of the patient to ionizing radiation (e.g., X-rays) at each session of the therapy. There are non-invasive methods for patient positioning that do not need healthy tissue to be exposed to ionizing radiation. An important subset of these methods comprises those based on optical imaging, which make use of visible light and/or infrared sensors. One solution of this type is the use of infrared cameras and reflective markers [1]. This approach is similar to the motion capture systems employed in the movie and gaming industries. The measurements of this type of system are accurate; however, they are limited to the points on the patient's body where the reflective infrared markers are attached. 
Also, in the case of attaching the markers on a cast [1], only the rigid pose of the cast is estimated. There are also solutions that take advantage of stereo reconstruction to make a 3D model of the patient's body surface. The reconstructed body surface could then be aligned with the surface of the body in the reference CT scan to align the patient in the right position. This could be easily done with surface alignment algorithms such as iterative closest point (ICP) without the need of known point-wise correspondences between the two surfaces. Currently, there are several commercial optical-based patient positioning systems that make use of 3D surface reconstruction such as AlignRT, Catalyst and IDENTIFY [2]. These are the so-called Surface Guided Radiation Therapy (SGRT) systems. These methods might not be as good as radiation based positioning methods in all cases [3], however, they are a very good alternative to reduce the number of times that radiation-based methods are done [4]. A drawback of the current commercial SGRT solutions is their high price and service costs which might not be affordable for hospitals with limited budgets. Another disadvantage of these systems is that they require their sensors to be fixed in the environment. Therefore, they need to be periodically calibrated with respect to the environment by specialized staff to assure that they report accurate measurements. In recent years, starting with the introduction of Microsoft Kinect, inexpensive RGB-D sensors have become available for normal consumers. Starting with the Kinect-Fusion [5] many 3D reconstruction algorithms were introduced employing this type of affordable sensors. Nevertheless, there are very few works using these types of sensor for patient positioning [6,7,8,9] and their potential is not properly explored despite the fact that these consumer sensors are not as accurate as the ones used in commercial patient positioning systems. This paper proposes a novel patient positioning method based on affordable consumer handheld RGB-D cameras that employs a Simultaneous Localization and Mapping (SLAM) approach that fuses natural features of the patient's body with a set of artificial fiducial planar markers in order to speed up the reconstruction and positioning process. The proposed method can be run in a regular computer without special hardware or graphics card and create a complete reconstruction and visualization within and average of 31 seconds based on our experiment. Our experimental results show that the proposed method allows a median accuracy of 1 cm in translational error and 1°of rotational error for rigid transformation. The rest of this article is organized as follows. First, we explore the optical-based solutions to patient positioning in radiation therapy in Section 2. Then, in Section 3, we introduce our method. In Section 4, we present the results obtained by our algorithm and discuss the results. Finally, we present our conclusions in the last section. Background and Objective One of the first optical patient positioning systems was [10], where infrared light emitting diodes were attached on a bite plate and the head pose of the patient was inferred from the 3D position of the diodes employing infrared camera images. Another similar early example is [11] that also uses infrared diodes and cameras. 
In this case, the system automatically corrects the position of the head by a motorized mechanism that corrects its position by translating it so that the isocenter is focused on the correct position. Later, Ploeger et. al. [12] used image matching between a video recorded in the treatment phase and reference CT scan using body contours. They concluded that the outline of the patient's body is a more accurate reference that the markers put on their abdomen. One of the first works investigating the use of 3D surface imaging is by Bert et. al. [13]. They analyze the accuracy of the commercially developed patient positioning system AlignRT, which reconstructs the surface by projecting a speckle pattern on the patient and using active stereo. It needs to be calibrated by a special pattern to the coordinate system of the linear accelerator. The system proves to be of high accuracy in estimating rigid transformation. The tests were done on a human phantom. Around the same time, Bradly et. al. [14] use a stationary multi-line laser projector for 3D reconstruction. They employ the iterative closest point (ICP) algorithm to align the 3D surface obtained from a CT scan to the surface from the optical 3D reconstruction. They employed the cast of a human for evaluation. They concluded that their system is good enough to be used for patient positioning. Bert et. al. [15] compared the quality of calculating the displacement by laser alignment, portal imaging and the AlignRT surface imaging system for breast treatment. They found that the 3D surface imaging system has superior results in comparison to portal imaging and laser alignment. Stieler et. al. [4] evaluated a commercial laser scanner Sentinel (by C-Rad AB, Sweden). They found that it is a good solution for the situations where cone beam CT scan or ultrasound imaging are not used, to improve the accuracy of patient positioning. Desplanques et. al. [1] introduced a patient positioning system using a pelvic cast or a face mask with reflective markers attached to them. The cast (or mask) was tracked using an infrared motion tracking system and the positioning corrections were compared to those of a patient verification system employing x-ray radiation. They conclude that due to large uncertainty because of the relative motion between the immobilization devices and the patient the system cannot be used as a primary assessment for the quality of patient positioning. Gaisberger et. al. [16] proposed a non-commercial surface scanning system for patient positioning using two optical projectors and two cameras. They concluded that their 3D surface scanning system is good enough to be a viable alternative to the normal kV image-guided radiation therapy. Wiencierz et. at. [3] compared the performance of AlignRT to another commercially available system named Catalyst (from C-Rad, Sweden). The Catalyst system, similar to AlignRT, has a depth sensor attached to the ceiling of the radiation room. However, unlike AlignRT, it takes advantage of structured lighting of a stripe pattern instead of projecting a speckle pattern. Furthermore, the Catalyst system has one unit instead of two units. The authors report a better accuracy for AlignRT than for Catalyst. Additionally, both of these systems were shown to have better accuracy than using conventional skin markers. On top of that, both of these solutions have an accurate enough measurement in at least 75 percent of the time. 
However, they both fail to meet the safe accuracy when taking into account the 90th and 95th percentiles of the errors. With the advent of consumer-grade RGB-D sensors such as the Microsoft Kinect and Asus Xtion, many authors have proposed algorithms for three-dimensional reconstruction using this type of sensor [5,17,18,19]. One disadvantage of these methods is that, in general, they require specialized hardware, i.e., powerful graphics cards with a high amount of on-board memory. On the other hand, few authors have employed these sensors in the field of patient positioning despite the detailed reconstructions that can be obtained with such devices. Bauer et al. [20] suggested a system for coarse initial patient positioning by matching 3D features from the surface data. They employed the original Microsoft Kinect sensor to evaluate their algorithm. They conclude that their method is feasible for coarse initial patient positioning before using a finer-scale, more accurate positioning approach. An important disadvantage of their approach is that the RGB-D sensor needs to be fixed in the environment and calibrated with respect to it. Furthermore, it forces the sensor to be far from the patient (on the ceiling); therefore, the error of the sensor becomes too high to obtain high precision. Additionally, extrinsic calibration of the sensor unit has to be repeated whenever it is moved. The most similar approach to ours that we found is the dissertation of Guillet [9], who tested the accuracy and reproducibility of the KinectFusion algorithm [5] for patient positioning using a couple of rigidly attached sensors. They scanned the same phantom multiple times with the Microsoft Kinect and aligned the reconstruction manually on the coarse level and then refined it using the ICP algorithm. One downside of the KinectFusion algorithm is that it needs GPU-accelerated computing. Another disadvantage of [9] is that its camera pose estimation is prone to drifting. To fix this problem they proposed to attach the two Kinect sensors on a camera rig and move the radiation therapy couch instead of the sensors to obtain a better reconstruction. However, fixing the sensors limits the possible movements and the amount of detail that can be captured from different parts of the patient. It also makes the process slow. This work proposes a novel approach for patient positioning that overcomes the above-mentioned problems. First, our method does not need calibration, since it obtains the reference location from a set of planar markers placed in the environment (which should not be moved from session to session). Second, our approach works as a handheld scanner instead of having the cameras fixed, which reduces the required infrastructure and the scanning time. It also allows positioning the camera very close to the patient, thus achieving the best accuracy that the sensor can provide. Finally, our method works on a normal CPU and does not require any special graphic card. A complete reconstruction and visualization using our approach can be done in approximately 23 seconds on a laptop with an Intel Core i7 CPU. Figure 1 shows a visual summary of our approach. First, an RGB-D sequence is created by scanning the patient using the handheld RGB-D sensor. Then, tracking is done on the RGB-D sequence using the UcoSLAM algorithm [21], which can take advantage of both visual keypoints and ArUco planar markers [22,23]. The UcoSLAM algorithm generates keyframes and gives camera poses for each keyframe. 
These are then used to generate registered point clouds for the keyframes. This registration is then refined by a global iterative closest point (ICP) algorithm and the point clouds are converted to a single heightmap. Finally, the heightmap from the current scene is compared to the heightmap from the reference scene to generate an error map and a pose overlay that could be used for the correction of patient's position with respect to the reference scene. Here the current scene is a scene created in the treatment phase of the radiation therapy and the reference scene is the one generated in the planning phase. Overview Our system takes as input a video sequence recorded (with an RGB-D camera) of the patient from the head to the feet. The video sequence is captured by holding the sensor in hand and moving it over and close to the patient. As already indicated, the environment must have a set of markers placed in arbitrary positions, however they must remain fixed from session to session. The input frames are processed in real-time using a SLAM method able to fuse natural landmarks (keypoints) and the artificial markers. We employ the UcoSLAM system [21] developed by the authors of this work. UcoSLAM simultaneously estimates the camera location and creates a sparse 3D reconstruction of keypoints and markers. In the process, a set of camera locations are stored, called keyframes, that are later employed in the reconstruction process. After UcoSLAM has processed the recorded sequence, a point cloud is generated for each keyframe using the corresponding depth image from the RGB-D sensor. Then, to refine the relative poses between the point clouds we apply a variant of global iterative closest point (ICP) algorithm [24,25] on all of the clouds. One might ask why not using only the ArUco markers to align different point clouds instead of applying the UcoSLAM algorithm. We have two reasons for this. First, there are frames where no markers are visible. By taking advantage of UcoSLAM, the information from these frames can also be used for reconstruction using purely geometrical information. Second, the color-to-depth registration of the RGB-D camera is not perfect, which introduces errors in the alignment process. The second step of our algorithm is aligning the current reconstructed scene to the reference scene which is created in the planning phase of radiation therapy. For this part, our method takes advantage of the fiducial markers which we assume that have remained fixed from one recording session to another. We would like to remind you that a reference scan needs to be done in the planning phase and markers needs to stay in the same place all through the treatment as in the planning phase. This is done to be able to align the scans in the treatment sessions to the one from the planning session. It is worthy to note that if the CT scan is done in the same room as radiation therapy then it could be aligned to our planning phase surface scan using a registration algorithm such as ICP. In case the CT scan is done in a different room, we still need to make a reference surface scan in the radiation room and align it to the CT scan if we want to know the position of the body internals with respect to the surface reconstruction. One could even choose to do a traditional patient positioning in the planning phase and use the surface scan of that session as the reference. 
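The marker-based alignment between a treatment-session scan and the reference scan described above ultimately reduces to a rigid registration between corresponding 3D marker corners. The following is a minimal sketch of that step (Python/NumPy, using an SVD-based rigid fit in the spirit of Horn's method with the scale fixed to 1; the function names and toy data are ours for illustration and are not taken from the authors' implementation):

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst points.

    src, dst: (N, 3) arrays of corresponding 3D marker corners.
    Returns R (3x3 rotation) and t (3,) such that dst ~= src @ R.T + t.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)      # centroids
    H = (src - src_c).T @ (dst - dst_c)                    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                     # proper rotation, det(R) = +1
    t = dst_c - R @ src_c
    return R, t

def align_to_reference(points, current_corners, reference_corners):
    """Map a current-scene point cloud into the reference-scene frame."""
    R, t = fit_rigid(current_corners, reference_corners)
    return points @ R.T + t

if __name__ == "__main__":
    # toy example: the "current" scene is the reference shifted by a few centimetres
    rng = np.random.default_rng(0)
    reference_corners = rng.uniform(-1.0, 1.0, size=(12, 3))   # 3 markers x 4 corners
    offset = np.array([0.05, -0.02, 0.01])
    current_corners = reference_corners + offset
    cloud = rng.uniform(-1.0, 1.0, size=(1000, 3)) + offset
    aligned = align_to_reference(cloud, current_corners, reference_corners)
```

Because each marker contributes four non-collinear corners, even a single fixed marker constrains the rigid transform, and additional markers over-determine it, which is what makes the session-to-session alignment robust.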
Please note that alignment of the CT scan to the reference scan is not part of our algorithm and could be done by rigid or non-rigid registration algorithms. After aligning the current scene to the reference scene, a heightmap is obtained from the 3D body reconstruction by averaging the height of the merged point clouds. This process allows to reduce noise, obtaining a smoothed version of the reconstructed surface. Finally, when the scenes are aligned and the heightmaps generated, they can be employed to visualize the patient's pose difference between the sessions. Since the heightmaps are aligned together we can subtract them from each other or overlay them on top of each other to create a visualization of the error in positioning the patient. The rest of the this section provides a formal description of the proposed method. 3D Reconstruction Let us define a rigid pose, P , as the combination of a rotation matrix, R, and a translation vector, t, in the 3D space, i.e.: Now we take: as the set of all keyframe poses returned from the UcoSLAM method for the scene i ∈ {0, · · · , n}. A pixel q in keyframe j of scene i may or may not have a valid depth value d j i (q). Then, let D j i be the set of all pixel positions q = [q x , q y ] ∈ R 2 with a valid depth value d j i (q). Furthermore, let us assume: is the set of markers detected in keyframe j of scene i and c kl ij , l ∈ {1, . . . , 4} the l-th 2D-corner of the marker M k ij . Keyframe Point Cloud Creation We generate an initial point cloud C j,0 i for each keyframe j of scene i as follows: where K the 3×3 camera matrix and Ψ j i the back projection function for keyframe j of scene i. In the next step we apply the transformations obtained from the SLAM algorithm to our point clouds C j,0 i to obtain new point clouds C j i : where: and: θ(p, T ) = R p + t for T = (R, t). Here R and t are, respectively, the 3D rotation matrix and translation vector related to the transformation T . Furthermore, we define the operator · as the combination operator of two transformations in the following way: T · T = (RR , t + t ) for T = (R, t), T = (R , t ). (9) Global ICP After obtaining the point clouds C j i related to each keyframe j in the scene i, we apply our global ICP on all clouds to refine their registration. You can find a formal description of our Global ICP procedure in Algorithm 1. As can be seen, in each iteration, for each point of every cloud, we find a corresponding point from a different cloud that has the closest distance to it than any other point in any other cloud. Then we find standard rigid registration for the correspondences found in that iteration step. This registration is denoted by the Find-Transform(.) function in Algorithm 1 and is obtained by the Horn's algorithm [26] by fixing the scale parameter. To speed up the process before applying the iterations we randomly subsample the point clouds using a 3D version of the Poisson-disk sampling algorithm. This is denoted by the Subsample(.) function in Algorithm 1. Furthermore, when finding correspondences we only select points that are closer than a certain distance (r) because we assume that the initial registration of our point clouds is roughly correct. We use a fixed number of iterations N I and start with a predetermined maximum value for r, r max , and linearly decrease it to a predetermined minimum value, r min , through the iterations. Now, we can write: which indicates that we obtain the transformations {T j i , j = 1 . . . 
n i }, corresponding to keyframe poses P j i , from GlobalICP by giving the input point clouds {C j i , j = 1 . . . n i }. Here N G I is the input number of iterations and σ G is the input Poisson subsampling radius and r min and r max are the input values for the parameters with the same name in Algorithm 1. We apply the obtained transformations on their corresponding point clouds C j i to update them to point cloudš C j i with refined poses: . . , n, j = 1, . . . , n i . 3D Marker Corner Positions For each corner of each observed marker in a scene we take the average of 3D positions of the detected corners across all keyframes. We find the 3D position of the corners of detected markers from the point clouds using the depth-RGB registration (obviously we assume that this type of correspondence is available for the RGB-D sensor). We take the obtained mean values as the position of the marker corners in each scene. Then, these marker corners can be used to align two reconstructed scenes together. Let us takeč kl ij as the 3D coordinates corresponding the l-th 2D-corner c kl ij of detected marker M k ij . If the corner has a valid depth value d j i (c kl ij ), we use that to back project the point to get the 3D coordinates. If not, we take the average of the back-projection of points with a valid depth value in the neighbourhood of the corner: where W (c kl ij , D j i , s) is the set of all points with a valid depth value within an s × s region centered at the c kl ij corner and . is the averaging operator. Please note that we need to retrieve the depth value using a region because the valid values of the depth maps could be sparse at times. After assigning the 3D coordinates of the marker cornerš c kl ij for each keyframe j, we calculate the coordinates of the cornersč kl i for the whole scene i by taking their average: Here E kl i is the set of indices for all keyframes whereč kl ij has a valid value. Notice that if W (c kl ij , D j i , s) = ∅, it also indicates that c kl ij / ∈ D j i and thatč kl ij does not have a valid value. Algorithm 1 Global ICP procedure GlobalICP(N I ,(C 1 , ..., C N C ),σ,r min ,r max ) r step ← (rmax−rmin) N I N I : number of iterations r ← r max for j = 1 to N C do N C : number of point clouds T j ← (I 3×3 , [0, 0, 0] ) I: the identity matrix C j ←Subsample(C j , σ) end for Scene Alignment Let us say that we want to register a current scene (e.g. scene reconstructed in the treatment phase of radiation therapy) to a reference scene (e.g. scene reconstructed in the planning phase of radiation therapy) which we assume has index 0, using the corner positions. When all valid 3D corner positions of markers are determined for the scene i, we find the rigid transformation that transforms each marker corner in the current scene to its corresponding marker corner in the reference scene. We find this transformation using the algorithm by Horn [26]. Let N Mi represent the number of different markers in every scene i. Because the number of marker corners in each marker is four, then where L i is the set of point correspondences we use to find the transformation aligning scene i to the reference scene. This transformation, denoted by T i is calculated by: Finally, the point cloud of each keyframe aligned to the reference scene can be obtained by: For the reference scene, however, this step is not necessary, therefore we can write: Height Map Creation In order to create a height map we need to have a plane to create the height map grid. 
Therefore, a marker is chosen as the reference marker and its plane is taken as the grid plane. For each scene, we take all of the points from all keyframes and move them to the coordinate system the reference marker. To do so, a transformation is found to take the points from the reference scene's coordinate system to the one of the reference marker in the reference scene. Assuming the reference marker has index 0 we have: Since we are using ArUco markers, the position of the marker corners in the coordinate system of the marker are set to: where l is the length of the side of the square marker. Now we can calculate the transformation to the reference marker coordinate system and apply it to the point clouds: for i = 0 . . . n and j = 1 . . . n i . Subsequently, we create a grid on the marker plane and to get the height value for each pixel in this grid we take the average height of the points that project to that pixel. Since there might be several surfaces on the line that projects to a pixel we take only the points that are within a certain distance from the point with maximum height and keep their average as the height value. We define the height map grid corresponding to C j i as follows: where δ is the constant step to create the grid, and x min , x max , y min , and y max are constant minimum and maximum values for x and y, respectively. The boundaries help to trim the height map to our region of interest. Finally the height value in each cell of the height map related to H j i is defined as: where t is a threshold to discard the points tha do not belong to the top surface. Also: and Now to get a single height map for each scene we merge the height maps of that scene in this manner: where H i is the height map grid related to the scene i and h i (x, y) is the height value at grid point (x, y) in scene i. Height maps let us merge the keyframe clouds in the scene in a fast manner and just by averaging the height values in the corresponding grid positions. One might argue that some information is lost in the process of converting the point cloud to the height maps. However, we would argue that since the point clouds are generally created by using the depth sensor held above the patient, the information loss is not very high. Furthermore, the averaging operator creates a smooth surface when converting a point cloud to a height map which reduces the noise on the surface. Results This section explains the experiments conducted to validate our proposal. To record RGB-D sequences we used the Asus Xtion Pro Live sensor employing the OpenNI2 5 Linux driver. We choose this sensor because is it cheap, lightweight and does not need an external power supply, it just needs to be connected to the USB port. We evaluated our method both qualitatively and quantitatively. The quantitative evaluation (Sect. 4.1) aims at analyzing the accuracy of the proposed method in estimating the pose of patients with such an inexpensive RGB-D sensor. To do so, we used a mannequin of the human torso and a commercial motion capture system from OptiTrack 6 . Our system provides a visual output that can be employed to easily position the patient from one session to another which is based on our 3D reconstruction. Thus, Sect. 4.2 provides a qualitative evaluation of the system outputs and 3D reconstruction provided by our method. Finally, Sect. 4.3 provides an analysis of the computing times required by our proposal. 
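The accuracy figures used in the quantitative evaluation are rotational and translational errors between an estimated rigid transformation and the ground-truth one obtained from the motion capture system. A minimal sketch of such an error computation from two 4x4 pose matrices is given below (Python/NumPy; this illustrates the error metric only and is not code from the described system):

```python
import numpy as np

def pose_error(T_est, T_gt):
    """Rotational (degrees) and translational error between two 4x4 rigid poses."""
    # relative transform: how far the estimate is from the ground truth
    T_rel = np.linalg.inv(T_gt) @ T_est
    R_rel, t_rel = T_rel[:3, :3], T_rel[:3, 3]
    # rotation angle recovered from the trace of the relative rotation matrix
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    rot_err_deg = np.degrees(np.arccos(cos_angle))
    trans_err = np.linalg.norm(t_rel)
    return rot_err_deg, trans_err

if __name__ == "__main__":
    # toy example: ground truth = 10 cm shift, estimate off by 1 cm and ~1 degree
    T_gt = np.eye(4); T_gt[:3, 3] = [0.10, 0.0, 0.0]
    angle = np.radians(1.0)
    T_est = np.eye(4)
    T_est[:3, :3] = [[np.cos(angle), -np.sin(angle), 0.0],
                     [np.sin(angle),  np.cos(angle), 0.0],
                     [0.0, 0.0, 1.0]]
    T_est[:3, 3] = [0.11, 0.0, 0.0]
    print(pose_error(T_est, T_gt))   # approximately (1.0 deg, 0.01 m)
```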
As will be explained, our method produces its output within 31 seconds for the human subjects once the video has been recorded, which takes only 10 seconds. Quantitative evaluation This section analyzes the precision of our method in estimating the displacement of a 3D body reconstruction with respect to a reference one. For that purpose, a mannequin of the human torso has been placed and scanned at nine different positions on the floor, where multiple ArUco markers were fixed and visible next to the mannequin (see Fig. 3). The parameter values employed for our method are shown in Table 1. The method's precision has been measured as the error in estimating the rigid transformation between two scans of the mannequin. In order to obtain the ground truth, a commercial motion capture system (OptiTrack) has been employed, which requires to attach several spherical infrared reflective markers on the mannequin surface. The ground truth displacement of the mannequin is calculated by finding the rigid transformation that moves the reflective markers from one scene to another one using Horn's algorithm [26] by fixing the scale parameter. The same rationale is employed to calculate the rigid transformation with our 3D reconstruction. To do so, the 3D location of the reflective markers is manually extracted from the 3D reconstruction obtained with our method. Table 2 shows error in estimating the rigid 3D transformation of the mannequin from each scene to every other scene. In other words, each time one scene is taken as the reference scene and the transformation error is calculated with respect to the reference scene from every other scene. Then the mean and median of the rotational and translational errors are calculated for all these other scenes. Also the rotational and translational mean and medians are reported when taking into account all of the data together. Qualitative evaluation Since the output of our system is a visual aid for patient positioning, this section presents some qualitative results from the reconstructions obtained with our method. These results are made by calculating normals for each point in the heightmap, converting it to a point cloud, and finally applying the Poisson surface reconstruction algorithm done in the CloudCompare 7 software. The images are obtained by rendering the mesh in the MeshLab 8 software. These results can be seen in Figure 4. The figure presents a different scenarios in each row, first the mannequin used in our quantitative evaluation and then three different human subjects. In every row, first, the rendered reconstructed meshes for the reference scene and the new scene are displayed. For each scene the left image is rendered only using the mesh geometry and lighting, and the right image is the same mesh rendered with interpolated colors from the point cloud and no shading. After that, segmented heightmaps for the reference scene and the new scene could be observed. Finally, the error map and the image of overlayed heightmaps in different colors are presented. In the error map, blue shows an error of zero and red presents an error of 10 cm or higher. The errors in between are shown by linearly interpolated colors between red and blue. In the image of overlayed heightmaps, the reference scene is colored in blue and the new scene is colored in red. More reconstructions could be seen in Figure 5. Here the subject takes different poses similar to those that are commonly used in radiation therapy. 
The sequences related to this figure were captured with half the resolution of the ones in Figure 4. Computing time The proposed method is suitable for its integration in realistic environments using consumer-grade equipment within a reasonable computing time, and without requiring dedicated graphic cards. Table 3 reports the computing times for the sequences visualized in Figure 4. The computation times were calculated on a laptop with the Intel Core i7-4700HQ processor and 50 iterations of global ICP. The table does not include the computing time needed by the UcoSLAM algorithm for tracking because this method runs in real time while the RGB-D video is being recorded. On average the sequences had 15 keyframes produced by the UcoSLAM algorithm. The average length of recording for these sequences was only 10 seconds. Quantitative evaluation As can be seen (Table 2), in total, the average positional error in estimating the pose of the mannequin is around 11 mm. However, the median of the error is even lower, at around 10 mm. This suggests that for most of the scenes the positional error is lower than the average. The same holds for the rotational error, where a median of around 1° is achieved. Qualitative evaluation As can be observed in Figure 4, the human subjects are reconstructed with considerable detail. The same can be said for the shape of the mannequin; even the small IR reflective markers attached to the mannequin are clearly reconstructed. (Figure 4 caption: Each mesh is presented twice, first rendered with no color but with shading, and second with color and no shading. The heightmaps from the two scenes can be seen in column (c). Finally, the error map of the second scene and the overlay of the height maps are presented in column (d). Note that in the heightmaps and their overlay, the patient is segmented for better visualization.) Please note that the artifacts on the edge of the person and the mannequin are due to the lack of captured points in those areas and not to low quality of the point clouds. Furthermore, the last two images (on the far right) for each subject can clearly display the amount of error and how the patient needs to be moved to correct the pose. This is a desired feature, since these images are shown to the person in charge of patient positioning as a guide for pose correction. However, as can be seen in Figure 5, the reconstructions are still robust. Again, please note that the artifacts on the edge of the subject are due to the lack of captured 3D points and not to the quality of the reconstruction. Computing time As can be observed (Table 3), the most time-consuming part of our implementation is the Global ICP. Nevertheless, in less than half a minute, our method is able to produce its results. We find this suitable for its use in real radiotherapy sessions. Conclusion This paper has proposed a novel approach to obtain 3D body reconstructions for patient positioning using inexpensive consumer RGB-D cameras. The main novelty of our approach is the use of a novel SLAM technique that combines natural and artificial landmarks in order to obtain a coarse 3D reconstruction that is later improved without requiring expensive hardware or dedicated graphic cards. By placing a set of square markers in the environment, which should remain fixed from one session to another, the proposed method is able to align the reconstructions, achieving a median translation error of 1 cm and a rotational error of 1°. The use of markers also allows us to employ the RGB-D camera as a hand-held scanner. 
Thus, the recording distance to the patient is reduced contributing to improve the reconstruction quality for that type of sensors. Our method generates as output a visual superimposition of the patient both in its current position, and in the reference position, along with an error map. These pieces of information allow us to easily check how the pose of the patient needs to be corrected. We have created a robust framework that can produce results of decent quality and could be improved by enhancing different parts of it. We suggest that this method could be extended for non-rigid surface registration which is left to be done in future works. Our method in this paper is focused on using a single CPU so that it is usable in most of the situations. However with the help of GPU accelerated computing it is possible to speed up our method and also include non-rigid registration with a reasonable computation time. Furthermore it is also possible to use multiple consumer level RGB-D sensor on a camera rig to improve the quality of the scan. This can give the opportunity of fixing the camera rig in the room and have a live view of the patient which could be valuable in monitoring breath and live registration of the CT scan on the patient's body.
247484760
s2orc/train
v2
2022-03-17T15:29:26.660Z
2022-01-01T00:00:00.000Z
Virtual reality cognitive intervention for heart failure: CORE study protocol Abstract Introduction Heart failure (HF) is a prevalent, serious chronic illness that affects 6.5 million adults in the United States. Among patients with HF, the prevalence of attention impairment is reported to range from 15% to 27%. Although attention is fundamental to human activities including HF self‐care, cognitive interventions for patients with HF that target improvement in attention are scarce. The COgnitive intervention to Restore attention using nature Environment (CORE) study aims to test the preliminary efficacy of the newly developed Nature‐VR, a virtual reality‐based cognitive intervention that is based on the restorative effects of nature. Nature‐VR development was guided by Attention Restoration Theory. The target outcomes are attention, HF self‐care, and health‐related quality of life (HRQoL). Our exploratory aims examine the associations between attention and several putative/established HF biomarkers (eg, oxygen saturation, brain‐derived neurotrophic factor, apolipoprotein E, dopamine receptor, and dopamine transporter genes) as well as the effect of Nature‐VR on cognitive performance in other domains (ie, global cognition, memory, visuospatial, executive function, and language), cardiac and neurological events, and mortality. Methods This single‐blinded, two‐group randomized‐controlled pilot study will enroll 74 participants with HF. The Nature‐VR intervention group will view three‐dimensional nature pictures using a virtual reality headset for 10 minutes per day, 5 days per week for 4 weeks (a total of 200 minutes). The active comparison group, Urban‐VR, will view three‐dimensional urban pictures using a virtual reality headset to match the Nature‐VR intervention in intervention dose and delivery mode, but not in content. After baseline interviews, four follow‐up interviews will be conducted to assess sustained effects of Nature‐VR at 4, 8, 26, and 52 weeks. Discussion The importance and novelty of this study consists of using a first‐of‐its kind, immersive virtual reality technology to target attention and in investigating the health outcomes of the Nature‐VR cognitive intervention among patients with HF. BACKGROUND Heart failure (HF) is a prevalent, serious chronic illness affecting 6.5 million American adults. 1 The prevalence of this disease is projected to increase by 46% by 2030. 1 A major issue for patients with HF is cognitive dysfunction, which has a prevalence of 24% to 80%. [2][3][4] The most likely etiology of cognitive dysfunction in HF is inadequate cerebral blood flow. 2,3,5 In addition to memory, attention is one of the most commonly impaired cognitive domains in HF. [6][7][8] Attention is critical in supporting effective human activities, and thus poor attention function is often associated with inadequate HF self-care (eg, lack of adherence to a low sodium diet, non-adherence to medication, and poor symptom management). [9][10][11] As shown in past studies, inadequate HF self-care is associated with higher mortality and hospitalizations and diminished health-related quality of life (HRQoL). 12,13 Given that attention is fundamental to learning and maintaining HF self-care, the development of interventions aiming to improve attention among patients with HF is warranted. Approaches to developing interventions to improve attention in HF need to be guided by theory because theory provides fundamental understanding of the problem that interventions target. 
14 One relevant theory is Attention Restoration Theory, which proposes that attention can be restored and improved by interacting with nature. 15,16 Studies guided by this theory have shown that natural restorative environment (Nature) interventions can improve attention in diverse populations. For example, engaging in nature-related activities (eg, walking in the park, gardening) improved attention among breast cancer survivors, 17 and viewing nature pictures on computer screens improved attention among healthy adults. 18,19 In our pilot study consisting of a randomized controlled crossover design, a computer-based Nature intervention using two-dimensional pictures of nature was tested for its efficacy compared to a computer-based active comparison intervention of two-dimensional urban pictures among 20 participants with HF and 20 age-matched healthy adults. The results indicated that participants with HF had poorer attention and showed small to near-medium effects of the Nature intervention on improving attention. 20 Newer virtual reality (VR) technology can now create threedimensional environments to better simulate the natural environment and provide a more immersive experience than two-dimensional computer-based pictures. 21,22 Thus to extend our pilot study, our study team developed a prototype of a VR-based nature intervention (Nature-VR) using three-dimensional pictures delivered by a VR headset and tested its feasibility among 10 participants with HF. This feasibility study examined study completion, safety, satisfaction, and attention improvement with the Nature-VR prototype among these 10 participants with HF. 23 The intervention was delivered by laptop computer for five participants with HF and by VR for five participants with HF. Attention was examined immediately pre-and postintervention. All 10 participants completed the study, and no adverse events were reported during or immediately after the interventions. Participants who received the intervention by the VR mode were slightly more satisfied. Participants in the computer-based Nature intervention were provided an opportunity to try Nature-VR at the end of the interview, with four of five participants trying and preferring Nature-VR. 23 With regard to attention improvement, performance on tests of attention was consistently better after Nature-VR compared with the computer-based Nature intervention, except for the Digit Span Forward test. 23 Effect sizes were calculated using Hedge's gs. Changes in measures were modeled using a linear model with a single term for group (Nature-VR and computer-based Nature intervention). 1). Building upon the feasibility study with the prototype the Nature-VR intervention and the active comparison condition (Urban-VR) have now been developed. The purpose of this study is to test the preliminary efficacy of this latest version of Nature-VR on attention, HF selfcare, and HRQoL using a two-group single-blinded randomized controlled design. METHODS The COgnitive intervention to Restore attention using nature Envi- Study design The proposed study is a two-group single-blinded randomized controlled pilot study. Participants with HF will have five data collection interviews ( Figure 2). Primary and secondary outcomes will be examined at baseline, immediately post-intervention (4 weeks), 8, 26 and 52 weeks (1 year). Follow-up data will provide information about intervention effects over time. 
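The pilot effect sizes mentioned above were Hedges' g values computed on pre-to-post change scores for the two delivery modes. A minimal sketch of that computation is shown below (Python, purely for illustration; the data are made up, and the trial's analyses themselves will be run in SAS):

```python
import numpy as np

def hedges_g(change_a, change_b):
    """Hedges' g for the difference in mean change between two groups.

    change_a, change_b: 1-D arrays of pre-to-post change scores.
    Cohen's d with a pooled SD, multiplied by the small-sample correction J.
    """
    a, b = np.asarray(change_a, float), np.asarray(change_b, float)
    n1, n2 = len(a), len(b)
    s_pooled = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1))
                       / (n1 + n2 - 2))
    d = (a.mean() - b.mean()) / s_pooled
    j = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)   # Hedges' small-sample correction
    return j * d

if __name__ == "__main__":
    # toy data: attention-score changes for VR versus computer delivery (n = 5 each)
    vr_change = [4, 3, 5, 2, 4]
    pc_change = [1, 2, 0, 2, 1]
    print(round(hedges_g(vr_change, pc_change), 2))
```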
Specific aims and hypotheses The primary aim is to evaluate the preliminary efficacy of Nature-VR on attention among participants with HF. The secondary aims are to evaluate the preliminary efficacy of Nature-VR on the secondary outcomes of HF self-care and HRQoL. The study will test the following hypotheses: (3) intervention responders and non-responders need to be characterized by examining the relationship between intervention responsiveness and patient characteristics and relevant biomarkers. HIGHLIGHTS • Nature-VR is a theory-based virtual reality cognitive intervention • We describe the framework and methodology of Nature-VR cognitive intervention study • We will test if Nature-VR improves attention and healthrelated outcomes. Sample We will enroll 74 participants (37 in each group) to retain 30 in each group by the end of the study (projecting a 19% attrition rate). This sample size was calculated based on our previous studies. 23 Given the lack of intervention studies testing the efficacy of the Nature-VR intervention on HF self-care and HRQoL, our sample size calculation is based on guidelines for pilot studies. 33 Procedures Written informed consent will be obtained from all participants prior to enrollment. The interviews will be conducted at either the participant's home or our research office at Indiana University School of Nursing. Our single blinded methodology requires us to use a separate intervention and data collection research assistants (RAs). The inter-vention RA will collect participants' demographic and clinical characteristics and conduct the baseline interviews. After baseline interview, the RA will randomize the participant. 1:1 randomization will be generated using SAS statistical software by the team biostatistician. The randomization results will be concealed from the principal investigator (PI) and the data collection RA. The participants will be told not to share their group assignment with the data collection RA and the PI during their follow-up interviews. The two RAs will be instructed to not communicate with each other about randomization. The intervention RA will instruct participants on how to complete the activities for either Nature-VR or Urban-VR with the written intervention manual. During the 4-week intervention phase, participants in each group will receive weekly phone calls to check if they have experienced technical issues with the VR headset and to answer questions. After the intervention phase, the data collection RA who will remain blinded to group assignment will meet participants for four follow-up interviews. Attention and secondary outcomes will be evaluated immediately after comple- 4 weeks (Immediate Post-intervention) 2 nd home visit, outcomes will be examined. The headset will be retrieved. Venipuncture for serum BDNF will be performed. 8 weeks (4 weeks after intervention) 3 rd home visit, outcomes will be assessed. 26 weeks (22 weeks after intervention) 4 th home visit, outcomes will be assessed. 52 weeks (48 weeks after intervention) 5 th home visit, outcomes will be assessed. Electronic medical data will be reviewed to assess cardiac and neurological events, and mortality. F I G U R E 2 Research design and procedure Intervention and active comparison condition The intervention and active comparison condition were developed by the PI with consultation from the Advanced Visualization Lab at Indi- attention interventions likely need to be brief for efficacy without fatigue. 
Second, a meta-analysis of interventions using nature activities suggested that 5-minute interventions are optimal for improving self-esteem and mood. 38 Third, in our preliminary study participants had interventions of ∼7 minutes, which led to small to medium effect sizes. 32 Finally, participants with HF in our previous feasibility study were asked about the adequacy of the intervention dose and reported it to be adequate. 23 Thus 10 minutes per day was deemed appropriate for the Nature-VR intervention. The 4-week intervention duration was selected based on two previous cognitive intervention studies. First, the ACTIVE study, a large cognitive training trial in 2832 healthy older adults, supports our rationale for 4 weeks of cognitive intervention. 39 (Table 1). Subjective attention-a patient-reported outcomewill also be examined using the Attentional Function Index. 40 This measure is predictive of mortality, and a 4.84-point change in the total score is clinically meaningful. 44 In addition to the attention measures described above we will use Pulse oximetry will be used to monitor oxygen saturation at each study time point. Venipuncture for serum BDNF will be performed at baseline and 4 weeks. Venipuncture for genetic biomarkers (ie, APOE, BDNF, dopamine receptor, and dopamine transporter genes) will be performed at baseline. Statistical analysis plan Distributions based on density plots will be examined for normality of the data and to detect outliers. If not normally distributed, non-parametric methods will be used. Baseline equivalencies between the groups will be evaluated using independent t-tests and chisquare tests. Observed scores of outcome variables at pre-and postintervention will be presented for both groups. An intent-to-treat approach will be used to ensure that observed differences between groups are not attributable to differential dropout. 45 Analyses for each study aim will be performed using the statistical package SAS at an alpha of .05 or less. Multi-Source Interference Task This computerized test examines the cingulo-fronto-parietal cognitive/attention network. 52,53 In this task participants are asked to identify a target number that is different from two other numbers displayed on a computer screen. There are two types of trials: congruent and incongruent. For the congruent trials a target number is always matched by its position on a button (eg, 100, 020, or 223), whereas for incongruent trials the target number is never matched with its position on a button (eg, 010, 233, or 232). A faster response time and lower error rate indicates better attention. In this study, congruential trial scores will be used to assess sustained attention, and differences between congruent and incongruent trials will be used to assess directed attention. 58,59 There are two parts: A and B. In Part A, participants are asked to connect a series of randomly arrayed circles numbered from 1 to 25 in order as quickly as possible. In Part B, participants are asked to connect a series of 25 circles numbered from 1 to 13 randomly intermixed with letters from A to L, alternating between numbers and letters in ascending order. The time to complete including the time for correction of any errors is recorded, called response time. A longer response time indicates poorer attention. In this study, Part A scores will be used to assess sustained attention and differences between Part A and B will be used to assess directed attention. 
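As a rough illustration of how the sustained- and directed-attention indices described above for the Multi-Source Interference Task and the Trail Making Test could be derived from raw output, the sketch below uses hypothetical response times and completion times; the variable names and the direction of each difference score are our assumptions, not the study's scoring code.

```python
# Minimal sketch of the attention indices described above; all raw values are
# hypothetical placeholders, not study data.
import numpy as np

# Multi-Source Interference Task: mean response times (ms) per trial type.
msit_congruent_rt   = np.array([520, 540, 510, 530])
msit_incongruent_rt = np.array([690, 720, 700, 710])
msit_sustained = msit_congruent_rt.mean()                       # lower = better
msit_directed  = msit_incongruent_rt.mean() - msit_congruent_rt.mean()  # assumed direction

# Trail Making Test: completion times in seconds (including error corrections).
tmt_part_a, tmt_part_b = 32.0, 78.0
tmt_sustained = tmt_part_a                                      # lower = better
tmt_directed  = tmt_part_b - tmt_part_a                         # B minus A difference (assumed)

print(msit_sustained, msit_directed, tmt_sustained, tmt_directed)
```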
Test-retest reliability ranged from.86 to 96 in 20 patients with HF. 32 Construct validity was supported among healthy adults and patients with closed head injury. 57 Stroop Test This test measures attention involving in selective processing of different visual features while ignoring distractions on the test (letters and ink colors of color words). 60 A computerized Stroop test was programmed using E-Prime software. Participants are asked to read the letters or ink colors of four color words (ie, red, blue, yellow, and green) and press the designated keyboard on the laptop computer. There are two types of trials. In congruent trials, the color names have same letters and print colors (eg, red in red ink). In incongruent trials, the color names do not match to the print colors (eg, red in blue ink). There are two conditions: switching and non-switching. In switching conditions, the commands are switched from word to color, or color to word. In non-switching conditions, the commands are the same (color to color, or word to word). Differences in performance between switching and non-switching conditions will be used to assess attention switching. Congruential trial scores will be used to assess sustained attention and differences between congruent and incongruent trials will be used to assess directed attention. Reliability was satisfactory. 61,62 Construct validity was supported in patients with traumatic brain injury. 63 Impaired performance on Stroop test was most common in patients with frontal lobe lesions in a meta-analysis. 64 TA B L E 2 Protocol changes responding to the COVID-19 pandemic Phone recruitment and verbal consent Possibly eligible adults with HF will be contacted by phone. A copy of the informed consent and HIPAA form will be sent to possibly eligible participants via email or mail prior to the verbal consent and authorization. We will obtain verbal consent without a written signature on the consent form and HIPAA form after explaining our study procedures with our informed consent and HIPAA forms. Participants will be asked if they have any questions before they provide verbal consent to participate in the study. Phone interviews To avoid physical contact, data will be collected by phone interviews instead of face-to-face interviews at baseline, 4, 8, 26, and 52 weeks. Our original protocol included two computerized (ie, Multi-Source Interference Task, Stroop Test) and two paper-pencil based cognitive tests (ie, Benson Figure Copy, Digit Symbol tests) that cannot be administered via phone. Thus, the Oral Trail Making (primary outcome measure) and Digit Span tests will be used to assess attention. A blind version of the MoCA will be used to assess global cognition. Verbal Fluency will be administered to examine executive function. In addition to Category Fluency, the Verbal Naming Test will be administered to examine language. The Hopkins Verbal Learning Test and Craft Story will be administered as planned to assess verbal memory. Oxygen saturation will not be monitored during the phone interviews. Saliva samples Venipuncture was planned to measure serum BDNF levels and collect DNA for genetic biomarkers (ie, BDNF Met, APOE ε4, 7-repeat of dopamine receptor, and 10-repeat of dopamine transporter alleles). Due to the need to collect data remotely, saliva samples will instead be collected for genetic biomarkers using the OGR-600 saliva kit (DNA Genotek). The saliva sample collection kit will be delivered to the participants by the intervention RA with the intervention kit. 
The intervention RA will instruct how to collect saliva sample and ask to leave their samples outside their front doors for collection by the RA. The intervention RA will deliver the saliva sample to the storage facility on campus. Oxygen saturation and serum BDNF will not be collected in this modified study protocol. Intervention delivery Drop-off based remote option of delivery A study team member will call participants to set up a time to drop off the intervention kit (virtual reality headset, intervention manual binder). The intervention kit will be dropped off outside the participant's front door after which the participant will be called to let them know the kit has been delivered. Instructions for how to use the intervention kit will be given in the follow up phone call. If a participant wants to receive the intervention in person, the intervention RA may give the intervention instructions to the participant outdoors in person. Throughout the entire process, adequate social distancing of 6 feet or more will be maintained. After the intervention phase, participants will be asked to leave the intervention kit outside their front door for a pick up at a pre-arranged date and time. To test the effect of Nature-VR on attention (hypothesis 1), HF selfcare, and HRQoL (hypotheses 2a and 2b, respectively), we will use linear mixed models to model attention performance over time with group, time, and group by time interaction variables. Adherence to the intervention (ie, minutes spent using Nature-VR, which is objective evidence collected by using time-stamping function of the Oculus application in the device), HF severity, comorbidity, and gender will be tested as potential covariates. The mixed models will produce valid estimates for missing data at random. We will employ three analytic strategies to demonstrate the relationships between attention and possible biomarkers. The relationships of baseline attention with oxygen saturation, BDNF serum levels over time, BDNF Met, APOE ε4, 7-repeat of dopamine receptor and 10-repeat of dopamine transporter alleles will be examined using analysis of covariance (ANCOVA) models, while adjusting for participants' ages. The changes in oxygen saturation and serum BDNF levels in relation to attention changes at 4-week follow-up will be examined using ANCOVA models adjusting for age. Those biomarkers that show significant associations with attention will be added to models for hypotheses 1, 2a, and 2b to examine different levels of responsiveness to the intervention based on the biomarkers. We will assess the effect of Nature-VR on the other cognitive domains (exploratory aim 2) using linear mixed models similar to the analysis plan for hypothesis 1, 2a, and 2b. Binomial logistic regressions will be used to test the preliminary efficacy of Nature-VR on cardiac and neurological events and mortality adjusting for covariates. Effect sizes of Nature-VR will be examined by omega-squared for the interaction terms. 46 Ninety-five percent confidence intervals will be estimated using the non-parametric bootstrap method. Potential difficulties and alternative approaches Although HF prevalence is similar for men and women, 1 women are underrepresented in HF research studies. 47 To address this issue we will recruit equal numbers of women and men by monitoring monthly recruitment. 
If the enrollment of women is <50% for three consecutive months, we will modify the recruitment brochure to be more attractive to women (eg, more pictures of women) and identify more outpatient clinics for recruitment of women. Second, the African American population in the United States is 12.6%. To recruit a diverse sample, study invitations for African American will be prioritized during recruitment. Finally, using a VR headset may be challenging to some participants. To help participants with the use of this technology, written instructions and a phone helpline will be provided. 2.9 Research conduct during the COVID-19 pandemic The CORE study is being conducted during the worldwide pandemic of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) (coronavirus disease 2019 ). The COVID-19 pandemic has led to study protocol modifications to meet the COVID-19 restrictions for human research. Given that our study population is adults with HF who are identified as high risk for more serious symptoms of COVID-19, we are conducting the study with a remote option without direct physical contact with participants. Major changes are recruitment over the phone with verbal consent, data collection by phone interviews, saliva sample collection, and drop-off-based intervention delivery. Because of the remote option, necessary changes were made to measures. Detailed information is presented in Table 2. The modified study protocol using a remote data collection and intervention delivery has been approved by the university institutional review board. DISCUSSION Participant enrollment was initiated in July 2020 to evaluate the preliminary efficacy of the Nature-VR intervention for improving attention, HF self-care, and HRQoL compared to the Urban-VR active comparison condition. In this new study, 74 participants with HF will be enrolled and followed up 52 weeks (1 year) after baseline. The Nature-VR is theoretically sound and innovative. The incorporation of VR technology into this restorative cognitive intervention will provide immersive experiences by visually simulating a real natural environment. The use of this immersive technology is ideal for participants who have limited access to natural environments (eg, due to limited physical function, inclement weather, or urban living with limited access to green space) and less opportunity to engage with nature directly. VR technology is safe and may increase the strength of the intervention effect, as demonstrated by past studies, including our feasibility study. [48][49][50][51] Investigating the effects of Nature-VR on HF self-care and HRQoL will contribute to understanding the relationships between cognitive dysfunction and the health outcomes as well as provide another approach to improve HF management if efficacious. Previous studies have been more focused on educating patients to change self-care behaviors. Cognitive interventions may support the patients' learning during the education, improve perception of the HF symptoms, and help make decisions on managing the symptoms. The Nature-VR has great potential to improve attention and prevent attention impairment, and may lead to better self-care and HRQoL among patients with HF. The Nature-VR intervention can be delivered in patients' homes at low cost (eg, the Oculus Go, head-mounted goggles costs $300). 
If the findings are as hypothesized, the Nature-VR and Urban-VR interventions will be carried forward into a full-scale efficacy trial of Nature-VR, with a sample size powered on the effect sizes estimated in this study. If no effect is seen, this pilot study will still contribute knowledge about attention changes over time in HF, inform the design of future biobehavioral cognitive interventions, and support the development of possible biomarkers associated with attention.
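As a concrete, non-authoritative illustration of the primary analysis described in the statistical analysis plan (a linear mixed model with group, time, and group-by-time terms and a random intercept per participant), the sketch below fits such a model to simulated data with statsmodels; the column names, effect sizes, and data are placeholders, not the study's analysis code.

```python
# Minimal sketch of the planned primary analysis (linear mixed model with group,
# time, and group-by-time terms); the data frame here is simulated, not study data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_per_group, weeks = 30, [0, 4, 8, 26, 52]
rows = []
for pid in range(2 * n_per_group):
    group = "Nature_VR" if pid < n_per_group else "Urban_VR"
    subject_effect = rng.normal(0, 1)                 # random intercept per participant
    slope = 0.07 if group == "Nature_VR" else 0.02    # assumed group-specific trend
    for week in weeks:
        rows.append({"id": pid, "group": group, "week": week,
                     "attention": 50 + subject_effect + slope * week + rng.normal(0, 2)})
df = pd.DataFrame(rows)

# Fixed effects for group, time, and their interaction; random intercept by participant.
model = smf.mixedlm("attention ~ group * week", df, groups=df["id"]).fit()
print(model.summary())
```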
118466480
s2orc/train
v2
2011-03-24T19:44:37.000Z
2011-03-24T00:00:00.000Z
Self-Energy Correction to the Hyperfine Splitting for Excited States The self-energy corrections to the hyperfine splitting is evaluated for higher excited states in hydrogenlike ions, using an expansion in the binding parameter Zalpha, where Z is the nuclear charge number, and alpha is the fine-structure constant. We present analytic results for D, F and G states, and for a number of highly excited Rydberg states with principal quantum numbers in the range 13<= n<= 16, and orbital angular momenta l = n-2 and l = n-1. A closed-form, analytic expression is derived for the contribution of high-energy photons, valid for any state with l<= 2$ and arbitrary n, l and total angular momentum j. The low-energy contributions are written in the form of generalized Bethe logarithms and evaluated for selected states. I. INTRODUCTION The self-energy correction to the hyperfine splitting is the dominant quantum electrodynamic (QED) correction to the magnetic interaction of the bound electron with the field of the nucleus. The hyperfine interaction energy of electron and nucleus is proportional to g N α(Zα) 3 m 2 e /m N , where g N is the nuclear g factor, and m e and m N are the electron and nuclear masses, respectively. Relativistic corrections enter at relative order (Zα) 2 . The dominant QED correction is due to the anomalous magnetic moment of the electron and enters at relative order α. Here, we consider the QED correction of order α (Zα) 2 , which is the sum of a high-and a lowenergy part. Relativistic corrections to the anomalous magnetic interaction give one of the dominant contributions to the high-energy part, which can otherwise be calculated on the basis of a form-factor approach, using a generalized Dirac equation in which the radiative effects and the hyperfine interaction are inserted "by hand." The low-energy part constitutes a correction to the Bethe logarithm due to the hyperfine interaction. It can be formulated as a hyperfine correction to the self-energy, the effect being equivalent to the self-energy correction to the hyperfine splitting mediated by low-energy virtual photons [up to order α (Zα) 2 ]. In our treatment, we follow the formalism of nonrelativistic QED (NRQED) detailed in Ref. [1], and refer to Refs. [2][3][4][5][6][7] for a number of previous investigations regarding the treatment of the self-energy correction to the hyperfine splitting in systems with low nuclear charge number. Our paper is organized as follows. The general formalism of the hyperfine interaction is described in Sec. II. For the self-energy correction, the low-energy part is treated in Sec. III, and the high-energy part is calculated in Sec. IV. Results and theoretical predictions are discussed in Sec. V. Conclusions are reserved for Sec. VI. Natural units with = c = ǫ 0 = 1 are used throughout the paper. II. FORMALISM Following the derivation in Ref. [8], the magnetic dipole field of the nucleus is described by the vector potential where x is the coordinate vector and r = | x|. The curl of this vector potential yields the magnetic field and the fully relativistic hyperfine interaction Hamiltonian thus reads The hyperfine interaction couples Dirac eigenstates to the magnetic field of the nucleus. The electronic states can be written as |njm ≡ |njℓm , where n is the principal quantum number, and the orbital and total angular momenta of the electron (ℓ and j, respectively) can be mapped to the Dirac angular quantum number κ = (−1) j−ℓ+ 1 2 (j + 1 2 ). 
Finally, m is the projection of the total electron angular momentum onto the quantization axis. In this article, we sometimes suppress the orbital angular momentum ℓ in the notation because we consider the coupling of the total electron angular momentum j to the nuclear spin. Nuclear states are denoted as |IM , where I is the nuclear spin and M its projection onto the quantization axis. They are coupled to the electron eigenstates |njm by the hyperfine interaction, to form states with quantum number |nf m f Ij which are eigenstates of the total Dirac+hyperfine Hamiltonian (f is the total electron+nuclear angular momentum, and m f is its projection). Using Clebsch-Gordan coefficients C f m f IMjm , the |nf m f Ij states can be written as The hyperfine energy ∆E hfs thus reads Using the Wigner-Eckhart theorem, the hyperfine energy can be rewritten as where |nj 1 2 is the Dirac eigenstate with a definite angular momentum projection + 1 2 , and [ x × α] 0 is the z component (zero component in the spherical basis) of the indicated vector product. We have thus separated the nuclear from the electronic variables. A detailed analysis of the separation of the nuclear variables can also be found in Ref. [2]. This procedure allows to reduce the evaluations of the hyperfine structure and corrections to it, to the evaluation of matrix elements of operators acting solely on electronic states. Specifically, we consider corrections to the statedependent electronic matrix element Θ e , where The hyperfine interaction energy thus is Relativistic atomic theory leads to the following result for Θ e (see Refs. [4,9]) where γ = κ 2 − (Zα) 2 . The effective principal quantum number is N = (n − |κ|) 2 + 2(n − |κ|)γ + κ 2 . A. General Formalism In order to treat low-energy virtual photons, we apply a Foldy-Wouthuysen transformation to the total Hamiltonian H t which is the sum of the Dirac-Coulomb Hamiltonian and the relativistic hyperfine interaction Hamiltonian, The Foldy-Wouthuysen transformation of this Hamiltonian is carried out as described in Refs. [7,8,[10][11][12]. The only difference to the case of the ordinary Dirac Hamiltonian is that the odd operator O used in the construction of the transformation [11] now reads instead of α · p. The result of the transformation, is the sum of the Foldy-Wouthuysen Hamiltonian H FW from Ref. [10], and H HFS is the nonrelativistic hyperfine splitting Hamiltonian [2,8] It consists of the three parts, whose designation is inspired by the apparent angular momentum association of the terms. Following the notation in Ref. [8], lowercase letters in the subscript are used to label relativistic operators, whereas nonrelativistic operators are denoted by uppercase letters in the subscript. However, we use the lowercase notation for the scaled vector quantity h in order to denote the electronic operators in the nonrelativistic hyperfine Hamiltonian. Furthermore,x = x/| x| is the position unit vector. The zero component (z component) h 0 = h S,0 + h D,0 + h L,0 of the Hamiltonian "vector" h therefore reads as With the help of h 0 , the nonrelativistic limit of Eq. (9) is obtained as which we use to define the nonrelativistic quantity which is commonly referred to as the Fermi energy. The relativistic and QED corrections can be expressed as multiplicative corrections of Θ e , via the replacement By expanding Θ e to second order in Zα, we obtain and the corresponding energy shift The QED term is the subject of this paper. 
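Several of the displayed relations in this section did not survive the plain-text extraction. As a hedged reconstruction under the standard Dirac conventions, restoring the square roots and exponents that the surrounding definitions clearly imply, the quantities referred to above are

```latex
% Hedged reconstruction of the definitions quoted above (standard conventions);
% square roots and exponents were lost in extraction.
\[
\kappa = (-1)^{\,j-\ell+1/2}\Bigl(j+\tfrac12\Bigr), \qquad
\gamma = \sqrt{\kappa^{2}-(Z\alpha)^{2}}, \qquad
N = \sqrt{(n-|\kappa|)^{2}+2\,(n-|\kappa|)\,\gamma+\kappa^{2}},
\]
\[
\Delta E_{\mathrm{hfs}} \;\propto\; g_N\,\alpha\,(Z\alpha)^{3}\,\frac{m_e^{2}}{m_N}.
\]
```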
For the QED corrections terms up to relative order α(Zα) 2 with respect to the nonrelativistic hyperfine splitting will be considered. In order to do the calculation, we need the three terms from Eq. (13) and a further correction to the electron's transition current, due to the hyperfine interaction. Namely, in the presence of the hyperfine interaction, the kinetic momentum of the electron finds a modification The current has the zero component which is used in the calculations below. B. Specific Terms Following Ref. [8], there are four corrections, which arise from the correction of the interaction current, from the correction of the Hamiltonian, from the correction of the reference state energy, and finally from the correction of the reference-state wave function. We first treat the hyperfine correction to the interaction current and to this end, define a useful normalization factor The hyperfine correction to the interaction current is then given as The term containing the logarithm of ǫ, which is a scaleseparation parameter that cancels when high-and lowenergy parts are added [13], vanishes after angular integration in the matrix element. The structure of the logarithmic term here is very similar to the Bethe logarithm encountered in Ref. [14]. Terms of this form will arise for the other corrections in the low-energy part as well. In the following, these terms are denoted as β HFS and are evaluated numerically with the methods described in Ref. [15]. Thus, the low-energy correction due to the nuclear-spin dependent current is Next, we treat the corrections to the Hamiltonian, to the energy and to the wave function. The perturbation due to the hyperfine splitting Hamiltonian yields the term [we define the resolvent G( The correction to the energy denominator in the Schrödinger propagator can be written as where the prime indicates the reduced Green function. Finally, the correction to the wave function due to the hyperfine splitting Hamiltonian is Using commutator relations, one can finally sum up all four corrections in the low-energy part to where β HFS is the sum The double commutator vanishes for states with ℓ ≥ 2 up to and including order (Zα) 5 , and hence δΘ L takes the very simple form IV. HIGH-ENERGY PART Up to relative order α(Zα) 2 E F , it is sufficient [8] to consider the problem on the level of the modified Dirac Hamiltonian where F 1 and F 2 are the one-loop Dirac and Pauli form factors of the electron, respectively. Their expressions are known (see Chapter 7 of Ref. [16] However, as already pointed out, the matrix element of ∇ 2 h 0 vanishes on states with ℓ ≥ 2 which are relevant to our investigations, and so The second correction is a second-order perturbation involving the F 1 correction to the Coulomb potential, Again, ∇ 2 V is proportional to the Dirac δ and therefore vanishes for states with ℓ ≥ 1. Accordingly, for states with ℓ ≥ 2 we have The Pauli F 2 form factor gives rise to a second-order perturbation involving a magnetic moment correction to the Coulomb potential, where F 2 is the magnetic form factor. For F 2 (0), the Schwinger value F 2 (0) = α 2π may be used. After a Foldy-Wouthuysen transformation, we can write δΘ H,3 as the sum of two terms. The first of these, δΘ H,3n , involves no mixing of upper and lower components in the Dirac wave function and reads We find the following general result for states with ℓ ≥ 2, . (43) Lower components of the Dirac wave function give rise to the mixing term We find the general result . 
The F 2 correction to the magnetic photon exchange of electron and nucleus gives rise to the effective interaction (46) Here, h s and h d are the generalizations of h S and h D to 4 × 4 matrices, Taking F 2 ( ∇ 2 ) ≈ F 2 (0) in Eq. (46), we obtain the correction Generalizing results from Ref. [17] for the term of relative order α, we find the result Taking the slope of F 2 in Eq. (46), we obtain with F ′ 2 (0) = α/12π. As F ′ 2 (0) ∇ 2 already is of relative order α(Zα) 2 , this operator only has to be applied to the nonrelativistic wave function where it vanishes for states with ℓ ≥ 2 and thus we have for the high-energy part the result It is quite surprising that the result obtained by adding IV: Low-energy contribution βHFS of the self-energy correction for the hyperfine splitting for highly excited states. The numbers in parentheses are standard uncertainties in the last figure. The exponent of the numerical data is chosen to be the same as in Tables I, II, and III can actually be simplified quite considerably, Restoring the reduced-mass dependence [we define r(N ) ≡ m e /m N ] and adding the relativistic correction of relative order (Zα) 2 , we find that Numerical data for β HFS for D, F , and G states, and selected Rydberg states, can be found in Tables I, II, III, and Table IV where the first term is due to the hyperfine effects calculated here, and the second term is due to relativistic recoil and QED effects calculated in Ref. [18]. The final theoretical prediction for the shift from the Dirac value is ∆ν hfs,1→2 = 175.524 556(13) Hz, where the fundamental constants of CODATA 2006 [19] have been used in the numerical evaluation. The next higher-order term neglected here is the recoil correction of relative order (Zα) 2 r(N ), for which a general expression has been derived in Ref. [20] [the corresponding expression also is given in Eq. (42) of Ref. [4]]. The recoil correction is numerically suppressed for Z = 1. VI. CONCLUSIONS Rydberg states of hydrogenlike ions with medium nuclear charge number have been proposed as a device for the determination of fundamental constants [18]. Here, we demonstrate that it is possible to obtain accurate theoretical predictions for transition frequencies even in cases where the nucleus carries spin. To this end, we calculate the self-energy correction to the hyperfine splitting of the high-lying states. Vacuum polarization effects can be neglected for states with ℓ ≥ 2 to the order relevant for the current investigation. We split the calculation into a low-energy part, which contains Bethe logarithm type corrections (Sec. III), and a high-energy part, which can be treated on the basis of electron form factors (Sec. IV). For the low-energy part, we find that the net result can be expressed as the sum of corrections due to the hyperfine Hamiltonian, due to the energy correction, due to the wave function correction, and due to the hyperfine modification of the electron's transition current. For the high-energy part, we find a sum of two terms, one of which is due to a second-order effect involving the Pauli form factor correction to the Coulomb field, and the second of which is an anomalous magnetic moment correction to the hyperfine splitting, evaluated on relativistic wave functions. The first correction can be split into two terms, which involve/do not involve mixing of the upper and lower components of the Dirac wave function, respectively. 
Quite surprisingly, the high-energy contribution can be expressed in closed analytic form, valid for an arbitrary excited state [see Eq. (55)]. For the Bethe-logarithm-type corrections relevant to the low-energy part, a numerical approach is indispensable. Finally, as indicated in Tables I-III, we also find results for D, F, and G states which are of general interest to high-precision spectroscopy.
17041950
s2orc/train
v2
2016-05-04T20:20:58.661Z
2014-07-25T00:00:00.000Z
Combined use of lysyl oxidase, carcino-embryonic antigen, and carbohydrate antigens improves the sensitivity of biomarkers in predicting lymph node metastasis and peritoneal metastasis in gastric cancer The purpose of this study was to determine whether lysyl oxidase (LOX) is a useful marker of metastasis in gastric cancer (GC) patients in combination with tumor markers carcino-embryonic antigen (CEA), carbohydrate antigen 724 (CA724), carbohydrate antigen 19-9 (CA19-9), and carbohydrate antigen 125 (CA125). There were 215 GC patients (67 without metastasis, 102 with lymph node metastasis, and 46 with peritoneal metastasis) who presented to the Affiliated Cancer Hospital of Guangxi Medical University between May 2009 and November 2012 that were enrolled in this study. The LOX expression level and the serum concentration of the four tumor markers were evaluated preoperatively. All patients underwent computed tomography (CT) and ultrasonography (US) before surgery. Statistical analysis, including receiver operating characteristic (ROC) curve analysis, area under the curve (AUC) analysis, and logistic regression analysis, was performed to evaluate the diagnostic value of these markers in predicting metastasis in GC. For predicting lymph node metastasis in GC, the sensitivity of LOX, CEA, CA724, CA199, and CA125 was 44.12, 12.75, 21.57, 23.53, and 15.69 %, respectively, and increased to 79.41 % in combination. For predicting peritoneal metastasis in GC, the sensitivity of these markers was 56.52, 23.91, 34.78, 36.96, and 34.78 %, respectively, and increased to 91.30 % in combination. Combining LOX with CEA, CA724, CA199, and CA125 could increase the sensitivity of predicting lymph nodes metastasis and peritoneal metastasis in GC. Surgeons can use these markers to determine the best treatment options for patients. Additional large-scale, prospective, multicenter studies are urgently needed to further confirm the results of this study. Introduction Gastric cancer (GC) is the fourth most common cancer and the second leading cause of cancer deaths worldwide [1]. Nearly half of GC cases occur in China, with an overall 5-year survival rate of approximately 20 % [2]. Most GC cases are diagnosed in advanced stages [3], and thus the opportunity for radical surgery is lost. Lack of early detection and limited treatment options contribute to the poor prognosis in GC [4]. As the prognosis of GC patients is closely related to timely diagnosis and appropriate treatment, an effective tumor biomarker is urgently needed for screening and diagnosis [5]. Advances in basic research and molecular biology mean that it should now be possible to detect effective tumor biomarkers to diagnose GC [6], thereby improving treatment options for patients with advanced GC metastasis. Lysyl oxidase (LOX) is a copper-dependent amine oxidase encoded by members of a five-gene family that includes LOX and four LOX-like proteins (LOXL 1-4) [7]. LOX controls both the structure and the tensile strength of the extracellular matrix and thus preserves tissue integrity [8]. Numerous studies have highlighted the role of LOX as a marker of tumor progression and metastasis, such as in bronchogenic carcinoma and in breast cancer, colorectal cancer, and ovarian cancer [9][10][11][12]. 
However, to the best of our knowledge, no studies have investigated the correlation of LOX expression and it predicts information for metastasis in GC patients, in condition of combine LOX with other tumor markers, such as carcino-embryonic antigen (CEA), carbohydrate antigen 724 (CA724), carbohydrate antigen 19-9 (CA19-9), and carbohydrate antigen 125 (CA125). The present study analyzed the association between LOX expression and its diagnostic significance for metastasis GC, in condition of combine LOX with serum tumor markers CEA, CA724, CA125, and CA199. Patients and tissue samples This study was approved by the Research Ethics Committee of the Affiliated Cancer Hospital of Guangxi Medical University in China. There were 215 patients with GC who were diagnosed in the hospital between May 2009 and November 2012 that were enrolled in this study. None of the patients had received preoperative adjuvant chemotherapy or radiotherapy. Written informed consent was obtained from all the patients. Fresh GC specimens were obtained by preoperative gastroscopy and were fixed in 10 % formalin and embedded in paraffin, and pathological examination was performed. Further postoperative pathological analysis was done for surgery patients. All the specimens were handled and anonymized according to ethical and legal standards. All the GC patients underwent diagnostic imaging with computed tomography (CT) or ultrasonography (US) prior to the surgery. According to the pathology report, the GC patients were divided into the following groups based on their degree of metastasis: (1, GC patients without metastasis; 2, advanced GC with lymph node metastasis; and 3, advanced GC with peritoneal metastasis. Immunohistochemistry The expression pattern of LOX in tissue samples was analyzed with the labeled streptavidin-peroxidase immunohistochemical (IHC) technique. Tissue slides were deparaffinized in xylene and rehydrated in graded series of ethanol, followed by heat-induced epitope retrieval in citrate buffer (pH 6.0). LOX expression was detected using a primary antibody against LOX (anti-LOX antibody, rabbit polyclonal to LOX, 1/300; Abcam, Cambridge, MA, USA). The degree of immunostaining was reviewed and scored by two pathologists, taking into account the percentage of positive cells and the staining intensity, as described by Hu et al. [13]. The immunostaining was classified into four groups, with the proportion of cell protein expression categorized as follows [13]: 0-10 % was recorded as 0, 10-30 % was recorded as 1, 30-50 % was recorded as 2, 50-75 % was recorded as 3, and >75 % was recorded as 4. Cell protein expression was then graded according to the sum of the scores: 1, Fig. 1a Blood samples were collected from each patient within 5-7 days before the surgery, and CEA levels were tested with a fluorescence-enzyme immunoassay. CA724, CA125 (Fujirebio Diagnostics, PA, USA), and CA19-9 (Immunotech, Marseille, France) were also measured with an immunoradiometric assay. The cut-off values for CEA, CA72-4, CA19-9, and CA125, were defined as 5.0 ng/ml, 5 U/ml, 37 U/ml, and 35 U/ml, respectively, according to literature reports on a Chinese population and the manufacturer's instructions [14][15][16]. Statistics The Chi-square test was used to evaluate the association between LOX expression and age, gender, tumor location, differentiation, depth of invasion, metastasis status. 
The area under the curve (AUC) of the receiver operating characteristic (ROC) curve was used to evaluate the predictive value of LOX, CEA, CA724, CA199, and CA125 for GC with different metastasis status. Multivariate logistic regression analysis was used to establish the diagnostic mathematical model. On the basis of this model, the prediction value was calculated, followed by ROC curve analysis. The statistical analysis was performed with the Statistical Package for the Social Sciences, version 16.0 (SPSS 16.0), with a P<0.05 considered to be significant. Results The IHC results revealed that 90 of the 215 (41.86 %) GC patients had different expression levels of LOX. The LOX expression pattern and clinic pathological factors are listed in Table 1. The LOX expression pattern was significantly correlated with tumor metastasis status (P<0.05), but it was not associated with age, gender, tumor location, differentiation, and depth of invasion (P>0.05). Overall, the sensitivity of LOX for predicting metastasis in GC (lymph node metastasis and peritoneal metastasis) was 47.97 %. For predicting lymph node metastasis in GC, the sensitivity of LOX expression was 44.12 %, and this increased to 56.52 % for predicting peritoneal metastasis. In all the GC patients, preoperative levels of CEA, CA724, CA19-9, and CA125 were above the cut-off levels (13.49, 21.40, 23.72, and 18.60 %, respectively). The effect estimates of diagnostic tests of the different markers are shown in Table 2 and Table 3. In predicting lymph node metastasis in GC, CA199 had the highest sensitivity (23.53 %), specificity (85.07 %), and accuracy (47.93 %) among the four serum tumor markers, and CEA had the worst sensitivity (12.75 %), specificity (92.54 %), and accuracy (44.38 %). In GC patients with peritoneal metastasis, CA199 had the highest sensitivity (36.96 %), and CEA had the lowest sensitivity (23.91 %). As the degree of metastasis increased, the positive rate of serum CEA, CA724, CA199, and CA125 increased. The sensitivity of the diagnostic imaging (CT or US) in lymph node metastasis patients and peritoneal metastasis patients was low, (7.84 and 15.22 %, respectively). The positive likelihood ratio and negative likelihood ratio of these markers in detecting different metastasis in GC are also presented in Table 2 and Table 3. As the sensitivity of a single serum tumor marker in predicting metastasis in GC was low, its potential in clinical application would be limited. Therefore, we analyzed the sensitivity when these markers were combined and obtained the AUC of ROC curve. We then calculated their diagnostic values in GC with different metastasis status. The combined markers yielded an ROC value of 0.682, which was significantly higher than that of the single marker (P<0.05) and better able to distinguish lymph node metastasis in GC (Table 2 and Fig. 2). In peritoneal metastasis patients, the ROC value of the five markers combined was 0.787, higher than each single marker (Table 3 and Fig. 3). Discussion Metastasis is one of the main causes of death in patients with GC tumors [17]. Early detection of metastasis and appropriate treatment are of critical importance for patient outcomes. For example, surgical resection with extensive lymphadenectomy was shown to result in better outcomes in GC involving the lymph nodes [18], and the positive effect of neoadjuvant intraperitoneal and systemic chemotherapy on patients with advanced GC and peritoneal dissemination has been demonstrated [19,20]. 
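To make the marker-combination rule and the diagnostic indices used in the Results concrete, here is a small sketch that assumes the serum cut-offs quoted in the Methods and a simple "positive if any marker is positive" rule; the patient values, confusion-matrix counts, helper names, and the strictness of the cut-off comparison are our assumptions, not the study's code.

```python
# Minimal sketch of the combined-marker rule and basic diagnostic indices;
# patient values and counts below are hypothetical.
cutoffs = {"CEA": 5.0, "CA724": 5.0, "CA199": 37.0, "CA125": 35.0}  # cut-offs quoted in Methods

def combined_positive(serum, lox_positive):
    """True if LOX IHC is positive or any serum marker exceeds its cut-off (assumed strict)."""
    return lox_positive or any(serum[m] > cutoffs[m] for m in cutoffs)

def sensitivity_ppv_accuracy(tp, fp, tn, fn):
    return tp / (tp + fn), tp / (tp + fp), (tp + tn) / (tp + fp + tn + fn)

# Hypothetical patient: LOX-negative but CA19-9 above cut-off -> combined result positive.
print(combined_positive({"CEA": 2.1, "CA724": 3.0, "CA199": 55.0, "CA125": 12.0},
                        lox_positive=False))
# Hypothetical confusion-matrix counts for a single marker.
print(sensitivity_ppv_accuracy(tp=50, fp=6, tn=70, fn=30))
```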
CT and US can help to predict metastasis GC, but many studies have shown that these are not reliable indicators of metastasis [21,22]. Our data showed that the sensitivity of these diagnostic imaging modalities in lymph node metastasis patients and peritoneal metastasis patients was only 7.84 and 15.22 %, respectively. The predictive value of PET/CT was high in local lymph node metastasis and distant metastasis in GC patients [23]. However, it is costly, and most patients are unable to afford the procedure. Laparoscopic exploration is less invasive than open surgery for diagnosing malignant abdominal disease [24]. However, it is also costly and time-consuming, and surgeons are reluctant to undertake it. Several of the most frequently used tumor markers, such as CEA, CA724, CA199, and CA125, provide additional diagnostic information in gastrointestinal malignancies [25,26], but the sensitivity of any one marker alone is not sufficient [27]. In our group, the sensitivity of CEA, CA724, CA199, and CA125 in the GC patients with lymph node metastasis was only 12.75, 21.57, 23.53, and 15.69 %, respectively. In the peritoneal metastasis patients, the sensitivity of these four markers was 23.91, 34.78, 36.96, and 34.78 %, respectively. As the diagnosis of GC is most often performed when the tumor is at an advanced stage [28], there is an urgent need to identify new markers (diagnostic methods) to provide appropriate treatment and improve prognoses. LOX was initially reported as a copper-dependent amine oxidase responsible for the catalysis of collagen and elastin cross-linking within the extracellular matrix [29]. A recent study highlighted the role of LOX family oxidases in promoting cancer metastasis [30]. LOX is highly expressed in invasive tumors, such as uveal melanoma, colorectal cancer, and gastric cancer, and it is closely associated with metastasis and poor patient outcomes [12,29,31]. Our study demonstrates that increased expression of LOX is correlated with an advanced stage of GC and that it may contribute to tumor development. This finding is consistent with that of Zhang et al. [29]. In lymph node metastasis and peritoneal metastasis in GC, the rate of LOX overexpression was 44.12 and 56.52 %, respectively, in the current study. Therefore, LOX is a correlative biomarker of metastasis in GC. However, based our results, the sensitivity and accuracy of LOX alone are limited (around 50 % and no more than 61 %, respectively). Therefore, the use of LOX alone does not meet the requirements of clinical practice. Several studies found that a combination of different tumor marker may improve diagnostic accuracy in gastrointestinal tract malignancies compared with single biomarkers alone. For example, Emoto et al. [32] showed that the combined use of CEA, CA199, CA725, and CA125 may improve the sensitivity of these biomarkers in detecting peritoneal metastasis in GC. Chen et al. [33] revealed that combining CA724 with CEA and CA199 considerably improves the sensitivity of these biomarkers in detecting GC, without impairing specificity. The choice of serum tumor markers to be combined with LOX requires further investigation to determine how to improve the sensitivity of these biomarkers in the detection of metastasis in GC. We carefully selected other serum tumor markers correlated with tumor invasion and combined these with LOX to improve the sensitivity of these in detecting metastasis in GC. 
Several studies revealed that CA724 and CA199 are correlated with invasive GC, lymph node involvement, and tumor stage [34][35][36][37][38][39][40] and that combined use of CEA with CA724 and CA199 considerably improves the positive rate, without impairing the specificity [41]. However, our results showed that the preoperative positivity of CEA, CA724, CA19-9, and CA125 was extremely low, making it a poor biomarker of lymph node metastasis and peritoneal metastasis in GC. When we combined all the markers, their sensitivity in detecting lymph node metastasis in GC was 79.41 %. The sensitivity for GC with peritoneal metastasis was 91.30 %, which was higher than when a single marker was used ( Table 2 and Table 3). The ROC curve analysis also revealed that the combination of all markers yielded a value of 0.682 for GC with lymph node metastasis and 0.787 for GC with peritoneal metastasis. These values were significantly higher than the sensitivity with one marker (P<0.05, Table 2 and Table 3). Interestingly, our study showed that in the GC patients with lymph node metastasis, CA125 was positive in only 15.69 % of cases, but it was positive in 34.78 % of GC cases with peritoneal metastasis. This finding is consistent with that reported in a study by Emoto et al. [32], who found that CA125 was correlated with the degree of peritoneal dissemination in GC and that it was highly sensitive in predicting peritoneal metastasis. We did not evaluate other tumor markers, such as carbohydrate antigen 50, alpha fetal protein (AFP), and carbohydrate antigen 242, because these markers are not commonly measured in GC patients, and very few studies have shown any association between these markers and lymph node or peritoneal metastasis in GC. For example, most AFP-positive GC was correlated with liver metastasis [33]. In summary, we found that LOX is a correlative tumor biomarker for GC with lymph node metastasis and peritoneal metastasis in a Chinese population. The combined use of LOX with other markers (LOX+CEA+CA724+CA199+CA125) could improve their sensitivity in predicting metastasis in GC. Our study has several limitations. First, this is a retrospective analysis with a relatively small sample from a single institute. A large, prospective, multicenter study is needed to demonstrate the predictive value of LOX in GC metastasis in combination with other tumor markers. Second, we could not accurately distinguish the metastasis N stage and the peritoneal dimensional status (P 0 , P 1 , P 2 , and P 3 ) because LOX expression was evaluated by qualitative detection, not by quantitative determination, and our sample size was not large. Third, uncontrolled or unmeasured confounding factors, such as selection bias in GC patients and potential laboratory errors in evaluating LOX expression, may have produced bias. Conclusions The combined use of LOX with CEA, CA724, CA199, and CA125 could increase the sensitivity of predicting lymph nodes metastasis and peritoneal metastasis in GC. Surgeons can use these markers to determine the best treatment options for patients. Additional large-scale, prospective, multicenter studies are urgently needed to further confirm the results of this study.
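For readers who want to reproduce the type of combined diagnostic model described in the Statistics section (a multivariate logistic regression over the five markers followed by ROC analysis), the sketch below runs an analysis of that kind on simulated marker data with scikit-learn; it is an illustration of the approach under simulated data, not the study's code or cohort.

```python
# Minimal sketch of a combined-marker diagnostic model: logistic regression over
# five binary marker results, scored by ROC AUC. Data are simulated, not the cohort.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 200
y = rng.integers(0, 2, n)                       # 1 = metastasis, 0 = no metastasis
# Five binary marker results (e.g., LOX, CEA, CA724, CA199, CA125), weakly informative.
X = np.column_stack([(rng.random(n) < 0.25 + 0.3 * y).astype(int) for _ in range(5)])

model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]           # combined diagnostic score per patient
print("AUC of combined model:", round(roc_auc_score(y, scores), 3))
```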
57373910
s2orc/train
v2
2019-01-01T12:04:06.000Z
2019-01-01T00:00:00.000Z
Adaptive Quantile Low-Rank Matrix Factorization Low-rank matrix factorization (LRMF) has received much popularity owing to its successful applications in both computer vision and data mining. By assuming noise to come from a Gaussian, Laplace or mixture of Gaussian distributions, significant efforts have been made on optimizing the (weighted) $L_1$ or $L_2$-norm loss between an observed matrix and its bilinear factorization. However, the type of noise distribution is generally unknown in real applications and inappropriate assumptions will inevitably deteriorate the behavior of LRMF. On the other hand, real data are often corrupted by skew rather than symmetric noise. To tackle this problem, this paper presents a novel LRMF model called AQ-LRMF by modeling noise with a mixture of asymmetric Laplace distributions. An efficient algorithm based on the expectation-maximization (EM) algorithm is also offered to estimate the parameters involved in AQ-LRMF. The AQ-LRMF model possesses the advantage that it can approximate noise well no matter whether the real noise is symmetric or skew. The core idea of AQ-LRMF lies in solving a weighted $L_1$ problem with weights being learned from data. The experiments conducted on synthetic and real datasets show that AQ-LRMF outperforms several state-of-the-art techniques. Furthermore, AQ-LRMF also has the superiority over the other algorithms in terms of capturing local structural information contained in real images. The key idea of LRMF is to approximate a given matrix by the product of two low-rank matrices. Specifically, given an observed matrix X ∈ R m×n , LRMF aims at solving the optimization problem min U,V ||Ω (X − UV T )||, (1) where U ∈ R m×r , V ∈ R n×r (usually, r min(m, n)) and denotes the Hadamard product, that is, the element-wise product. The indicator matrix Ω = (ω ij ) m×n implies whether some data are missing, where ω ij = 1 if x ij is non-missing and 0 otherwise. The symbol || · || indicates a certain norm of a matrix, in which the most prevalent one is L 2 norm. It is well-known that singular value decomposition provides a closed-form solution for L 2 -norm LRMF without missing entries. In addition, researchers have presented many fast algorithms to solve Eq. (1) when X contains missing entries, as well. The L 2 -norm LRMF greatly facilitates theoretical analysis, but it provides the best solution in sense of maximum likelihood principle only when noise is indeed sampled from a Gaussian distribution. If noise is from a heavy-tailed distribution or data are corrupted by outliers, L 2 -norm LRMF is likely to perform badly. Thereafter, L 1 -norm LRMF begins to gain increasing interest of both theoretical researchers and practitioners due to its robustness [12]. In fact, L 1 -norm LRMF hypothesizes that noise is from a Laplace distribution. As is often the case with L 2 -norm LRMF, L 1 -norm LRMF may provide unexpected results as well if its assumptions are violated. Because the noise in real data generally deviates far away from a Gaussian or Laplace distribution, analysts are no longer satisfied with L 1 -or L 2 -norm LRMF. To improve the robustness of LRMF, researchers attempt to directly model unknown noise via a mixture of Gaussians (MoG) due to its good property to universally approximate any continuous distribution [13,14]. Nevertheless, the technique cannot fit real noise precisely in some complex cases. For example, in theory, infinite Gaussian components are required to approximate a Laplace distribution. 
In practice, we only utilize finite Gaussian components due to the characteristics of MoG. On the other hand, Gaussian, Laplace and MoG distributions are all symmetric. In the conditions with real noise being skew, they may provide unsatisfactory results. As a matter of fact, there are no strictly symmetric noise in real images. For instance, Figure 1 illustrates several examples in which the real noise is either skewed to the left (e.g., (a-4) and (c-4)) or the right (e.g., (b-4)). In these situations, the symmetric distributions like Gaussian or Laplace are inadequate to approximate the noise. In statistics, to deal with an asymmetric noise distribution, a preliminary exploration called quantile regression has been made. Consider a simple case that there is only one covariate X, the quantile regression coefficient β can be obtained bŷ where {(y x , x i )} n i=1 are n observations and κ is a pre-defined asymmetry parameter. Moreover, the quantile loss ρ κ (·) is defined as In (c-4), the fitted ALD is with α = 33, κ = 0.75 and λ = 0.05. Obviously, the distributions of noise shown here are all asymmetric. 3 with I(·) is the indicator function. Evidently, the quantile loss with κ = 1/2 corresponds to the L 1 -norm loss. From the Bayesian viewpoint, the estimate obtained by minimizing the quantile loss in (2) coincides with the result by assuming noise coming from an asymmetric Laplace distribution (ALD) [15,16]. To overcome the shortcomings of existing LRMF methods that they assume the type of noise distribution, we present in this paper an adaptive quantile LRMF (AQ-LRMF) algorithm. The key idea of AQ-LRMF is to model noise via a mixture of asymmetric Laplace distributions (MoAL). The expectation maximization (EM) algorithm is employed to estimate parameters, under the maximum likelihood framework. The novelty of AQ-LRMF and our main contributions can be summarized as follows. (1). The M-step of the EM algorithm corresponds to a weighted L 1 -norm LRMF, where the weights encode the information about skewness and outliers. (2). The weights are automatically learned from data under the framework of EM algorithm. (3). Different from quantile regression, our method does not need to pre-define the asymmetry parameter of quantile loss, because it is adaptively determined by data. (4). Our model can capture local structural information contained in some real images, although we do not encode it into our model. The experiments show that our method can effectively approximate many different kinds of noise. If the noise has a strong tendency to take a particular sign, AQ-LRMF will produce better estimates than a method which assumes a symmetric noise distribution. In comparison with several state-of-the-art methods, the superiority of our method is demonstrated in both synthetic and real-data experiments such as image inpainting, face modeling, hyperspectral image (HSI) construction and so on. The rest of the paper is organized as follows. Section 2 presents related work of LRMF. In section 3, we propose the AQ-LRMF model and also provide an efficient learning algorithm for it. Section 4 includes experimental studies. At last, some conclusions are drawn in section 5. Related work The study of robust LRMF has a long history. Srebro and Jaakkola [17] suggested to use a weighted L 2 loss to improve LRMF's robustness. The problem can be solved by a simple but efficient EM algorithm. However, the choice of weights significantly affects its capability. 
Thereafter, Ke and Kanade [12] attempted to replace L 2 loss with L 1 loss and to solve the optimization by alternated linear or quadratic programming (ALP/AQP). In order to catalyze convergence, Eriksson and Hengel [18] developed the L 1 -Wiberg algorithm. Kim et al. [19] used alternating rectified gradient method to solve a large-scale L 1 -norm LRMF. The simulated experiments showed that this method performs well in terms of both matrix reconstruction performance and computational complexity. Okutomi et al. [20] modified the objective function of L 1 -Wiberg by adding the nuclear norm of V and the orthogonality constraints on U. This method has been shown to be effective in addressing structure from motion issue. Despite the non-convexity and non-smoothness of L 1 -norm LRMF, Meng et al. [21] proposed a computationally efficient algorithm, cyclic weighted median (CWM) method, by solving a sequence of scalar minimization sub-problems to obtain the optimal solution. Inspired by majorization-minimization technique, Lin et al. [22] proposed LRMF-MM to solve an LRMF optimization task with L 1 loss plus the L 2 -norm penalty placing on U and V. In each step, they upper bound the original objective function by a strongly convex surrogate and then minimize the surrogate. Experiments on both simulated and real data sets testify the effectiveness of LRMF-MM. Li et al. [23] considered a similar problem, but they replace the L 2 -norm penalty imposed on U with U T U = I. This model is solved by augmented Lagrange multiplier method. Furthermore, the authors of [23] designed a heuristic rank estimator for their model. As argued in introduction, L 1 loss actually corresponds to the Laplace-distributed noise. When the real distribution of noise deviates too far from Laplace, the robustness of L 1 LRMF will be suspectable. Recently, the research community began to focus on probabilistic extensions of robust matrix factorizations. Generally speaking, it is assumed that X = UV T +E, where E is a noise matrix. Lakshminarayanan et al. [24] replaced Gaussian noise with Gaussian scale mixture noise. Nevertheless, it may be ineffective when processing heavy-tailed (such as Laplace-type) noise. Wang et al. [25] proposed a probabilistic L 1 -norm LRMF, but they did not employ a fully Bayesian inference process. Beyond Laplace noise, Meng and Torre [13] presented a robust LRMF with unknown noise modeled by an MoG. In essence, the method iteratively optimizes min U,V,θ ||W(θ) (X − UV T )|| L2 , where θ are the MoG parameters which are automatically updated during optimization, and W(θ) is the weight function of θ. Due to the benefit to adaptively assign small weights to corrupted entries, MoG-LRMF has been reported to be fairly effective. More recently, Cao et al. [26] presented a novel LRMF model by assuming noise as a mixture of exponential power (MoEP) distributions and also offered the corresponding learning algorithm. On the other hand, robust principle component analysis (robust PCA) [27] considers an issue that is similar to LRMF, that is, The underlying assumption of robust PCA is that the original data can be decomposed into the sum of a low-rank matrix and a sparse outlier matrix (i.e., the number of non-zero elements in E is small). Clearly, A plays the same role as the product of U and V T . Since Eq. 
(4) involves a non-convex objective function, [27] consider a tractable convex alternative, called principal component pursuit, to handle the corresponding problem, namely, where || · || * denotes the nuclear norm. It is worthwhile that principal component pursuit sometimes may fail to recover E when the real observation is also corrupted by a dense inlier matrix. To overcome this shortcoming, Zhou et al. [28] proposed the stable principal component pursuit (SPCP) by solving Actually, the underlying assumption of SPCP is the sparse outliers and N is the small-magnitude noise that can be modeled by Gaussian. Both theory and experiments have shown that SPCP guarantees the stable recovery of E. Motivation Generally speaking, researchers employ the L 2 or L 1 loss function when solving a low-rank matrix factorization problem. As argued in introduction, L 2 or L 1 loss implicitly hypothesizes that the noise distribution is symmetric. Nevertheless, the noise in real data is often asymmetric and Figure 1 illustrates several examples. There are two face images and a hyperspectral image in Figure 1. Figure 1 (a) displays a face image captured with a poor light source. There are cast shadows in a large area, while there exists overexposure phenomenon in a small area. As a result, the noise is negative skew. By contrast, Figure 1 (b) illustrates a face image captured under a strong light source. Because of the camera range settings, there are saturated pixels, especially on the forehead. Under this circumstance, the noise is positive skew. Figure 1 (c) shows a hyperspectral image that is mainly corrupted by stripe and Gaussian noise. Its residual image indicates that the signs of the noise are unbalanced, i.e., more pixels are corrupted by noise with negative values. Actually, the skewness values of three residual (noise) images are −0.72, 0.69 and −0.55, respectively. Note that a symmetric distribution has skewness 0, the noise contained in these real data sets is thus asymmetric. As a matter of fact, the noise in real data can hardly be governed by a strictly symmetric probability distribution. Therefore, it is natural to utilize an asymmetric distribution to model realistic noise. In statistics, researchers usually make use of a quantile loss function defined in (3) to address this issue. It has been shown that quantile loss function corresponds to the situation that noise is from an asymmetric Laplace distribution [15,16]. In order to improve the performance of low-rank matrix factorization, we attempt to use a mixture of asymmetric Laplacian distributions (MoAL) to approximate noise. Asymmetric Laplace distribution In what follows, we use AL( |α, λ, κ) to denote an ALD with location, scale and asymmetric parameters α, λ > 0 and 0 < κ < 1, respectively. Its probability distribution function (PDF) is Obviously, the location parameter α is exactly the mode of an ALD. In Figure 2, we demonstrate the PDF curves for several ALDs with different parameters. The asymmetry parameter κ controls the skewness of an ALD and sk ALD ∈ (−2, 2). In general, an ALD is positive skew if 0 < κ < 0.5, and is negative skew if 0.5 < κ < 1. If κ = 0.5, the ALD becomes a Laplace distribution. The smaller the scale parameter λ is, the more heavy-tailed ALD is. It is worthwhile that skew Gaussian distributions [29] are also prevailing in both theory and applications. However, it is not ideal for the analysis of LRMF. On one hand, the PDF of a skew Gaussian distribution is complex. 
On the other hand, its skewness lies in (−1, 1) which is only a subset of the range of sk ALD . Due to this fact, the fitting capability of an ALD is greater than that of a skew Gaussian distribution. AQ-LRMF model To enhance the robustness of LRMF in situations with skew and heavy-tailed noise, we propose an adaptive quantile LRMF (AQ-LRMF) model by modeling unknown noise as an MoAL. In particular, we consider a generative model of the observed matrix X ∈ R m×n . For its each entry x ij , suppose that there is where u i is the ith row of U, v j is the jth row of V, and ij is the noise term. In AQ-LRMF, we assume that ij is distributed as an MoAL, namely, in which AL s ( ij |0, λ s , κ s ) stands for an asymmetric distribution with parameters α = 0, λ = λ s and κ = κ s . Meanwhile, π s indicates the mixing proportion with π s ≥ 0 and S s=1 π s = 1, and S means the number of mixture components. To facilitate the estimation of unknown parameters, we further equip each noise ij with an indicator vector z ij = (z ij1 , z ij2 , · · · , z ijS ) T where z ijs ∈ {0, 1} and S s=1 z ijs = 1. Here, z ijs = 1 indicates that the noise ij is drawn from the sth AL distribution. Evidently, z ij follows a multinomial distribution, i.e., z ij ∼ M(π 1 , · · · , π S ). Under these assumptions, we can have Now, it is easy to obtain the probability of x ij as where Λ = {λ 1 , λ 2 , · · · , λ S }, K = {κ 1 , κ 2 , · · · , κ S } and Π = {π 1 , π 2 , · · · , π S } are unknown parameters. To estimate U, V as well as Λ, K, Π, we employ the maximum likelihood principle. Consequently, the goal is to maximize the log-likelihood function of complete data shown below, namely, where Ω denotes the index set of the non-missing entries of data. Subsequently, we will discuss how to maximize (U, V, Λ, K, Π) to get our interested items. Learning of AQ-LRMF Since each x ij associates with an indicator vector z ij , the EM algorithm [30] is utilized to train the AQ-LRMF model. Particularly, the algorithm needs to iteratively implement the following two steps (i.e., E-step and M-step) until it converges. For ease of exposition, we let e ij = x ij − u i v T j and abbreviate AL s (e ij |0, λ s , κ s ) as AL s (e ij ) in the following discussions. E-step: Compute the conditional expectation of the latent variable z ijs as In order to attain the updating rules of other parameters, we need to compute the Q-function. According to the working mechanism of EM algorithm, the Q-function can be obtained by taking expectation of the 8 log-likelihood function shown in (12) with regard to the conditional distribution of the latent variables Z ij1 , Z ij2 , · · · , Z ijS . Specifically, it can be derived as where M-step: Maximize the Q-function by iteratively updating its parameters as follows. (1). Update π s : To attain the update for π s , we need to solve the following constrained optimization via the Lagrangian multiplier method. By some derivations, we have in which N stands for the cardinality of Ω. (2). Update λ s : Compute the gradient ∂Q ∂λs and let it be zero. Consequently, the update of λ s can be obtained as (3). Update κ s : Compute the gradient ∂Q ∂κs and let it be zero, we can have where the coefficients η s = λ s (i,j)∈Ω γ ijs e ij . It is a two-order equation with regard to κ s . Obviously, Eq. (19) has a unique root satisfying 0 < κ s < 1, that is, 9 (4). Update U, V: By omitting some constants, the objective function to optimize U, V can be rewritten where the (i, j)th entry of W is Hence, the optimization problem in Eq. 
(21) is equivalent to the weighted L1-LRMF, which can be solved by a fast off-the-shelf algorithm such as the cyclic weighted median (CWM) method [21]. It is interesting that the M-step in AQ-LRMF is the same as that of MoG-LRMF [13], except that the latter minimizes a weighted L2 loss. Due to this feature, AQ-LRMF is more robust than MoG-LRMF and has more capacity to process heavy-tailed, skew data. Based on the above analysis, we summarize the main steps to learn the parameters involved in AQ-LRMF in Algorithm 1. Essentially, CWM minimizes the objective by solving a series of scalar minimization subproblems. Let ũ_i and ṽ_i be the ith columns of U and V, respectively. To update v_ji, we assume that the other parameters have been estimated. As a result, the original problem can be rewritten as an optimization problem with respect to v_ji, where E_i = X − Σ_{j'≠i} u_{j'} v_{j'}^T, w̃_j and ẽ_ij are the jth columns of W and E_i, respectively, and, in Eq. (23), c denotes a constant term not depending on v_ji. In this way, the optimal v_ji, say v*_ji, can be easily obtained with the weighted median filter when minimizing Eq. (23). Specifically, if we let e = w̃_j ⊙ ẽ_ij and u = w̃_j ⊙ ũ_i, we can reformulate Eq. (23) as Eq. (24), from whose form it can be seen that the optimal v*_ji coincides with the weighted median of the sequence {e_l/u_l}_{l=1}^m under the weights {|u_l|}_{l=1}^m. The update of u_ij can be handled by a similar procedure. In short, the optimal U, V can be obtained by employing CWM to repeatedly update v_ij (i = 1, ..., n; j = 1, ..., r) and u_ij (i = 1, ..., m; j = 1, ..., r) until the algorithm converges.
Some details of learning AQ-LRMF. Tuning the number of components S in MoAL: too large an S violates Occam's razor, while too small an S leads to poor performance. Consequently, as described in step 8 of Algorithm 1, we employ an effective method to tune S. To begin with, we initialize S to a small number such as 4, 5, ..., 8. After each iteration, we compute the cluster that x_ij belongs to by C(i, j) = arg max_s γ_ijs. If no entry belongs to cluster s, we remove the corresponding ALD component. Initialization: in Algorithm 1, we initialize the (i, j)th entry of U as 2ξ_ij c − c, where ξ_ij denotes a random number sampled from the standard Gaussian distribution N(0, 1). In addition, c = x̄/r, where x̄ is the median of all entries in X. Due to the characteristics of U and V, we initialize V similarly. Moreover, the parameters λ_s and κ_s are randomly sampled from [0, 1]. Convergence condition: following the common practice of the EM algorithm, we stop the iteration if the change of ||U|| is smaller than a pre-defined value or the maximum number of iterations is reached.
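To make the learning procedure concrete, the following minimal sketch implements one reading of the EM loop in Python with NumPy. The zero-mode ALD parametrization, the decision to keep κ_s fixed in the M-step, the restriction to fully observed data, and all function and variable names are our own simplifications for illustration; this is not the authors' released code or the exact Algorithm 1.

```python
import numpy as np

def weighted_median(values, weights):
    """Minimizer of sum_l weights[l] * |values[l] - v| (lower weighted median)."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cdf = np.cumsum(w)
    return v[np.searchsorted(cdf, 0.5 * cdf[-1])]

def ald_logpdf(e, lam, kappa):
    # Zero-mode asymmetric Laplace density in one common parametrization
    # (assumed here): f(e) = lam*kappa*(1-kappa)*exp(-lam*rho),
    # with the check loss rho = e*(kappa - 1{e < 0}).
    rho = e * (kappa - (e < 0))
    return np.log(lam * kappa * (1.0 - kappa)) - lam * rho

def aq_lrmf_sketch(X, rank, S=4, iters=30, seed=0):
    """Illustrative EM loop for LRMF under mixture-of-ALD noise (fully observed X)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    c = np.median(X) / rank                      # initialization scale from the text
    U = 2.0 * c * rng.standard_normal((m, rank)) - c
    V = 2.0 * c * rng.standard_normal((n, rank)) - c
    pi = np.full(S, 1.0 / S)
    lam = rng.uniform(0.1, 1.0, S)               # positive scales
    kappa = rng.uniform(0.2, 0.8, S)             # kept away from 0/1 for stability
    for _ in range(iters):
        E = X - U @ V.T
        # E-step: responsibilities gamma[i, j, s].
        logp = np.stack([np.log(pi[s]) + ald_logpdf(E, lam[s], kappa[s])
                         for s in range(S)], axis=-1)
        logp -= logp.max(axis=-1, keepdims=True)
        gamma = np.exp(logp)
        gamma /= gamma.sum(axis=-1, keepdims=True)
        # M-step for pi and lam (kappa is kept fixed in this simplified sketch).
        for s in range(S):
            g = gamma[..., s]
            rho = E * (kappa[s] - (E < 0))
            pi[s] = g.mean()
            lam[s] = g.sum() / max((g * rho).sum(), 1e-12)
        # Weighted-L1 subproblem for U, V: freezing the residual signs turns the
        # asymmetric losses into weights W_ij multiplying |x_ij - u_i v_j^T|.
        W = sum(gamma[..., s] * lam[s] * np.where(E >= 0, kappa[s], 1.0 - kappa[s])
                for s in range(S))
        # One CWM sweep: each entry of V, then of U, is a weighted median.
        for k in range(rank):
            Ek = X - U @ V.T + np.outer(U[:, k], V[:, k])
            for j in range(n):
                u, e = W[:, j] * U[:, k], W[:, j] * Ek[:, j]
                ok = np.abs(u) > 1e-12
                if ok.any():
                    V[j, k] = weighted_median(e[ok] / u[ok], np.abs(u[ok]))
            Ek = X - U @ V.T + np.outer(U[:, k], V[:, k])
            for i in range(m):
                u, e = W[i, :] * V[:, k], W[i, :] * Ek[i, :]
                ok = np.abs(u) > 1e-12
                if ok.any():
                    U[i, k] = weighted_median(e[ok] / u[ok], np.abs(u[ok]))
    return U, V
```

On synthetic data such as X = U0 @ V0.T plus noise, calling aq_lrmf_sketch(X, rank=4) returns factor estimates; the paper's Algorithm 1 additionally handles missing entries, prunes empty components, and updates κ_s via the quadratic equation discussed above.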
Experimental Studies. We carried out experiments in this section to examine the performance of the AQ-LRMF model. Several state-of-the-art methods were considered, including four robust LRMF methods (namely, MoG [13], CWM [21], Damped Wiberg (DW) [31], and RegL1ALM [20]) and a robust PCA method (SPCP solved by a quasi-Newton method) [32]. We wrote the programming code for CWM; for the other compared algorithms, the codes provided by the corresponding authors were used. Since SPCP is not applicable in the presence of missing entries, it was excluded from the experiments which involve missing data. Notice that DW is only considered in section 4.1 because it runs into "out of memory" problems for large-scale datasets. In the meantime, we assigned the same rank to all the considered algorithms except SPCP, since it can determine the rank automatically. To make the comparison fairer, all algorithms were initialized with the same values. Each algorithm was terminated when either 100 iterative steps were reached or the change of ||U|| was less than 1 × 10^-50. In order to simplify notation, our proposed method AQ-LRMF is denoted as AQ in later sections.
Synthetic experiments. First, we compared the behavior of each method on synthetic data containing different kinds of noise. For each case, we randomly generated 30 low-rank matrices X = UV^T of size 40 × 20, where U ∈ R^{40×r} and V ∈ R^{20×r} were sampled from the standard Gaussian distribution N(0, 1). In particular, we set r to 4 and 8. Subsequently, we stochastically set 20% of the entries of X as missing data and corrupted the non-missing entries with three groups of noise. It is worthwhile to mention that the two mixture noises simulate the noise contained in real data, where most entries are corrupted by standard Gaussian noise and the remaining entries are corrupted by heavy-tailed or skew noise. To evaluate the performance of each method, we employed the average L1 error between the ground-truth matrix and the recovered one. In our experiments, all algorithms were run with the true rank r. Tables 1 and 2 summarize the metrics averaged over the 30 randomly generated matrices. When r = 4, it is quite obvious that our method reaches the minimum L1 error in all situations, while MoG and CWM mostly take second place, and the approaches RegL1ALM and DW can hardly deal with the heavy-tailed and skew noise. From the results for r = 8, similar conclusions can be drawn. However, CWM evidently outperforms MoG when r = 8, which indicates that MoG may be unstable when the real rank of the observed data is high. In summary, our model copes very well with different kinds of noise. To delve into the difference between AQ and MoG, we further compared the distributions of the residuals with the real noise. Specifically, two symmetric and two asymmetric cases are illustrated in Figure 3. Here, the distributions fitted by AQ and MoG correspond to those that reach the maximum likelihood over the 30 random experiments. It is obvious that AQ does a much better job of approximating the real noise than MoG. In particular, AQ almost provides a duplicate of the real noise. In contrast, MoG is able to fit the tails, while, at the same time, it gives a bad approximation to the peaks. Hence, AQ has more power in fitting complex noise than MoG.
Image inpainting experiments. Image inpainting is a typical image processing task. In real applications, some parts of an image may be deteriorated so that the corresponding information is lost. To facilitate the understanding of the image, a sophisticated technique needs to be adopted to recover the corrupted parts of the image. This is exactly the objective of image inpainting. There is evidence that many images are low-rank matrices, so that single-image inpainting can be done by matrix completion [33]. In image inpainting, the corrupted pixels are viewed as missing values and then the image can be recovered by an LRMF algorithm. In this paper, three typical RGB images of size 300 × 300 × 3 were employed. In our experiments, each image was reshaped to 300 × 900. Following the common practice in image inpainting research, we artificially corrupted the given images by putting some masks onto them.
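The masked-pixels-as-missing-values setup just described can be sketched as follows. The SVD-imputation routine below is only a simple stand-in for an LRMF solver such as AQ-LRMF; the function name, the random mask, and the iteration count are our own illustrative choices.

```python
import numpy as np

def inpaint_lowrank(img, mask, rank=80, iters=20):
    """Treat masked pixels as missing and fill them with a low-rank estimate.
    Simple hard-impute baseline standing in for AQ-LRMF; `mask` is True
    where a pixel is corrupted."""
    X = img.astype(float).copy()
    X[mask] = img[~mask].mean()               # crude initialization
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X[mask] = low_rank[mask]              # only overwrite corrupted pixels
    return X

# Example: a synthetic (300, 300, 3) image reshaped to 300 x 900, with a
# random mask removing 20% of the pixels, mimicking the protocol above.
rng = np.random.default_rng(0)
img = rng.random((300, 300, 3)).reshape(300, 900)
mask = rng.random(img.shape) < 0.2
restored = inpaint_lowrank(img, mask, rank=80)
```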
Corrupting the images in this way makes it convenient to examine how well each method restores the originals. Three kinds of masks were considered, namely, a random mask where 20% of the pixels were stochastically removed, and text masks with big and small fonts, respectively. The rank was set to 80 for all algorithms. Figure 4 visualizes the original, masked and reconstructed images, and Table 3 reports the average L1 errors of each algorithm. It is obvious that removing a random mask is the easiest task. In this situation, there is no significantly visible difference among the reconstructed images. In terms of average L1 error, MoG performs best and AQ ranks second. In contrast, the results shown in Figure 4 and Table 3 indicate that text mask removal is more difficult, especially when the images are corrupted with big fonts. The main reason is that the text mask is spatially correlated, while it is difficult for any LRMF algorithm to effectively utilize this type of information. Under these circumstances, it can be observed in Figure 4 and Table 3 that AQ outperforms the other methods in removing the text masks with regard to both average L1 error and visual quality. RegL1ALM and MoG perform badly, and clear text can often be seen in their reconstructed images. Although CWM produces slightly better results, its average L1 error is still higher than that of AQ. In a word, AQ possesses superiority over the other algorithms in the investigated image inpainting tasks. In particular, AQ achieves the smallest average L1 error in 6 cases and the second smallest one in 2 cases. Figure 4: The original, masked and inpainted images.
Multispectral image experiments. In this subsection, we study the behavior of all algorithms in image denoising tasks. The Columbia Multispectral Image database was employed, where every scene contains 31 bands of size 512 × 512. The rank was set to 4 for all algorithms.
Face modeling experiments. Here, we applied the LRMF techniques to address the face modeling task. The Extended Yale B database, consisting of 64 images of size 192×168 for each subject, was considered; each subject therefore leads to a 32256×64 matrix. In particular, we used the face images of the third and fifth subjects. The first column of Figure 5 demonstrates some typical faces for illustration. We set the rank to 4 for all methods except SPCP, which determines the rank automatically. The second to sixth columns of Figure 5 display the faces reconstructed by the compared LRMF algorithms. From Figure 5, we can observe that all methods are able to remove the cast shadows, saturations and camera noise. However, the performance of SPCP seems to be worse in comparison with the other algorithms. Evidently, AQ consistently outperforms the other methods, producing visibly cleaner reconstructions. As shown in Figure 1, the noise in a face with a large dark region follows an asymmetric distribution. Because of this, the techniques MoG, CWM, RegL1ALM and SPCP, which utilize symmetric loss functions, lead to poor results, while AQ with the quantile loss function produces the best reconstructed images.
Hyperspectral image experiments. In this subsection, we employed two HSI datasets, Urban and Terrain, to investigate the behavior of all algorithms. There are 210 bands, each of which is of size 307 × 307 for Urban and 500 × 307 for Terrain. Thus, the data matrix is of size 94249 × 210 for Urban and 153500 × 210 for Terrain.
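As a side note on how such hyperspectral cubes are fed to the factorization methods, the short sketch below flattens an (H, W, B) cube into the (H·W) × B matrix quoted above and estimates the skewness of the residual left by a plain rank-r SVD fit, in the spirit of the skewness values reported for Figure 1. The synthetic cube and all names are placeholders, not the Urban or Terrain data.

```python
import numpy as np
from scipy.stats import skew

def hsi_to_matrix(cube):
    """Stack each band of an (H, W, B) hyperspectral cube as one column,
    giving the (H*W) x B data matrix used by the LRMF methods."""
    H, W, B = cube.shape
    return cube.reshape(H * W, B)

def residual_skewness(X, rank):
    """Skewness of the residual after a plain rank-r SVD fit; a clearly
    non-zero value indicates asymmetric noise (cf. the motivation section)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return skew((X - low_rank).ravel())

# Example with a small synthetic cube standing in for a real scene.
rng = np.random.default_rng(0)
cube = rng.random((64, 64, 31))
print(residual_skewness(hsi_to_matrix(cube), rank=4))
```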
Here, we utilized the same experimental settings as those used in subsection 4.4. DW was still unavailable in this experiment due to the computational problem. As shown in the first column of Figure 6, some bands are seriously polluted by atmosphere and water absorption. The reconstructed images of bands 106 and 207 in the Terrain data set and band 104 in the Urban data set are shown in Figure 6 (a), (c) and (e), respectively. Their residual images (i.e., X − Û V̂^T) are also shown below the reconstructed ones. Obviously, band 106 in Terrain is seriously polluted. Nevertheless, our proposed AQ method still effectively reconstructs a clean and smooth band. Although MoG, CWM and RegL1ALM remove most of the noise, they miss part of the local information, namely the line from the upper left corner to the bottom right-hand side (i.e., the parallelogram marked in the original image). As for SPCP, it removes only a small part of the noise. The residual images also reveal that AQ deals better with the detailed information. Note that band 207 in Terrain and band 104 in Urban are mainly corrupted by stripe and Gaussian-like noise. Under these circumstances, AQ still outperforms the others because the latter fail to remove the stripe noise. In particular, for the areas of interest that are marked by rectangles and shown in the amplified views, the bands reconstructed by MoG, CWM, RegL1ALM and SPCP contain evident stripes. In the reconstructed images produced by AQ, however, this phenomenon does not exist. We conjecture that the main reason for the different behavior of these algorithms lies in the loss functions they use. For CWM, RegL1ALM and SPCP, too simple a loss function prevents them from working well when encountering complicated noise. In contrast, AQ and MoG perform better because they use multiple distribution components to model the noise. It is very interesting to study the difference between AQ and MoG. For these two algorithms, we found that both approximate the noise in the three considered bands with two components. For AQ (MoG), we denote them as AQ1 and AQ2 (MoG1 and MoG2), respectively. In Figure 7, we present the de-noised images and residual images produced by each component. Take the de-noised image in the column AQ1 as an example: it corresponds to Û V̂^T + AQ2, and the residual image shown below it corresponds to AQ1 (i.e., X − Û V̂^T − AQ2). The other images can be understood similarly. In doing so, we can further figure out the role that each component in AQ or MoG plays. When dealing with band 106 in Terrain, the first AQ component is seen to de-noise the center parts, while the second one targets the left and right edges. For band 207 in Terrain, the two AQ components de-noise the bottom and the remaining parts, respectively. Regarding band 104 in Urban, they focus on the upper right and center parts, respectively. By inspecting the results generated by MoG, however, we cannot discover regular patterns for the roles that its two components play. Therefore, it can be concluded that AQ can capture the local structural information of real images, even though we do not encode it into our model. The reason may be that pixels with the same skewness in real images tend to cluster. In this aspect, AQ also possesses superiority over MoG.
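The per-component analysis described above boils down to the hard assignment C(i, j) = argmax_s γ_ijs from the E-step. A minimal sketch, reusing the same assumed ALD parametrization as before; the mixture parameters and the synthetic residual are invented purely for illustration:

```python
import numpy as np

def ald_logpdf(e, lam, kappa):
    # Same assumed zero-mode asymmetric-Laplace parametrization as earlier.
    rho = e * (kappa - (e < 0))
    return np.log(lam * kappa * (1 - kappa)) - lam * rho

def component_map(E, pi, lam, kappa):
    """Hard assignment C(i, j) = argmax_s gamma_ijs of each residual pixel to a
    mixture component; reshaped to the band's height/width, this gives a map
    like the per-component analysis discussed above (our illustration)."""
    S = len(pi)
    logp = np.stack([np.log(pi[s]) + ald_logpdf(E, lam[s], kappa[s])
                     for s in range(S)], axis=-1)
    return np.argmax(logp, axis=-1)   # argmax of log-joint equals argmax of gamma

# Example: a synthetic residual with a negatively skewed region in the center.
rng = np.random.default_rng(0)
E = rng.laplace(scale=0.05, size=(307, 307))
E[100:200, 100:200] -= rng.exponential(0.3, size=(100, 100))
C = component_map(E, pi=[0.5, 0.5], lam=[20.0, 3.0], kappa=[0.5, 0.8])
```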
Conclusions. Aiming at enhancing the performance of existing LRMF methods in coping with the complicated noise encountered in real applications, we propose in this work a new low-rank matrix factorization method, AQ-LRMF, to recover subspaces. The core idea of AQ-LRMF is to directly model unknown noise by a mixture of asymmetric Laplace distributions. We also present an efficient procedure based on the EM algorithm to estimate the parameters in AQ-LRMF. The objective function of AQ-LRMF corresponds to an adaptive quantile loss like those used in quantile regression. Compared with several state-of-the-art counterparts, the novel AQ-LRMF model outperforms them in almost all of the synthetic and real data experiments. In addition, AQ-LRMF is also superior in capturing local structural information in real images. Therefore, AQ-LRMF can be deemed a competitive tool for coping with complex real problems.
Validation and Normative Data on the Verbal Fluency Test in a Peruvian Population Ranging from Pediatric to Elderly Individuals In neuropsychological evaluation, verbal fluency is a crucial measure of cognitive function, but this measure requires standardized and normative data for use. The present study aimed to obtain validation and normative data for the verbal fluency task in the Peruvian population, with participants ranging from 6 to 94 years and varying in age, educational level, and sex. We recruited 2602 healthy individuals and used linear regression analysis to determine the effect of age, sex, and educational level. We also evaluated internal consistency between categories and phonological tasks with Cronbach’s alpha and Pearson’s correlation analysis and calculated test-retest reliability after three months. We found significant effects of age, educational level, and sex on phonological and semantic fluency. Participants with more than 12 years of education had the highest scores overall. Regarding age, middle-aged participants (between 31 and 40 years old) had the highest scores; scores gradually decreased outside of this age range. Regarding sex, men performed better than women. These results will increase the ability of clinicians to precisely determine the degree to which verbal fluency is affected in patients of different ages and educational levels. Introduction Verbal fluency (VF) tasks are a group of neuropsychological assessments widely used in clinical practice and research. These tasks consist of naming as many words that follow a series of orthographic or semantic rules as possible in a given period (usually 60 s) [1]. Each list of words follows a specific criterion, such as starting with a particular letter (phonological VF) or mentioning words in a category (semantic VF) [2,3]. Despite the apparent similarity between these two types of tasks due to the use of language as a critical component [4] and search strategies through memory [5,6], neuroimaging studies have indicated that they utilize different underlying brain circuits. Phonological fluency is related to greater activation of the frontal lobe and executive function [4], while semantic fluency is an alternative to the lexicon and lexical access and requires activation of the temporal lobe [7][8][9]. In addition, the VF tests are quick and easy to administer and sensitive to cognitive impairment in a variety of disorders [1], facilitating the detection of early stages of neurodegenerative diseases such as mild cognitive impairment, Alzheimer's disease [10,11], Huntington's disease [12], attention-deficit/hyperactive disorders [13], traumatic brain injury [14] and aphasia [15]. VF performance is evaluated by recording the total number of words produced during the task and the number of words retrieved during individual portions. During the first 30 s, healthy subjects provide approximately two-thirds of their total words, followed by a drastic increase in the retrieval period [14,16] because the effort required to produce words increases with time, necessitating progressive increases in attention and executive control as the task continues. The interpretation of VF results may differ from that of the normative data used for comparison [16]. However, the cognitive processes involved in the evaluation are similar. 
Phonological fluency is widely interpreted as a strategic search-and-retrieval measure within orthographic or phonological networks that involves a series of higher-order functions (working memory, inhibition, and alternation). Other research has highlighted substantial contributions of verbal intelligence and information processing speed to phonological fluency in healthy populations [1]. For semantic fluency, subjects must generate and follow a strategy to efficiently explore the semantic network. In general, healthy subjects exploit the internal organization of this network to explore a semantic category (for example, fruits). Then, they flexibly move between different subcategories or elements to select one of the available options. Afterward, the subjects must extract entries from semantic memory and monitor and verify the output to avoid repetitions or out-of-category responses. Finally, subjects must maintain an "active" state during task execution to address the limited time available for production [11]. Other features related to VF performance include vocabulary size, lexical access speed, updating, and inhibition, which are mainly associated with the speed of the first responses [17]. Semantic VF has been assessed in more than fifteen languages, including Indo-European, Semitic, Sino-Tibetan, Austroasiatic, Dravidian, and even Amerindian languages [18]. Similarly, phonological VF has been assessed in different languages, including Spanish, regardless of participants' first language and ethnic background, and is helpful for diagnostic purposes [19]. Although cognitive evaluation is a crucial part of the clinical approach in neuropsychology or clinical psychology, in the Peruvian context, many important instruments, such as the VF test, are not standardized or lack normative data. Furthermore, test performance is usually influenced by sociodemographic variables such as age, educational level, and sex [3,16,20]. Several studies have documented the clear need to obtain normative data from a country to interpret the results of neuropsychological tests [21,22]. According to Hazin et al. [23], significant differences in children's performance on formal academic performance tests among regions are observed only in developing countries [22]. Additionally, a review by Ramírez et al. [24] suggested possible cultural effects, in addition to variables such as age and schooling, on the VF results obtained in Hispanic samples; however, their results were contradictory. Despite sharing the same language, Hispanic countries may differ in the quality of education, which could generate masking patterns unique to a particular ethnic group [25]. Professionals should be cautious when applying qualification standards because of the variability in demographic factors among geographic regions, which can interfere with performance [26]. In Peru, a large and regionally diverse country, normative data are needed considering the variation in a number of factors, including languages spoken (monolingual or bilingual), residential area (urban or rural), and educational level (including illiteracy). A recent contribution [8] obtained normative data for the VF test from eleven countries in Latin America, including Peru. However, the age range spanned only 6 to 17 years; thus, these data remain insufficient. Given the above factors, it is necessary to determine specific parameters of the Peruvian population to determine the influence of different demographic variables on VF performance. 
This study aimed to obtain validation and normative data on the VF task in the Peruvian population ranging from 6 to 94 years of age, accounting for age, educational level, and sex. Participants An initial sample of 3524 individuals was recruited from Arequipa, Lima, and Chiclayo, Peru. After applying the inclusion and exclusion criteria, the final selection consisted of 2602 healthy participants. The ages in this study ranged from 6 to 94 years, and 55.3% of participants were female. Participants were selected according to the following criteria (which vary according to age group): (1) verbal or written consent to participate provided by the subject or caregiver, legal guardian, or another proxy; (2) IQ > 85 as evaluated with the computerized version of the Raven progressive matrix test or the test of nonverbal intelligence (TONI version II); (3) absence of cognitive impairment (in older people) as indicated by a Mini-Mental State Examination (MMSE) score ≥ 24; (4) without depression, as determined by scores on the Children's Depression Inventory (CDI; children), the Hamilton Depression Scale (young and middle-aged individuals), or the Geriatric Depression Scale (GDS; elderly individuals); (5) no history of neurological or psychiatric disease according to clinical history or psychological assessment; (6) no sensorimotor or language impairment; and (7) use of Spanish as their primary language or an extensive history of speaking Spanish (more than 20 years). Further details are provided in Figure 1. Participants were recruited from public and private schools, a technological institute, and senior centers in Arequipa, Lima, and Chiclayo. After obtaining approval from each institution, subjects were informed about the study's purposes and provided verbal or written consent. The present study was an instrumental study [27]. Testing Procedure Subjects were tested individually in a quiet room in their specific location (school, institute, or clinic). The sequence in which tests were administered was identical for all subjects. The procedure included two to three sessions (almost two hours). Participants were tested at 10 am and provided 15 min to relax between sessions. We used letters (phonological fluency: F-A-S-M-R-P) and categories (semantic fluency: animals and fruits) in the VF test because these rules are the most studied in the literature [3,4]. All participants were native Spanish speakers. Non-native Spanish speakers were not included in this study. Participants were given the following instructions to assess phonological fluency (representative examples provided): "I am going to say a letter of the alphabet, and I would like you to say as many words as you can think of that start with that letter, excluding proper nouns (i.e., names of people or places). Are you ready? You have one minute, and the letter is P." "Now we will try a different letter. Similar to the previous task, please say as many words as you can think of that start with the new letter, avoiding proper nouns (i.e., names of people or places). The new letter is F." To assess semantic fluency, participants were provided with the following instructions (representative example provided): "Now, please name as many animals as you can that start with any letter. Again, you have one minute. Start now".
These age groups allowed larger groups at specific ages. We also present descriptive statistics and percentiles for males and females to assess sex differences. Educational level was divided into the following three categories: between 1 and 6 years of education (primary school), between 7 and 11 years (secondary school), and more than 12 years (technical school or university). We believe that our stratification system allows more realistic and ecological assessments of Peruvian VF performance. Ethical Statement The study complied with the ethical considerations related to clinical trials, and all methods were performed according to the relevant guidelines and regulations of the Declaration of Helsinki. All participants were informed about the aims and risks of this study and provided written or verbal informed consent. For minors, parents provided informed consent. Institutional approval was obtained from each institution (public and private schools, regular primary education: IE Florentino Portugal, IE San Pablo, IE Miguel Grau; secondary education: IES San Jose; and health centers-Peru Ministry of Health [MINSA]). In addition, a Local Research Ethics Committee (Neuroscience Group Ethics Committee; CEI number 001-2020) approved the study. All data were collected in an anonymous database. Data Analysis The sociodemographic characteristics of the participants included in the study were compared with t tests and chi-square tests. A linear regression analysis assessed the effect of age, sex, and educational level. Performance significantly differed according to educational level and age. No effect of sex was found after adjusting for age and educational level. We investigated different age groups, ranging from 6 to 94 years old (every five years). Nine educational level groups were considered. Therefore, the sample was stratified according to age, educational level, and obtained percentile (Table 1). These effects were assessed by multivariate analysis of variance (MANOVA). Additionally, Cronbach's alpha was calculated to evaluate internal consistency and intraclass correlation coefficients (ICCs) were calculated between categories and phonological tasks to assess the reliability of each measure. We also performed a Spearman correlation analysis with a subsample (n = 179) who underwent the same protocol after three months (post-test; semantic category: animals, phonological letter: P) for test-retest reliability. Statistical analysis was performed with SPSS version 24 (SPSS, Chicago, IL, USA). Significant results are indicated with * p < 0.05 and ** p < 0.01. Results We present the results of 2602 healthy participants. The total correct phonological and semantic VF responses were first calculated for statistical analysis. Table 1 shows the mean, standard error of the mean (SEM), standard deviations (SDs), perseveration errors and perseveration rates for each letter and semantic category. Correct answers for each letter according to educational level (three categories) are shown in Table 2. We found significant effects of age, educational level, and sex (Table 3) on phonological and semantic fluency, as revealed by MANOVA. The highest scores in phonologic fluency were from participants with more than 12 years of education. In contrast, middle-aged participants (between 31 and 40 years old) had the highest scores on semantic fluency; scores gradually decreased outside of this age range. Males exhibited better performance than females. 
Pearson correlation analysis was performed and ICCs were calculated (Table 4) for each letter category in phonologic fluency and both semantic categories (animals and fruits) for semantic fluency. As shown in Table 4, the correlations between letter performance (F-A-S-M-R-P) ranged from 0.693 to 0.863. The correlation of semantic fluency (between the two semantic categories) was 0.690. The ICCs for phonologic and semantic fluency were 0.954 and 0.811, respectively. Moreover, test-retest reliability was evaluated with Spearman correlation analysis (Table 5) in a subsample of participants (n = 179). There was a significant correlation between pre-and post-test performance (rho = 0.3.666, p = < 0.001 **). Normative data are presented in Supplementary Material (descriptive statistics and percentile tables). We considered different percentiles regarding educational level, which had the most impact on VF performance. Table 5. Spearman correlation coefficients for phonologic and semantic fluency between pre-and post-test performance. p < 0.05 *, p < 0.01 **. Finally, a linear regression was performed to verify the relationship of phonologic (R 2 = 0.187) and semantic fluency (R 2 = 0.076) with age, sex, and educational level (see Tables 6 and 7). The variables selected explained almost 18% of the variance in phonologic fluency and 7% of the variance in semantic fluency. We believe that our sample is representative and unlikely to be conditioned or influenced by the sociodemographic variables studied. Thus, 82% of the variability in phonological fluency and 93% of the variability in semantic fluency may be explained by other cognitive variables, such as processing speed, working memory, and executive function. Discussion This study specifically attempted to obtain normative data on the neuropsychological VF test in the Peruvian population. We recruited more than 2600 healthy native Spanish speakers. These individuals varied in educational level and age, ranging from 6 years to 94 years (seventeen age groups). Percentiles were obtained for age, educational level, and sex (see Supplementary Material). Education directly influences performance on several neuropsychological tests and modifies the brain's functional organization after exposure to reading and writing [3]. We found that participants with more than twelve years of education had better scores on each letter category (phonological fluency) and semantic category (semantic fluency). Thus, the years of education are highly correlated with performance on this test [3,25]. Ratcliff et al. [28] reported that educational level influenced phonological fluency more than semantic fluency and that participants with fewer years of education generated fewer words. However, according to Ostrosky-Solis et al. [3], age is the most robust predictor of verbal fluency in highly educated people (with >10 years of education). Nonetheless, the total semantic fluency score of individuals with 0 to 4 or 5 to 9 years of education is most strongly influenced by educational level without a significant contribution of age. This effect may be due to the educational ranges included in most studies, which range from participants with little or no education to those with up to 8 years of formal education [29]. Our results regarding the influence of age on performance are similar to those of other studies. Previous studies have shown that increased age is associated with significant decreases in incorrect words and increases in repeated words [6,14]. 
Phonological fluency usually exhibits a curvilinear relationship with age, with an increase in fluency between the third and fourth decades followed by a gradual decrease. In contrast, semantic fluency shows a linear decline with age [4]. The semantic advantage persists until the eighth decade of life [6]. Generally, phonological VF requires more elaborate organization and retrieval strategies than semantic VF [30]; thus, these differences in difficulty persist throughout life [2]. In addition, previous studies have reported inconsistent effects of sex on VF performance. Many studies have not detected significant sex differences, while others have found that women exhibited superior performance [4]. Our data indicate a male advantage, in contrast to the results obtained by Mitrushina et al. [31] and Vaughan et al. [6], who reported a significant effect of sex on F-A-S performance, with better performance exhibited by women. In our study, men performed slightly worse on the phonological test (using the letter "F") but produced, on average, 0.5 more examples than women on the semantic fluency test (animal category). These sex differences are further complicated by sex differences in familiarity with specific semantic categories. The present study provided validation and normative data on the performance of people between 6 and 94 years of age on phonological and semantic fluency tasks; such data from Peruvians were previously lacking. Our results will increase the ability of clinicians to precisely determine the severity of VF impairment in patients of different ages and educational levels and to make differential diagnoses of other disorders. Nonetheless, this study has some limitations: the sample is not representative of rural populations or of speakers of Peru's other languages, the explained variance in the regression analyses is relatively low, the results cannot be generalized to individuals outside Peru, and some subgroups were small. Conclusions Educational level directly influenced VF during schooling and was highly correlated with VF performance. In our study, participants with more than twelve years of education performed better on each letter category (phonological fluency) and semantic category (semantic fluency). Age-based changes in phonological fluency typically assume a curvilinear pattern, with an increase in phonological fluency between the third and fourth decades, followed by a gradual decline. In contrast, semantic fluency exhibited a linear decrease with age. The semantic advantage persists into the eighth decade of life. In addition, previous studies have reported inconsistent effects of sex on VF performance. We found that men exhibited better performance overall: although male performance was slightly worse on the phonological task (using the letter "F"), male participants produced, on average, 0.5 more examples than women on the semantic fluency task (animal category). These sex differences are further complicated by sex differences in familiarity with specific semantic categories.
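For readers who want to reproduce the reliability and regression summaries reported above on their own data, the following sketch computes Cronbach's alpha, an OLS R², and a Spearman test–retest correlation with standard formulas; the simulated scores, sample sizes, and variable names are placeholders, not the study data.

```python
import numpy as np
from scipy.stats import spearmanr

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def r_squared(y, predictors):
    """R^2 of an ordinary least-squares fit y ~ predictors (with intercept)."""
    X = np.column_stack([np.ones(len(y)), predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Simulated stand-in data: per-letter scores, age, sex, years of education.
rng = np.random.default_rng(0)
n = 2602
edu = rng.integers(1, 18, n)
age = rng.integers(6, 95, n)
sex = rng.integers(0, 2, n)
letters = rng.poisson(8 + 0.3 * edu[:, None], size=(n, 6))   # F-A-S-M-R-P

print("Cronbach's alpha:", cronbach_alpha(letters))
pre = letters.sum(axis=1).astype(float)
print("R^2 (phonological ~ age + sex + education):",
      r_squared(pre, np.column_stack([age, sex, edu])))
post = pre + rng.normal(0, 3, n)          # stand-in for the 3-month retest
rho, p = spearmanr(pre, post)             # test-retest reliability
print("Spearman rho:", rho, "p:", p)
```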
Entanglement entropy and $T\bar T$ deformations beyond antipodal points from holography We consider the entanglement entropies in dS$_d$ sliced (A)dS$_{d+1}$ in the presence of a hard radial cutoff for $2\le d\le 6$. By considering a one parameter family of analytical solutions, parametrized by their turning point in the bulk $r^\star$, we are able to compute the entanglement entropy for generic intervals on the cutoff slice. It has been proposed that the field theory dual of this scenario is a strongly coupled CFT, deformed by a certain irrelevant deformation -- the so-called $T\bar T$ deformation. Surprisingly, we find that we may write the entanglement entropies formally in the same way as the entanglement entropy for antipodal points on the sphere by introducing an effective radius $R_\text{eff}=R\,\cos(\beta_\epsilon)$, where $R$ is the radius of the sphere and $\beta_\epsilon$ related to the length of the interval. Geometrically, this is equivalent to following the $T\bar T$ trajectory until the generic interval corresponds to antipodal points on the sphere. Finally, we check our results by comparing the asymptotic behavior (no Dirichlet wall present) with the results of Casini, Huerta and Myers. We then switch on counterterms on the cutoff slice which are important with regards to the field theory calculation. We explicitly compute the contributions of the counterterms to the entanglement entropy by considering the Wald entropy. In the second part of this work, we extend the field theory calculation of the entanglement entropy for antipodal points for a $d$-dimensional field theory in context of DS/dS holography. We find excellent agreement with the results from holography and show, in particular, that the effects of the counterterms in the field theory calculation match the Wald entropy associated with the counterterms on the gravity side. Introduction One remarkable development in the recent years has been a novel access to irrelevant (nonrenormalizable) deformations in two dimensional quantum field theories (QFTs). Unlike the usual irrelevant deformations, the so-called TT deformation [1][2][3] has the intriguing feature that it is -unlike the usual irrelevant deformations -exactly solvable. Starting from a generic seed QFT, we are able to define a trajectory from the IR to the UV in the field theory space triggered by deforming the QFT with a TT deformation in each step. Even through the theory flows towards the UV, we are still able to derive a lot of interesting quantities in exact form simply from possessing an understanding of undeformed theory. These quantities include the finite volume spectrum, the S-matrix and the deformed classical Lagrangian -all of which have been extensively discussed in the literature (see [39] for lecture notes). An interesting approach to TT deformations is the proposal of a holographic dual by McGough, Mezei, and Verlinde [4] in order to use the powerful toolkit provided by holographic dualities for studying problems in strongly coupled field theories. From a bulk perspective, deforming a field theory by an irrelevant deformation has drastic effects on the UV behavior. McGough, Mezei, and Verlinde conjectured to simply chop off the asymptotic region of the spacetime. In other words, deforming the conformal field theory (CFT) by the TT operator is dual to introducing a hard radial cutoff (Dirichlet wall) at a finite radial position r = r c in the bulk. 
The hard radial cutoff removes the UV region of the spacetime and the dual field theory which lives on the cutoff surface is no longer conformal. For Anti-de Sitter (AdS) this was more extensively studied in [11]. Note that we are using the AdS/CFT correspondence in the weak form throughout this work which means that we are working with a strongly coupled CFT at large N on the field theory side dual to weakly coupled classical gravity. One interesting aspect of quantum theories -especially with regards to quantum information -is the entanglement of quantum states. The entanglement entropy provides a measure of how much quantum information is stored in a specific quantum state and it may be defined in the universal language of quantum fields (although explicit calculations are extremely difficult to do). Calabrese and Cardy developed a powerful approach to calculate entanglement entropies in QFTs by applying so-called replica trick to entanglement entropy calculations in 2D QFTs [40]. For strongly coupled field theories, however, there is a very elegant way to compute entanglement entropies. Based on the observation that the Bekenstein-Hawking entropy is proportional to the area of the black hole, Ryu and Takayanagi [41] derived that the entanglement entropy of a subsystem may be computed holographically by computing the area of minimal surface in the bulk enclosing the subsystem. The authors of [12] were able to give further evidence in favor of the conjecture of [4] by showing that the entanglement entropy for antipodal points in a two-dimensional CFT deformed by a TT deformation matches the entanglement entropy computed in AdS 3 in presence of a hard radial cutoff. This analysis has been extended to higher dimensions [13,14,34,35], and to dS 3 [36] in the context of the DS/dS duality which we will review shortly. This leads to the question -what happens to the entanglement entropy for intervals different from antipodal points? On the field theory side, this seems to be a notoriously hard question to ask. The authors of [42] were able to calculate the first order corrections for a field theory in Minkowski space while the authors of [6] estimated the entanglement entropy for subintervals. We will answer this question on the gravitational side of the duality and derive the exact form of the entanglement entropy in general dimensions. While the AdS/CFT-correspondence provides us with a definition of quantum gravity in AdS, quantum gravity in dS has yet to be established. One proposal for how to apply holography to dS is the so-called DS/dS correspondence [43] which is based on uplifting the AdS/CFT correspondence [44][45][46][47]. The basic idea of DS/dS becomes apparent when we express the metric of D = d+1-dimensional (Anti-)de-Sitter space with curvature radius L as a warped space given by the metric where the radial direction is denoted by r and the warpfactors L sin(r/L) and L sinh(r/L) correspond to dS and AdS, respectively. In both cases, the warpfactors vanish linearly at the horizon, located at r/L = 0. In dS, we see that the warpfactor has a maximum at the central "UV slice" (r/L = π/2), whereas the AdS warpfactor is growing boundlessly for r → ∞. It is interesting to note that the bulk AdS and dS spacetime are identical in the highly redshifted region r/L 1 since sin(h)(r/L) ∼ r/L. For dS d sliced AdS D (1.1), we have a well-established description of the CFT living in dS d in terms of the AdS/CFT-correspondence. 
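For reference, the warped metric (1.1) described in words above can be written out explicitly; this is our reconstruction from the quoted warpfactors, and the normalization of the slice metric may differ in detail from the authors' conventions:

$$ ds^2_{(\mathrm{A})\mathrm{dS}_{D}} \;=\; dr^2 \;+\; f(r)^2\, ds^2_{\mathrm{dS}_d}\,, \qquad f(r) \;=\; \begin{cases} L\,\sin(r/L), & \mathrm{dS}_D\,,\\ L\,\sinh(r/L), & \mathrm{AdS}_D\,, \end{cases} $$

where $ds^2_{\mathrm{dS}_d}$ denotes the metric on the $d$-dimensional de Sitter slices. In this form the statements above are immediate: the dS warpfactor peaks at the central slice $r/L = \pi/2$, the AdS one grows without bound, and both vanish linearly at $r/L = 0$.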
Since the two spacetimes are indistinguishable in the IR region, the authors of [43] conjectured that infrared degrees of freedom of the CFT dual to AdS D are also a holographic dual for the infrared region of dS D . By this identification, we are able to establish a holographic dual to dS. The authors of [36] showed in d = 2 how to systematically derive this dual by first starting with the CFT dual to AdS D ; by deforming the theory with the TT operator, they were able to remove the UV part of the geometry. In the IR region AdS and dS are identical and CFT dual of AdS (via the AdS/CFT-correspondence) is also the CFT dual of dS; by deforming the theory by yet another TT deformation, we can "grow back" the UV part of the spacetime -this time for DS D instead of the asymptotic AdS region. One natural question is, how do these TT deformations look in higher dimensions? The UV regions corresponding to dS and AdS are quite different from one another; the fact that the warpfactor in the dS case reaches its maximum in the UV means that the dual CFT intrinsically possesses a cutoff in the UV. In contrast, the warpfactor of AdS grows without bound. Another difference occurs in dS where there is a second near horizon region beyond the central slice at r/L = π -meaning that there is a second dual CFT. Furthermore, the author of [48] showed that the dual CFT also contains dynamical gravity. Last but not least, since the origin of the DS/dS duality being the AdS/CFT correspondence, we may infer how to calculate entanglement entropies in the d-dimensional field theory in terms of minimal surfaces in the D-dimensional geometry [41] as was explored in [49,50]. In fact, the authors of [50] found a one parameter family of entangling surfaces which all reproduce the dS entropy correctly. This means that independent of the turning point of the entangling surfaces in the bulk, we will always end up with the same area. The paper is organized as follows: the first part consists of deriving the entanglement entropies for arbitrary intervals in (A)dS D in presence of a hard radial cutoff. This extends the results of [12,34,36] from antipodal points to generic intervals in both AdS and dS. In the second part, we generalize the work of [36] to higher dimensions. We compute the entanglement entropies on the field theory side for a CFT deformed by a TT deformation dual to dS D with a hard radial cutoff. The calculation follows [34], where this has been derived for a field theory dual to AdS D with a hard radial cutoff. Finally, we compare the field theory results to the results obtained from the gravitational theory. Dirichlet walls and Entanglement Entropy in holography In this section, we will compute the entanglement entropy in (A)dS with a Dirichlet wall, that is located at r = r c . We consider the metric (1.1) for (A)dS D in dS slicing in static coordinates 1 In these coordinates, the horizon is located at r = 0, the AdS boundary at r = ∞ and the dS central slice at r/L = π/2. We want to calculate entanglement entropies associated with spherical entangling surfaces centered around the center of the static patch for an observer located at According to the Ryu-Takayanagi formula, our task at hand is to calculate the surface minimizing the area. We will do this by committing to a parametrization and determining the entangling surfaces by solving the Euler-Lagrange equations. Dirichlet walls and Entanglement Entropy in dS We start by studying the entangling surfaces in dS. 
This has been done in previous work by the author in [50]. Concretely, the authors found a one parameter family of entangling surfaces which all correctly reproduce the dS entropy. These surfaces may be found by considering the standard "U"-shaped surfaces that are hanging down towards the IR and are parametrized in terms of β(r) 2 3) The equations of motions associated with the Lagrangian are solved by [50] β(r) = arcsin [tan (r /L) / tan (r/L)] , (2.4) where r is the turning point of the entangling surface. These surfaces all reach the cosmological horizon (located at β = 0) for r/L = π/2 with the first derivative vanishing. The integration constant has been chosen in a way so that we reach β = π/2 for r/L = r /L. As pointed out in [50], computing the area of these surfaces always leads to the whole dS entropy and is independent of the value of r . However, since the second derivative is non-vanishing on the UV slice, the integral will lead to different values if we introduce a UV cutoff 3 . We will place this hard Dirichlet cutoff on which the entangling surfaces end at More precisely, the entangling surfaces will not all go to β = 0 anymore but depending on the position of the turning point r , scan through all possible values of β with the value of β on the cutoff surface given by β = arcsin(tan(r /L)/ tan( /L)). This is already the case for the AdS spacetime with no cutoff present. As we will see, by solving the integral for the entanglement entropy, the entanglement entropy gets smaller for smaller intervals (larger values of β ). The Dirichlet wall "eats up" the entangling surfaces for increasing values of ε due to the requirement r /L > ε/L (Fig. 2). In order to present the entanglement entropy in a compact way, we switch to yet another parametrization for the entangling surfaces r(β) in which the entangling surfaces minimize the Lagrangian 6) and are given by r(β) = L arccot (sin(β)/ tan(r /L)) . (2.7) The entanglement entropy follows by computing the area of the minimal surfaces; evaluating the Lagrangian (2.6) on the analytical solution (2.7) and integrating from the cutoff surface at β = β to the turning point of the entangling surfaces β = π/2 gives us the area where 2 F 1 is the hypergeometric function 2 F 1 (a, b; c; z). EE dS denotes the the full dS entropy which we get for the special cases r = 0 (studied in [36]) or = 0 (studied in [50]). We see that the entanglement entropy gets smaller for > 0 and r > 0. Since r is a bulk variable which does not have any obvious field theory interpretation, we want to eliminate it from the result. This may be done by using the analytical solution (2.7) once more by calculating the position of the turning point (β = π/2) r = L arctan R sin(β ) √ L 2 −R 2 , where we also introduced the radius R = L sin( /L) on the slice which is determined by evaluating the warpfactor for the position of the cutoff surface. With this, we finally arrive at (2.9) Dirichlet walls and Entanglement Entropy in AdS The entanglement entropies of the preceding section may be interpreted in terms of the DS/dS correspondence. In this section we will focus on its parent, the AdS/CFT correspondence and mimic the calculation of the preceding section for AdS. In contrast to dS, AdS may be sliced in AdS, flat, or dS slicing. 
AdS d sliced AdS D follows from dS d sliced dS D by Wick rotation of both, the D-dimensional curvature constant and the d = D−1-dimensional curvature constant on the slice 4 , while dS d sliced AdS D follows by only Wick rotating the D-dimensional curvature constant; the latter will be used in this work. In this spirit, the entangling surfaces are the solution to the equations of motion following from the Lagrangian It is not hard to find the solution to the equations of motion, given by β(r) = arcsin(tanh(r /L)/ tanh(r/L)). (2.11) Analogous to the dS case, we introduce a hard radial cutoff at r/L = /L, with the corresponding radius of the sphere on the cutoff surface given by R = L sinh( /L). Note that the turning point of the entangling surface in the bulk at r is related to the position β where the entangling surface ends on the Dirichlet wall by β = arcsin(tanh(r /L)/ tanh( /L)). We may calculate the entanglement entropy by evaluating the Lagrangian for the analytical solution and integrating along the entangling surface to yield the minimal area (2.12) To solve this integral, it was convenient to switch variables by introducing the auxiliary variable y 2 = −1 + cosh(r/L) 2 / cosh(r /L) 2 , which transforms (2.12) to (r (β)) 2 + L 2 cosh 2 r L , with analytical solution r(β) = L arctanh (cosh(β) tanh(r /L)). Entanglement entropies for general intervals on the sphere In equation (2.8) and (2.13), we derived expressions for the entanglement entropies for generic intervals in the presence of a Dirichlet wall which follow from the minimal area surfaces by (2.14) By varying the starting point of the entangling surfaces in the bulk r , we are able to change the size of the interval on the sphere and thus calculate the entanglement entropy for subintervals. The case r = 0 corresponds to antipodal points on the sphere; smaller intervals on the sphere occur for larger values of r . The radius R of the sphere appears in equations (2.8) and (2.13), but only in combination with the cosine of the ending point of the entangling surfaces on the cutoff surface R cos(β ); it is therefore useful to introduce an effective radius R eff (β ) = R cos(β ). Introducing the effective radius makes it apparent that the entanglement entropies of the one parameter family still have the same form as the entanglement entropy of the special case r /L = 0 (β = 0), which is for AdS D and for dS 3 known in the literature [34,36]; the entanglement entropies are decreasing for increasing β . For the sake of convenience, we list the results for the entanglement entropies in D = 3 to D = 7 and we label them with the dimension d = D − 1 of the dual field theory. In the spirit of [36], we introduce η, with η = 1 corresponding to AdS and η = −1 to dS. Furthermore, the (h) in expressions arcsin(h) corresponds to the AdS case. The entanglement entropies read The results are more straightforward if seen from a geometric perspective (see figure 1 and 2). From the definition of the cosine, we see that the effective radius corresponds to the sphere where the endpoints of the interval are north and south pole. Without the cutoff, the one parameter family of entangling surfaces in the dS case (found in [50]) are all just great circles on the sphere with the limiting surfaces r = 0 and r = πL/2 corresponding to the equator and crossing over the north pole. Since they are all half-circles on the sphere, they all have the same area. 
If we introduce a cutoff surface at r/L = /L, the surfaces all yield to a different area and thus to a different entanglement entropy. The Dirichlet wall cuts the one parameter family into surfaces of different length, depending on r . As a result, we are able to calculate the entanglement entropy for different intervals on the circle. As shown in the graphic, those surfaces may be rotated along the sphere until they correspond to a half-circle again; the half-circle has the radius R eff = R cos(β (r )). On this half-circle, the entangling surface corresponds to the entanglement entropy of two antipodal points; moving the cutoff surface up to the effective radius may also be done by following the TT trajectory. In the AdS case, this may be done by rotating the entangling surface up to the apex of the cone with a spacetime rotation and then applying a special conformal transformation to bring the entangling surface on the surface of the cone; these transformations map the points of a generic interval on a sphere with radius R to antipodal points on a sphere with radius R eff . It is important to note that the angle β measures how much the interval gets smaller compared to an interval of antipodal points on the sphere. The case β = 0 corresponds to antipodal points. In order to measure the length of the interval, it makes sense to introduce the angle δ = π/2 − β, with R eff = R cos θ = R sin δ. In order to further confirm our results, we expand the results for pushing the cutoff surface to the boundary. We reach the boundary for R → ∞ (AdS) and R = L (dS), respectively. Introducing the cutoff Λ, the entanglement entropies for AdS read which matches the result of Casini, Huerta and Myers [51]. The results for d = 2 match the The entangling surface for r /L = π/3 -(θ, r) are the polar and azimuthal angles, respectively, in the static patch of Euclidean dS 3 in presence of a cutoff (magenta surface). The cutoff surface restricts the entangling surface to the bolder line. We can rotate this surface by θ 0 = π/3 to bring it to the top of the sphere. If we draw a line through the ending points, we see that this corresponds exactly to a cutoff surface with radius R eff = R cos(β (r )), which is depicted in blue. By rotating the surface on the circle, we see that the entangling surface exactly corresponds to the half-circle, i.e. the interval consists of antipodal points. The field theory lives on the circle on the magenta surface. Right: The analogous picture for Euclidean AdS 3 . Note that the transformation consists of a spacetime rotation and a special conformal transformation. well known field theory result for a subsystem in a system of length L [40, 52, 53] with the cutoff a (a → 0). In the dS case, ∆ in (2.8) vanishes and we get back the full dS entropy as was observed in [50]. Renormalization and generalized entanglement entropies In general, if we consider entanglement entropies, we expect the result to be divergent, i.e. the leading divergence is the so-called area term [54][55][56][57][58]. In CFT calculations, however, we usually work with renormalized quantities instead of the bare ones since those quantities are universally well defined and still make sense when we take the continuum limit. The TT deformation, on the other hand, acts as UV regulator and all quantities are automatically finite. 
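As a side remark before turning to counterterms, the way the effective radius interpolates between the limiting cases can also be scanned numerically. The sketch below is an illustration with arbitrarily chosen parameters; r_star and eps denote the turning point and the cutoff position as before. It shows R_eff = R cos(beta_eps) decreasing monotonically from the full cutoff radius R at r_star = 0, i.e. antipodal points, as the interval shrinks, for both the AdS (sinh/tanh) and the dS (sin/tan) warp factors used above.

```python
import numpy as np

# Effective radius R_eff = R*cos(beta_eps) as a function of the turning point
# r_star, for the AdS and dS warp factors quoted in the text.
L, eps = 1.0, 1.2   # arbitrary units; eps/L < pi/2 so the dS case is defined

def effective_radius(r_star, ads=True):
    if ads:
        R = L * np.sinh(eps / L)
        beta_eps = np.arcsin(np.tanh(r_star / L) / np.tanh(eps / L))
    else:
        R = L * np.sin(eps / L)
        beta_eps = np.arcsin(np.tan(r_star / L) / np.tan(eps / L))
    return R * np.cos(beta_eps)

for r_star in [0.0, 0.3, 0.6, 0.9, 1.1]:
    print(r_star, effective_radius(r_star, ads=True), effective_radius(r_star, ads=False))
# r_star = 0 reproduces R_eff = R (antipodal points); R_eff shrinks monotonically
# as r_star grows, i.e. as the interval on the sphere gets smaller.
```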
In principle, we could add an arbitrary amount of counterterms to the dual effective field theory action but we will restrict ourselves to only considering the standard holographic counterterms ( [59,60]). If we add counterterms to the field theory action, living on the cutoff slice, these finite counterterms will affect the result for the entanglement entropy (see for example [24] for a discussion about this) [61][62][63][64][65]. In the discussion of the next section, we will consider the renormalized stress tensor on the cutoff slice which can be derived by supplementing the gravitational action with counterterms in order to render it finite as explained in [59,60]. Specifically, we are considering the action whereR,R ab are the Ricci scalar-and tensor, respectively, on the boundary slice with the induced metric γ. Furthermore, the c (2.30) However, this is a statement about the bare partition functions of both theories. If we renormalize the field theory partition function, we have to take into account the counterterms in the gravitational theory which act as higher curvature terms on the cutoff slice. Thus, we may take into account the counterterms on the cutoff slice by adding the contributions of the Wald entropy associated with the counterterms to the holographic entanglement entropy. The Wald entropy [66] is given by [67][68][69] whereˆ ab are the binormals to the horizon. For pedagogical reasons, we rewrite the metric eq. (2.1) with R(r) = L sin(h)(r/L), R ≡ R(r c ) and ρ = cos φ (2.32) In these coordinates, we see that on the cutoff slice r/L = r c /L in the static patch, theˆ τ ρ , are the binormals and we have to vary the Lagrangian in eq. (2.31) with respect toR τ ρτ ρ in order to find the Wald entropy. For the counterterms given in eq. (2.29), we thus find on the slice r/L = r c /L for an entangling surface with ρ = cos(β ) and R eff = R(r c ) ρ where h ab is the induced metric on the unit sphere and where we used that on the cutoff slicẽ R = d (d − 1)/R 2 eff and R ab = (d − 1)/R 2 eff h ab . Evaluating the expression in eq. (2.33) for 3 ≤ d ≤ 6, we find Note that in d = 2, we do not see a contribution from the counterterms to the entanglement entropy since the counterterm acts as a boundary cosmological constant. d-dimensional TT deformations in field theory In the second part of this work we take a closer look at the field theory side and compute the entanglement entropy for antipodal points in general dimensions in context of DS/dS. In order to find the entanglement entropies, we have to derive the analog of the higher dimensional TT like deformation for dS. As in the preceding section, we establish a notation in which the AdS and the dS case go hand in hand. We keep the derivations in this chapter short and refer the interested reader to [12,13,[34][35][36]. The d-dimensional deforming operator for holographic stress tensors We may extract the Brown-York stress tensor from the renormalized action eq. (2.26) by considering where γ is the induced metric on the cutoff slice. The stress tensor of the boundary field theory T bdy ij is related to the bulk stress tensor by rescaling T BY ij = r d−2 c T bdy ij . This is also true for the metric of the CFT: g ij (r = r c , x) = γ ij (x) = r 2 c γ bdy ij (x). For a complete dictionary on the cutoff slice see [13]. In the following discussion, we will set r c = 1. 
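Returning briefly to the counterterm evaluation above: it uses the standard curvature of a round sphere of radius R_eff, namely a Ricci scalar of d(d-1)/R_eff^2 and a Ricci tensor equal to (d-1)/R_eff^2 times the induced metric. As an independent sanity check, not part of the derivation, the following sympy snippet computes the Ricci scalar of a two-sphere of radius R directly from its metric and recovers 2/R^2, the d = 2 instance of this identity.

```python
import sympy as sp

# Verify the d = 2 instance of Rscalar = d*(d-1)/R**2 for a round sphere by
# computing the Ricci scalar of S^2 of radius R from its metric.
theta, phi, R = sp.symbols('theta phi R', positive=True)
coords = [theta, phi]
g = sp.diag(R**2, R**2 * sp.sin(theta)**2)   # round metric on S^2 of radius R
ginv = g.inv()
n = len(coords)

def christoffel(a, b, c):
    # Gamma^a_{bc} = 1/2 g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
    return sp.Rational(1, 2) * sum(
        ginv[a, d] * (sp.diff(g[d, c], coords[b]) + sp.diff(g[d, b], coords[c])
                      - sp.diff(g[b, c], coords[d]))
        for d in range(n))

def ricci(b, c):
    # R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ab}
    #          + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ab}
    return sp.simplify(sum(
        sp.diff(christoffel(a, b, c), coords[a]) - sp.diff(christoffel(a, a, b), coords[c])
        + sum(christoffel(a, a, d) * christoffel(d, b, c)
              - christoffel(a, c, d) * christoffel(d, a, b) for d in range(n))
        for a in range(n)))

Rscalar = sp.simplify(sum(ginv[b, c] * ricci(b, c) for b in range(n) for c in range(n)))
print(Rscalar)   # -> 2/R**2, i.e. d*(d-1)/R**2 with d = 2
```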
The holographic stress tensor dual to a d-dimensional field theory may be expressed in terms of the extrinsic curvature K ij and the induced quantities on the boundary slice: the metric γ ij , the Einstein tensor G ij , the Riemann tensorR ijkl , the Ricci tensorR ij and the Ricci scalarR [13,34,59,70]. The renormalized stress tensor consists of two components T ren , the standard holographic stress tensor on the cutoff surface r = r c , T ij , and the corresponding curvature contributions of the counterterms eq. (2.29), denoted by C ij . In order to remove the divergences in the action eq. (2.26), we add counterterms eq. (2.29) -which are scalar quantities -to the action. The divergences arise when we push the cutoff surface to the AdS boundary. On the cutoff surface, the stress tensor is automatically regularized and reads ). The c with λ being the size of the deformation. In d = 2, the TT deformation satisfies the factorization property [3] In general, this would no longer be true in higher dimensions. However, since we are working in a CFT at large N , the factorization property is still valid at d > 2 [4,13]. In order to derive the deforming operator X d = − 1 d λ d T i i , we derive the trace flow equation for the holographic stress tensor using Einstein's equations. This is accomplished by using the general form of the holographic stress tensor (3.2) and then by using the Hamilton constraint in appendix A. The d-dimensional expression is given by where η = 1 corresponds to the AdS case and η = −1 to the dS case, respectively. Furthermore, the α d are dimensionless numbers and correspond to the number of degrees of freedom in the field theory. We denote the coupling of the deformations by λ d . The parameters of the field theory are related to the parameters on the gravity side by [34] In a two-dimensional CFT, the central charge c is related to the bulk quantities as [71] c = 12πL P . (3.8) For example in d = 2 dimensions the deforming operator reads which matches the result of [36]. Sphere partition functions and entanglement entropy Let us consider a generic seed CFT in d-dimensions at large central charge on a sphere with radius R. Our goal is to compute the exact sphere partition function Z S d . From the sphere partition function, it is straightforward to calculate the entanglement entropy for antipodal points on the sphere as was outlined by [12]. What we are interested in is the change of the partition function in response to deformations of the sphere. As argued in [12,34] and since changes of the metric manifest in the vacuum expectation value of the stress tensor, the symmetries on the sphere dictate we can write the deformation of the d−dimensional sphere partition function as from which we can compute the entanglement entropy using the replica trick. It now becomes apparent why we chose dS slicing for (A)dS in the first place: the dS ground state corresponds to the Euclidean path integral on the sphere S d . We may apply the replica trick by considering the n-folded cover of the sphere of radius R [12,34] with θ j ∈ [−π/2, π/2] for j = 1, . . . , d − 1 and θ d ∈ [0, 2π]. For simplicity, we set d = 2 for now which reduces (3.12) to ds 2 = R 2 (dθ 2 + n 2 cos(θ) 2 dφ 2 ), (3.13) with φ ∈ [0, 2π] and θ ∈ [−π/2, π/2]. The angle θ is chosen so that it corresponds to the angle θ in the gravitational theory; it is the azimuthal angle on the sphere and the antipodal points θ = −π/2 and θ = π/2 correspond to the north and south pole of the sphere. 
Since the entangling surface consists of two antipodal points we may -due to the rotational symmetry -continuously vary n which allows us to compute the entanglement entropy with (3.14) In order to calculate the sphere partition function (3.11), we compute the expression for ω d using the flow equation With help of the deforming operator X d (defined in eq. (3.6)), we end up with a quadratic equation for ω d . To derive an explicit expression for ω d , we have to evaluate the stress energy tensor and the counterterms for a d-dimensional sphere of radius R. The quadratic equation yields a positive and a negative solution so there are two possible signs for the TT deformation. For now, we will denote the signs of the square root simply by s. In the case d = 2 all the c s are zero and it is straightforward to check that Since we are working in a large N CFT where the coupling of the deformation λ 2/d is small but N λ 2/d is finite, we may write (in d > 2) the expansion parameter as t d = α d λ 2/d . With this, we find the expression for the sphere partition function in d > 2 to be From the expression for ω d , it is straightforward to calculate the entanglement entropy. The procedure goes as follows: we are taking ω from eq. (3.15) and (3.16), respectively and inserting them into eq. (3.11); this yields an expression for the R-derivative of the partition function. To obtain the entanglement entropy, we must first integrate the expression and plug the result into eq. (3.14). However, this integration results in an integration constant that must be fixed before proceeding. We may fix the integration constant either in the IR or the UV region of the theory. In d = 2, we may follow [36] and fix the integration constant by matching the partition function for AdS in the R 2 /λ 2 → ∞ limit to the CFT partition function The integration constant for the dS case is obtained by matching the partition function in the R 2 /λ → 0 case to the AdS partition function 5 . However, the CFT partition function is not known in d > 2 and we chose to follow [12], which fixes the integration constant by demanding the logarithm of the partition function to vanish for R 2 /λ → 0. This leads to a trivial theory in the UV. Note that this is only possible in presence of the UV cutoff since the theory does not change as a function of the scale at arbitrarily short distances anymore. There is a different way to extract the universal quantities of the entanglement entropy as the approach we took in section 2.4 based on [61,62]. In analogy to [34,36,72,73], we also compute the cutoff independent renormalized entanglement entropy 6 (3.18) Entanglement Entropy from field theory in general dimensions After reviewing the methods of how to compute the entanglement entropies for a TT deformed field theory, we are able to derive the expressions for the entanglement entropy in general dimensions. We eventually compare the expressions derived from field theory with the ones we computed on the gravity side in section 2. d = 2 Let us reproduce the cases which are known in the literature so far. The results for the entanglement entropy for a TT deformation in a two-dimensional CFT are given in [12] (AdS) and [36] (dS), respectively. We have where we denote the sign of the square root of the TT deformation by s. With ω 2 at hand, we may compute the partition function of the deformed CFT using eq. (3.11). As argued in the previous section we choose the integration constant so that log Z S 2 (R = 0) = 0. This yields (4. 
2) The entanglement entropy for two antipodal points on the sphere follows from the partition function via eq. (3.14) (for the negative sign of the square root) In two dimensions, we may calculate the cutoff independent renormalized entanglement entropy immediately from the knowledge of the derivative of the partition function. Plugging (4.1) into eq. (3.11) and combining eq. (3.14) and (3.18), we find the renormalized entanglement entropy which plays the role of the running C-function in RG flow as Comparison to the result from holography The entanglement entropy from holography is given by eq. (2.15). In order to compare to the field theory results, we use the dictionary relating the holography parameters with the field theory ones. This is done by 4πl/ p = c/3 and c λ 2 = 3 π L 2 with the corresponding renormalized entanglement given by the R-derivative C = dS/dR In d = 2 dimensions, we find the entanglement entropy from field theory matches the entanglement entropy from holography exactly for η = 1, s = −1 (AdS) and η = −1, s = −1 (dS). 3 ≤ d ≤ 6 The calculation for higher dimensions follows analogous to the calculation in d = 2: compute the corresponding partition function by integrating the corresponding (3.16), use the partition function to compute the entanglement entropy according to (3.14) and eliminate schemedependent finite counterterms by differentiating using the prescription of (3.18). To avoid redundancy, we have moved the calculation to appendix B and only display the results here. The bare holographic entanglement entropies in 3 ≤ d ≤ 6 match the field theoretic results only up to area terms, which are removed if we compute the renormalized entanglement entropies by taking derivatives with respect to R according to the prescription of [72,73]. The reason for this discrepancy is due to the fact that we took counterterms into account in our field theory calculation which should also effect the gravity calculation since the counterterms are added on the cutoff slice. The area terms which appear with a negative sign in the field theory calculation -and are thus subtracted from the result -can be traced back to the counterterms. Since the TT deformation acts as a UV regulator, all quantities are already finite, especially the usually divergent leading area term contributions. However, if we take the contributions of the counterterms (2.29) also in the gravity calculation into account, we see an exact match. In the following, we match the entanglement entropies obtained from the field theory calculation with the entanglement entropies obtained from holography (taking counterterms on the gravity side into account). Note that these expressions lack the so-called area terms which are removed by the renormalization (4.10) All the entanglement entropies calculated on the field theory side exactly match the entanglement entropies calculated on the gravity side of the duality for the negative sign of the square root. Furthermore, the entanglement entropies dual to the gravity theory in AdS match the entanglement entropies calculated in [34]. Conclusions In this work, we calculated the entanglement entropies for generic intervals in a (A)dS spacetime in presence of Dirichlet wall. This hard radial cutoff chops off the asymptotic UV region of the gravitational theory which is proposed to be the holographic dual to a CFT deformed by the irrelevant TT operator. 
Starting from one parameter families of analytical solutions for the entangling surfaces in (A)dS D , we derived the associated entanglement entropies using the recipe of Ryu and Takayanagi. The entanglement entropies for antipodal points in (A)dS were already known in the literature [12,34,36]. Surprisingly, we may write the entanglement entropies for generic intervals formally in the same way as the already known results by introducing an effective radius R eff = R cos(β ). The basic definition of the cosine shows that the effective radius corresponds to a sphere where the endpoints of the interval are antipodal. Geometrically, this corresponds to the scenario where we follow the TT trajectory (move the cutoff inwards for intervals smaller than antipodal points) until the points of the generic interval are antipodal on a sphere with radius R eff . In the dual field theory this means, we may compute the entanglement entropy of generic intervals on the sphere by the sphere partition function as explained in [12], if we follow the TT trajectory. Note that in the AdS case this corresponds to a rotation in the spacetime followed by a special conformal transformation which brings the interval to antipodal points on the circle with radius R cos(β), as was illustrated in figure 2. In the limit of pushing the Dirichlet cutoff to the boundary, we find agreement with the results of Casini, Huerta and Myers [51] (see eq. (2.20)-(2.24)). The authors of [42] calculated perturbative corrections in the TT coupling to the entanglement entropy in a two dimensional field theory for finite intervals. They found that the leading order agrees with [51] while the first order corrections in the TT coupling vanish, as we see in (2.20). In the second part of this paper, we derived the entanglement entropies for antipodal points for a d-dimensional field theory in context of DS/dS holography. The TT deformations play an interesting role in DS/dS holography since they provide a mechanism to better understand the possible CFT dual to dS [36]: with the help of TT deformations, we move the boundary inwards to the IR region; in the IR region the AdS and dS spacetimes are indistinguishable and the AdS/CFT-correspondence provides us with a CFT dual. In the IR, we may trigger the flow by using the TT deformation with the opposite sign. This time, we use the TT deformation, derived for the dS trajectory and we are able to move the boundary back to its original place. In this way, we are able to "grow back" the spacetime we previously cut off but with a different sign for the cosmological constant. In section 3, we extended the work of [36] and derived the TT deformation in the context of DS/dS in general dimension. Compared to the deformation in AdS/CFT, the deformation gets an extra contribution proportional to the cosmological constant and the dimension of the field theory. We furthermore derived the (renormalized) entanglement entropies in general dimensions, which match our results derived from the gravitational theory perfectly. In particular, we worked out the contributions of the counterterms on the UV slice on the entanglement entropy in the gravitational theory by considering the Wald entropy associated with the counterterms. These contributions match exactly the contributions of the counterterms in our field theory calculations. The results thus shed light on the seeming mismatch of the entanglement entropies observed in [34]. As the authors observed correctly, both bare entanglement entropies match. 
However, if we switch on counterterms in the field theory calculation, the Ryu-Takayanagi prescription does not give the correct answer anymore; rather, we have to take the corrections of the counterterms into account by considering their Wald entropy. A d-dimensional TT deformation in DS/dS The the radial Einstein equation for a (d+1)−dimensional gravitational theory in with metric (1.1) in presence of a Dirichlet wall reads in terms of the extrinsic curvature K ab with the induced d-dimensional Ricci scalarR (d) . We may write the trace of the energy momentum tensor (3.2) together with the counterterms (3.3) as which we may solve for the extrinsic curvature K by considering the specific combination The deforming operator (3.6) follows immediately by using eq. (A.1). Note that in AdS the term ∼ d(d − 1)/L cancel, while in dS they have the same sign and lead to an extra contribution to the deforming operator. B Entanglement entropies from field theory In this section, we present the computation to the results quoted in eq. (4.10). Since the computation is very repetitive, we focus on displaying the relevant steps. The procedure in d = 3 is very similar to the case d = 2; we may read off ω 3 from eq. (3.16) It is straightforward to determine the corresponding partition function, given by The second term is chosen to ensure log Z(R = 0) = 0. Finally, we find with η 2 = 1 and eq. (3.14) and the negative sign of the square root The scheme independent renormalized entanglement entropy is obtained from the entanglement entropy by using (3.18) and reads in d = 3 dimensions Comparison to the result from holography The entanglement entropy from holography is given by eq. (2.16) and may be expressed in terms of field theory quantities using eq. (3.7) (with 6 λ 3 = 2 We see that the field theory calculation and the results from holography match up to a scheme dependent area term ∼ −4 t 3 π 2 R/λ 3 . We obtain the exact same contribution from the Wald entropy associated with the counterterms given in eq. (2.34) which yields (in field theory variables) exactly ∼ −4 t 3 π 2 R/λ 3 . The entanglement entropies on both sides match, if the contributions of the counterterms -which have been added to the field theory side -are also taken into account in the gravitational theory. Similar to the literature, we may compare scheme independent quantities aka the renormalized entanglement entropy. From the entanglement entropy, we immediately obtain the renormalized entanglement entropy by using eq. (3.18) We see that the results from holography and field theory perfectly match one another for the negative sign of the square root η = 1, s = −1 (AdS) and η = −1, s = −1 (dS), respectively. B.2 d = 4 In d = 4 we have using eq. (3.16) We can compute the sphere partition function by integrating with respect to R, where we fix the integration constant by demanding that log Z S d (R = 0) = 0 We obtain the entanglement entropy by using the replica trick (3.14). This gives us Again, this matches exactly our field theory computation up to a scheme dependent area term −8π 2 R 2 t 4 /λ 4 for the negative sign of the the square root. The area term with the negative sign comes from adding counterterms to our action. If we also consider the contributions of the counterterms in the gravitational theory eq. (2.35), we see that we observe the exact same term there and thus the results of both sides match. From the holographic entanglement entropy, we may derive the scheme independent entanglement entropy using eq. 
(3.18). We see that the renormalized entanglement entropies from field theory and holography in d = 4 match perfectly for η = 1, s = −1 (AdS) and η = −1, s = −1 (dS), respectively.
251001000
s2orc/train
v2
2022-07-24T15:14:13.616Z
2022-07-22T00:00:00.000Z
In Situ Laser Light Scattering for Temporally and Locally Resolved Studies on Nanoparticle Trapping in a Gas Aggregation Source Gas phase synthesis of nanoparticles (NPs) via magnetron sputtering in a gas aggregation source (GAS) has become a well‐established method since its conceptualization three decades ago. NP formation is commonly described in terms of nucleation, growth, and transport alongside the gas stream. However, the NP formation and transport involve complex non‐equilibrium processes, which are still the subject of investigation. The development of in situ investigation techniques such as UV–Vis spectroscopy and small angle X‐ray scattering enabled further insights into the dynamic processes inside the GAS and have recently revealed NP trapping at different distances from the magnetron source. The main drawback of these techniques is their limited spatial resolution. To understand the spatio‐temporal behavior of NP trapping, an in situ laser light scattering technique is applied in this study. By this approach, silver NPs are made visible inside the GAS with good spatial and temporal resolution. It is found that the argon gas pressure, as well as different gas inlet configurations, have a strong impact on the trapping behavior of NPs inside the GAS. The different gas inlet configurations not only affect the trapping of NPs, but also the size distribution and deposition rate of NPs. Introduction Noble metal nanoparticles (NPs) are used in many applications, ranging from catalysis, [1] photocatalysis, [2][3][4] optics, [5] and resistive switching [6][7][8][9][10][11] to sensors. [12][13][14][15][16] Especially the optical properties are well tunable because they depend strongly on the shape, size, size distribution, and the surrounding medium. [17,18] A lot of synthesis methods for NPs are available and range from biological over chemical to physical processes. [19,20] The most often used strategy is the solutionbased chemical synthesis. This approach has the drawback that the synthesized NPs are contaminated with, for example, surfactants. [19] Physical vapor deposition (PVD) techniques stand out with their extremely high purity of the synthesized NPs. Moreover, it is much easier to produce alloy particles and compound particles with tailored composition. [21] The PVD strategies range from surface energy-related self-organization of NPs on solid substrates [22][23][24][25] and in liquids [26][27][28] to gas phase synthesis. [8,29] The gas phase synthesis relies often on so-called gas aggregation sources (GASs). They encompass laser ablation, [29] pulsed microplasma cluster source, [8] and magnetron sputtering. [30] The GAS equipped with a magnetron was firstly developed by Haberland et. al. in 1992. Here a magnetron is operated at comparatively high pressures (typically between some 10 Pa and few 100 Pa) in contrast to normal magnetron sputtering for the preparation of thin films. Due to the higher pressure, the mean free path of the sputtered atoms is shorter, which enables three-body collisions (two sputtered atoms are colliding with one gas atom) and leads to the formation of dimers. [31] This enables further attachment of sputtered atoms, and the dimers can grow further to clusters and NPs. These NPs are guided by the drag force due to the gas flow outside the GAS into the deposition chamber, where they can be deposited onto various substrates. 
[30] Although the fundamental three-body collision process and consecutive nucleation, growth, coalescence, and transport are well discussed in literature, still not all ongoing processes inside such sources are completely understood. For example, the impact of the gas flow pattern inside a GAS was for a long time only superficially investigated. Nevertheless, previous studies have shown that the gas flow pattern inside a GAS plays a crucial role. [32][33][34][35][36] For example if low-velocity regions are present inside a GAS, NPs can get lost in the chamber walls or the target. [36] This effect strongly impacts the material conversion efficiency of a GAS. To improve the performance of GAS sources, in the last decade several in situ techniques were utilized already to analyze the growth and transport of NPs inside the GAS. Examples are in situ small angle X-ray scattering (SAX) [37,38] and in situ UV-Vis. [33,36] Both techniques provided further insights into the dynamic processes inside the GAS and have revealed NP trapping at different distances from the magnetron source. Even though these techniques can provide information about the NP size, the techniques suffer from their low spatial resolution. Therefore, a technique with good spatial and temporal resolution to investigate the processes inside the GAS is urgently needed. Different laser light scattering (LLS) techniques have been successfully used to investigate NP growth and transport in situ in plasmas but not inside a GAS. Some methods rely on Mie scattering [39][40][41] and others on Rayleigh scattering. [42][43][44][45] Mie scattering techniques are Mie ellipsometry, angular-resolved Mie scattering, and 2D imaging Mie ellipsometry. By these methods, NPs of radii between 80-200 nm can be detected, for example, inside a dusty plasma. By evaluation of the polarization also the size distribution of the NPs can be evaluated in situ inside the plasma. [39][40][41] The limit for the NP diameter between Mie scattering and Rayleigh scattering is about 1/10 of the wavelength. Because the usual size of NPs which are prepared with a GAS is typically below 50 nm and the wavelength of the applied laser is 532 nm, Mie scattering techniques are not applicable to the GAS. Therefore, in this study LLS based on Rayleigh scattering has been used. A laser plane through the GAS is created and in a 90° configuration a camera with color filter is mounted to the GAS. The scattered light by the NPs can be detected and so the location of NPs inside the GAS can be observed. In this way the LLS technique was often used in conventional RF dusty plasmas but not in a GAS. [42][43][44][45] Here no information about the size distributions can be found, but other techniques like in situ SAX or in situ UV-Vis can provide this information. However, they cannot provide precise information about the location of NPs. For that reason, LLS can be seen as a complementary tool to understand the ongoing processes inside a GAS. Here, LLS of Ag NPs will be utilized to extract better spatial information about the trapping positions of NPs inside a custom-built GAS based on a typical design. To study the impact of the Ar gas flow pattern on the NP formation and transport process, three different types of gas inlets are investigated. The results from the LLS for different gas inlets will be complemented by size distributions obtained by scanning electron microscopy (SEM) measurements of the deposited NPs and the measured deposition rates. 
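Before describing the setup, it is instructive to recall quantitatively why the elevated pressure matters for nucleation. The estimate below is an order-of-magnitude illustration only and is not taken from the cited literature; the gas temperature and the effective hard-sphere collision diameter of Ar are assumed values.

```python
import numpy as np

# Order-of-magnitude estimate (not from the paper) of the mean free path in Ar
# at typical GAS pressures, using the simple hard-sphere kinetic-theory formula
# lambda = k_B*T / (sqrt(2)*pi*d^2*p). The collision diameter d is an assumed
# effective value for Ar; sputtered Ag atoms behave similarly in order of magnitude.
k_B = 1.380649e-23      # J/K
T = 300.0               # K, assumed gas temperature
d = 3.6e-10             # m, assumed effective collision diameter of Ar

for p in [1.0, 10.0, 100.0, 200.0]:   # Pa
    mfp = k_B * T / (np.sqrt(2) * np.pi * d**2 * p)
    print(f"p = {p:6.1f} Pa  ->  mean free path ~ {mfp*1e3:.3f} mm")
# At the 100-200 Pa used in a GAS the mean free path drops to a few tens of
# micrometers, far below the source dimensions, which is what makes three-body
# collisions and NP nucleation possible.
```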
Setup of the Experiment

In typical gas aggregation deposition experiments, GAS sources commonly have only 4 flanges. In this common 4-flange setup, one flange is used to attach the magnetron and the opposite flange carries the orifice. Additionally, two ports perpendicular to the main axis of the GAS are used for analytical purposes. [21,33,37,38,46] Computational fluid dynamics (CFD) simulations have shown that additional ports, which do not taper the cross-section of the GAS, do not significantly influence the gas velocity distribution. [33,46] Therefore, additional analytical ports were considered uncritical for the growth and transport of NPs. For this study, more than two analytical ports were needed, which motivated the choice to build a GAS based on a CF 63 cross with 6 ports (Figure 1). On the back-left port, a 2-inch custom-made magnetron was mounted. The design of the magnetron was based on the "Ionix" magnetron series from the company Thin Films Consulting. On one of the ports (front right in Figure 1a), a cone with a 3 mm orifice was installed and connected to the deposition chamber. Two other ports of the GAS, located directly next to each other, were equipped with glass windows for the optical scattering system. On the remaining ports, the pressure gauge and a blind flange were installed. The deposition chamber was equipped with a QCM (quartz crystal microbalance) and a load-lock, which enables fast sample transfer without breaking the vacuum. The GAS was mounted to the deposition chamber, with the orifice connecting the two. The deposition chamber was evacuated by a turbo pump (Pfeiffer, HiPace 60 P) backed by a scroll pump (Edwards, nXDS 6i); the base pressure is in the range of 10⁻⁷ mbar. The load-lock was equipped with a turbo pump (Pfeiffer, TMU 071 P) backed by a scroll pump (Edwards, nXDS 6i). The gas flow was controlled by a mass flow controller.

The custom-made magnetron enables three different gas inlet configurations; for this purpose, the original design of Thin Films Consulting was professionally overhauled. The three inlet configurations are shown in Figure 1b. The first configuration, termed normal inlet, is based on the default design of Thin Films Consulting: the gas is injected into the chamber between the magnetron and the ground cap. The custom-made magnetron used here additionally allows the gas to be injected through the middle of the target; this is called the middle inlet. Here, the difficulty was to guide the Ar gas between the ground cap and the magnetron and finally under the target through the central bore (5 mm diameter) inside the target. It was important to prevent large stagnation pressures between the magnetron and the ground cap, because these can cause plasma ignition at this undesired point. The bore was closed with a stainless-steel mesh at the same electrical potential as the target surface; otherwise, plasma ignition can take place inside the bore. The third configuration is the behind inlet, which is often found in the literature. [33,37,38] Here, the gas inlet is located behind the magnetron. Ag was always used as the target material (99.99% purity, Kurt J. Lesker, 2 inch). An MDX 500 from Advanced Energy supplied the DC power to the magnetron; it was operated in power regulation mode at a fixed 300 W.
The scattering system consists of a laser with a wavelength of 532 nm and a power of 450 mW (Roitner Laser Technik, RLTMGL-532 1-450 mW) and a CMOS camera (XIMEA, MQ042CG-CM) with a color filter, which transmits light with a wavelength of 532 nm but blocks most of the light of other wavelengths generated by the plasma. The frame rate of the camera was fixed at 25 frames per s. In front of the laser, a grid was installed, which spreads the single beam into a laser plane with an opening angle of 30°. The laser and the camera were installed at a 90° angle; the laser plane includes the center line from the target to the orifice and lies normal to the camera (Figure 1). For SEM investigations, NPs were deposited inside the vacuum chamber at a distance of 16 cm from the GAS exit orifice. As substrate material, p-doped, (100)-oriented Si wafer pieces with native oxide (cut to 1 × 1 cm², SiMat) were used. The SEM analysis was done with a Zeiss Ultra Plus microscope.

Figure 1. a) Setup of the laser light scattering experiments in the GAS. The GAS is based on a CF63 cross with 6 flanges. On the back left flange, the magnetron is mounted. On two side flanges, two windows are mounted; on one window the camera with a color filter for green light is mounted and on the other window the laser. The laser beam is split by a grid so that the plane from the center of the magnetron to the orifice is illuminated. b) The three different types of gas inlets. For the normal gas inlet, the gas is inserted between the magnetron and the ground cap so that the gas is directed onto the target surface. The middle gas inlet means that the gas is injected through the middle of the target. The third gas inlet configuration is termed behind inlet, which means that the gas is injected somewhere behind the magnetron and the gas flows around the ground cap.

Image Formation

First, the image formation and the image processing will be described. In general, three different effects contribute to the recorded raw image of the camera: the reflections of the incident laser light from the chamber walls, the plasma emission, and the scattered light from the NPs. The latter contains the information about the location of NPs, which is of interest for this study. Information about the locations of NPs can be obtained due to the Rayleigh scattering phenomenon. The Rayleigh scattering equation tells us that the scattering intensity (I) depends on the particle size (a), the refractive index of the medium (n_med), the intensity of the incident light (I_0), the distance from the scattering object to the detector (d), the wavelength of laser light in vacuum (λ), the particle relative refractive index (m), and the test angle (θ). [47] For the LLS method, the two most important varying parameters are the particle size and the particle density. The scattering equation shows that the scattering intensity depends on the particle size to the sixth power. Since the equation gives the scattering intensity for one particle, the scattering intensity depends only linearly on the particle density. The dependence on the size of the NPs and on the particle density is important for the interpretation of the LLS images. The different contributions to the raw image are schematically shown in Figure 2. The plasma emission and the reflections from the chamber walls do not contain the desired spatial information about the NPs. Therefore, the videos have to be processed after the experiments before further analysis.
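For orientation, the scattering law referred to above can be written as a small helper function. The sketch below implements the textbook Rayleigh form with the parameters listed (particle size, n_med, I_0, detector distance, vacuum wavelength, relative refractive index m, and angle θ); the function name, the default values, and the real-valued m = 1.5 are illustrative assumptions, and for Ag the relative refractive index is complex, which is why the Lorentz-Lorenz factor is taken as a magnitude squared.

```python
import numpy as np

def rayleigh_intensity(D, theta, I0=1.0, dist=0.1, lam_vac=532e-9, n_med=1.0, m=1.5):
    """Textbook Rayleigh scattering intensity of a single sphere of diameter D.

    All quantities in SI units; m is the refractive index of the particle
    relative to the medium (complex for a metal such as Ag, in which case the
    Lorentz-Lorenz factor is taken as a magnitude squared).
    """
    lam = lam_vac / n_med                          # wavelength in the medium
    lorentz = np.abs((m**2 - 1) / (m**2 + 2))**2   # |(m^2-1)/(m^2+2)|^2
    return (I0 * (1 + np.cos(theta)**2) / (2 * dist**2)
            * (2 * np.pi / lam)**4 * lorentz * (D / 2)**6)

# Illustration of the D^6 dependence at the 90 degree observation angle used in
# the experiment: doubling the diameter raises the scattered intensity 64-fold.
for D in [10e-9, 20e-9, 40e-9]:
    print(D, rayleigh_intensity(D, theta=np.pi / 2))
```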
The color filter used in front of the camera already filters out the main portion of the plasma emission but still, a small portion contributes to the image. Since the chamber walls were curved, always some reflected laser light can enter the camera. In the end, the remaining plasma emission and the reflected light from the chamber walls have to be subtracted to obtain the information about the location of the NPs. This was done with a Matlab script and will be described in detail in the next chapter. Image Recording and Processing To extract only the signal of NPs out of the video, a MatLab script was used to subtract the emission and the scattered light from the chamber walls. The procedure was as follows: The camera recording was started prior to the experiment. The program detects the start of the plasma discharge and synchronizes the recording of the camera with the plasma ignition. Half a second after the plasma was ignited a background frame was taken. This frame contains the plasma emission and the reflected light from the chamber walls but no signal from NPs. A previous study has shown that it takes some seconds until NPs were detected inside a GAS. [36,38] For that reason, half a second was chosen for the background frame. This background frame was subtracted afterwards from the whole video. Then the program transfers the video into a color plot to increase the visibility in comparison to a mono-colored picture ( Figure 2). In addition, the program sums up all pixel values for each frame after the background subtraction. These summed-up values are called cumulative intensity and can be plotted over time to analyze the temporal development of the intensity within one experiment. Furthermore, one can integrate over the whole experimental time to obtain the total intensity of one experiment to compare experiments with each other. Results and Discussion This study is structured in four different sections. In the first section, the temporal changes in the spatial distribution of NPs inside the GAS will be discussed for one gas inlet configuration with a specific flow. By this example, the dynamic processes inside the GAS will be visualized and discussed. In the second part, the spatial NP distribution inside the GAS for three different gas inlets and five different gas flows and pressures will be evaluated. The aim is to study the influence of GAS geometries and different gas flow patterns on the trapping behavior of NPs inside the GAS. In the third section, the change in the size distribution of the deposited NPs will be discussed to understand the dependence between the growth processes of NPs and the gas flow pattern. In the last section, the impact of different gas inlets and pressures on the deposition rate will be analyzed to understand the impact of different gas inlets and flows on the efficiency of the NP synthesis. Dynamic Processes of NP Formation and Transport In the first part of this study, an exemplary typical LLS image time series for the middle inlet configuration will be discussed. observed. These include the center region in the middle of the picture (marked as 1) and the edge regions left and right in the image (marked as 2). Directly after starting the magnetron discharge, the NP formation process is far away from equilibrium, as at 0 s there are no nuclei or preformed particles in the source. Therefore, the early stages of the gas phase synthesis are particularly interesting to study. 
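A minimal Python analogue of this processing chain is sketched below; it is an illustration and not the MatLab script used for the analysis, and it assumes the video has already been loaded as a grayscale numpy array whose first frame coincides with plasma ignition.

```python
import numpy as np

def process_lls_video(frames, fps=25.0, t_background=0.5):
    """Minimal analogue of the processing described in the text.

    frames : ndarray of shape (n_frames, height, width), grayscale intensity,
             with frame 0 taken at the moment the plasma is ignited.
    Returns the background-subtracted frames, the cumulative intensity per
    frame, and the total intensity of the experiment.
    """
    bg_index = int(round(t_background * fps))     # frame ~0.5 s after ignition
    background = frames[bg_index].astype(float)   # plasma emission + wall reflections
    corrected = np.clip(frames.astype(float) - background, 0, None)
    cumulative = corrected.sum(axis=(1, 2))       # one value per frame
    total = cumulative.sum()                      # integrated over the whole video
    return corrected, cumulative, total

# Example with synthetic data: 30 s of video at 25 fps and 100 x 100 pixels.
rng = np.random.default_rng(0)
frames = rng.integers(0, 50, size=(750, 100, 100))
corrected, cumulative, total = process_lls_video(frames)
print(cumulative.shape, total)
```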
For this reason, the time period between 0 and 5 s is depicted in detail in Figure 3c. The time 0 s corresponds to the moment where the plasma is switched on. After 2 s scattered light from the NPs becomes visible and the intensity is increasing up to 15 s. Not only the intensity is increasing over time but also the shape and the dimensions of the trapped NPs in the edge regions are changing. The NP cloud looks like vortexes are present. The vortex behavior is much more visible in the processed videos. Therefore, one video for each inlet configuration can be found in the supporting information. In Figure 3c only at the edge regions, NPs are visible and not in the center regions. Additionally, the image series shows that the growth and transport of NPs inside the gas is a highly dynamic process. Finally, Figure 3b shows the cumulative intensity of the whole LLS image over time. This means that all pixel values of the images are summed up for each frame. This plot indicates also that the processes of NP formation and transport inside the GAS are time-dependent. In the beginning, the cumulative intensity is increasing until ≈5 s before it reaches a local maximum and shortly decreases until ≈10 s. Then it increases further until ≈15 s and then decreases until the plasma is switched off. That the intensity increases, in the beginning, is due to the NP formation and further growth of these NPs, which can be also seen in the LLS images from 0 to 5 s. The decrease is caused by fewer NPs or smaller NPs, respectively. Since the Rayleigh scattering depends strongly on the size of the NPs (to the power of 6) but also on the number of particles, this method cannot distinguish between the impact of size and number of NPs (discussed in detail in Section 2.3). One other reason could be that the NPs are simply moving in and out of the inspected region which is related to the small width of the laser plane. By the example of the time series of LLS images, it is presented how complex the NPs' growth and transport behavior in GAS is. Since prior to the deposition no metal atoms and no NPs are present in the gas phase it will take a certain time until it is possible, that nucleation, cluster growth, and transport of NPs out of the growing region are in a steady state. Perhaps a stable steady state can never be reached, because of the fast kinetics and also increasing temperature of the chamber walls. Nevertheless, the fundamental features of the LLS image stay relatively constant over the whole deposition time, which indicates that the NPs are trapped by an interplay out of drag forces and electromagnetic forces. This is in line with earlier reports on trapping of NPs inside the GAS. [33,34,[36][37][38] In these earlier studies, techniques like in situ UV-Vis or in situ SAX were used to analyze the growth and transport of NPs. These techniques average data out of the whole interaction volume of the light beam or X-ray beam. In comparison to these studies, LLS has the distinct advantage, that the signal originates from one two-dimensional plane out of the GAS. This enables the exact localization of NPs inside the GAS although the size and the number of NPs cannot be evaluated. Also, the fact that the cumulative intensity drops down extremely fast after the switch-off, shows that the NP trapping must be related to electromagnetic forces which are missing when the plasma is switched off. 
Assuming the NPs are trapped only because of turbulences inside the gas flow, the NPs would still stay in the turbulences when the plasma is switched off because the gas flow is not much affected by the plasma. The fact that the NPs are immediately vanishing when the plasma is switched off indicates trapping due to the interplay of electromagnetic forces and drag force. [33] Impact of Gas Inlet and Gas Flow/Pressure on the NP Trapping Behavior After having discussed the fundamental features of the dynamic phenomena after starting the gas phase synthesis of NPs, in the following section it will be shown how different gas inlet configurations and different Ar gas flows and pressures will change the trapping behavior of the NPs inside the GAS. Figure 4 shows 15 LLS images taken after 30 s of operation and all with the same discharge power. 30 s were chosen because the gas phase synthesis process approaches an equilibrium, where the relative intensity distribution between the different trapping locations does not change significantly over time. Each row corresponds to one pressure and gas flow, which is increasing from left to right. Each line corresponds to one inlet configuration: first the middle configuration, then the normal inlet configuration, and in the last row the behind configuration. The first point to make is that the intensities for all inlet configurations are increasing with an increase in gas flow/ pressure. This can be caused by more efficient NP nucleation at higher pressures which produce more NPs. [48] One can also distinguish between two different trapping regions. One is located at the edge region and one is located in the center region like it was shown in Figure 3a. For the behind configuration, the NPs are only trapped in the center region, and trapping was only observed for high pressures of 183 and 204 Pa. In contrast to that, the middle configuration shows only trapping at the edge regions, and trapping was observed from 141 to 204 Pa. The normal inlet configuration shows a transition from vortexlike trapping at the edge regions (at lower pressures) towards a superposition of both trapping regions (at higher pressures). For better visualization of the exact positions of the NP trapping regions, the field of view of the camera is divided into 9 quadrants (Figure 5a). Then the intensities of the quadrants I and III (representative for edge regions) are summed up and divided by the sum of quadrants V and VIII (representative for center region) for each frame (Figure 5b). This is done because the edge trapping regions are always located in the quadrants I and III. The trapping of NPs in the central region is always located in the quadrants V and VIII. Therefore, for the calculation of the ratio of intensities only the quadrants I, III, V, and VIII are considered. In every time bin, 25 sequential frames are taken to calculate the mean intensity values for regions I, III, V, and VII. One bin contains 1 s of time and the bin 1 s, ranges from 0 to 1 s. For each bin, the ratio of intensities is calculated from these mean intensity values. The color scale for the time bins goes from black over red to yellow and represents increasing deposition time. By this method the time dependence of the ratio of intensities becomes visible. If the calculated ratio is clearly smaller than 1, the center regions are dominating. If the ratio is clearly above one, the edge regions are dominating. 
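The quadrant analysis just described can be condensed into a short routine. The sketch below is an illustration rather than the actual analysis code; in particular, the row-wise numbering of the 3 × 3 quadrants starting at the top left is an assumption about the labeling in Figure 5a.

```python
import numpy as np

def edge_to_center_ratio(frames, frames_per_bin=25):
    """Ratio of summed edge-quadrant intensity (I + III) to center-quadrant
    intensity (V + VIII) per time bin, following the procedure in the text.

    frames : background-subtracted frames, shape (n_frames, height, width).
    The 3x3 quadrant labeling is assumed to run row-wise, I..IX, starting at
    the top left (our assumption for Figure 5a).
    """
    n, h, w = frames.shape
    rows = [slice(0, h // 3), slice(h // 3, 2 * h // 3), slice(2 * h // 3, h)]
    cols = [slice(0, w // 3), slice(w // 3, 2 * w // 3), slice(2 * w // 3, w)]
    quad = {k + 1: (rows[k // 3], cols[k % 3]) for k in range(9)}   # quadrants I..IX

    ratios = []
    for start in range(0, n - frames_per_bin + 1, frames_per_bin):
        chunk = frames[start:start + frames_per_bin].astype(float)
        mean_int = {q: chunk[:, r, c].mean() for q, (r, c) in quad.items()}
        edge = mean_int[1] + mean_int[3]        # quadrants I and III
        center = mean_int[5] + mean_int[8]      # quadrants V and VIII
        ratios.append(edge / center)
    return np.array(ratios)
```

Ratios clearly above 1 then indicate edge-dominated trapping and ratios clearly below 1 center-dominated trapping, as plotted in Figure 5b.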
Assuming that the size distribution of NPs in all quadrants is similar, the ratio indicates at which position the majority of NPs are located. Figure 5b shows for the normal configuration that at low pressure no clear trend in the location of trapped NPs is present, which is related to the low LLS signal in this experiment. For pressures of 141 and 163 Pa, the majority of NPs are clearly located at the edge regions. With increasing time, the ratio of intensities increases, which shows that more and more NPs are located in the edge regions compared to the center regions. At higher pressures the trend is different: here the majority of NPs are trapped in the center region, which is most probably caused by the increasing pressure and flow, which leads to higher drag forces and finally shifts the trapping position from the edge regions to the center region. The behind configuration shows no clear tendency for pressures from 118 to 163 Pa, which is related to the low signal, as can also be seen in Figure 4. For pressures of 183 and 204 Pa, the majority of NPs are trapped in the center region. That no trapping is observed at the edge regions in the behind configuration, in contrast to the normal configuration, is most probably caused by a different gas velocity distribution inside the GAS. CFD simulations have shown that the velocity is always highest at the inlet and the outlet orifice. [32][33][34][35][36] In regions that are not in the direct path between inlet and outlet, the velocity is small. For the normal configuration, a low velocity at the edge regions can be assumed; in contrast, the velocity there will be higher for the behind configuration. Therefore, less or no trapping is expected at the edge regions for the behind configuration, which the experiments confirm. For the middle configuration, at pressures higher than 141 Pa the majority of NPs were always at the edge regions. For the lowest pressure of 118 Pa, again no clear trend is visible, which is also related to the low LLS signal. The trapping at the edge regions for this inlet configuration is likewise explained by the gas velocity distribution: the highest gas velocity is expected in the center of the GAS, since the gas inlet and outlet are located on the central axis of the GAS. Therefore, trapping in the center is not expected, in agreement with the LLS results.

Figure 5. a) The labeling of the quadrants is important for the following calculations. b) The ratio of intensities vs. pressure for the three different types of gas inlets. The intensities of quadrants I and III are summed up and divided by the sum of quadrants V and VIII for each bin. For each bin, the intensities of 25 sequential frames are considered, which corresponds to 1 s. The color scale goes from black (first time bin) over red to yellow and represents an increasing bin number, which is proportional to increasing deposition time (b).

Influence of Gas Inlet and Gas Flow/Pressure on the NP Size Distribution

In this part of the study, the influence of gas flow, pressure, and inlet configuration on the size distribution of the deposited NPs will be discussed. It will be shown that the mean diameter of the deposited NPs does not exhibit the same trend with gas flow/pressure for all inlet configurations. Figure 6 shows six size distributions with corresponding SEM images as insets. The left column corresponds to a pressure of 118 Pa and the right column to 204 Pa. The first row corresponds to the middle inlet configuration, the second row to the normal configuration, and the last row to the behind inlet configuration. It is important to note that for the middle inlet row, for the normal inlet at a pressure of 204 Pa, and for the behind inlet at a pressure of 204 Pa, the largest NPs are not visible in the distributions. Their size is indicated directly in the SEM pictures, because the visibility of the size distributions would be worsened if these particles were included. In general, for all gas configurations and pressures, log-normal distributions are obtained, except for the behind configuration, which shows a bimodal log-normal distribution at a pressure of 204 Pa. All fitting parameters, the mean values, and the fitting function for all size distributions are presented in Table S1 and Equation S1, Supporting Information.

Figure 6. Size distributions and SEM micrographs as insets for two different flows for each type of gas inlet configuration. The left column corresponds to a pressure of 118 Pa and the right to 204 Pa. The first row corresponds to the middle gas inlet, the second to the normal gas inlet, and the last one to the behind gas inlet. Additionally, the mean diameter of each size distribution is depicted in the histograms.

When comparing the mean values of the distributions, the normal configuration shows an increase from 9.9 to 13.6 nm. In contrast, the means for the middle configuration and the behind configuration decrease from lower to higher pressure: the mean for the middle configuration decreases from 11.9 to 8.7 nm and for the behind configuration from 9.3 to 8.7 nm. This behavior of the NP size is counter-intuitive, because one would expect the same trend of the size distribution with increasing flow/pressure for all inlet configurations. This shows once more how important the influence of the gas inlet configuration is. The explanation in the literature for the flow/pressure dependence of the NP size is that, with increasing Ar flow/pressure, the mean NP size and the broadness of the distribution first increase and, after a maximum, decrease again at higher flows. The initial increase is related to the more effective nucleation at higher pressures. At higher flows, the reduced residence time stops the growth of NPs at earlier stages and reduces the probability of coalescence of NPs, which leads to smaller NP sizes. [49][50][51] From this explanation one would expect the same trend for all gas inlets, but the results indicate different behaviors for different gas inlets. Although most of the NPs have a size of around 11.9 nm, the middle inlet configuration additionally shows, at a pressure of 118 Pa, an NP with a diameter of ≈490 nm and, at a pressure of 204 Pa, an NP of ≈205 nm in the examined region (20 µm²). The crystalline facets suggest that these NPs are monocrystalline. NPs with a diameter of ≈490 nm had never been produced before in our experiments with normal gas inlets inside a Haberland-type GAS. In the Supporting Information, SEM pictures with lower magnifications are shown (Figure S1, Supporting Information). They show that, for the middle inlet configuration at a pressure of 118 Pa, indeed only one NP with a diameter of ≈490 nm was found in the observed region. On the other hand, for a pressure of 204 Pa, more NPs with diameters larger than 100 nm were found in the analyzed area.
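For completeness, a standard way of obtaining such fits is sketched below. The snippet uses synthetic diameters and the generic log-normal fit from scipy; it is not the specific fitting function of Equation S1, Supporting Information, and serves only to illustrate how mean diameters can be extracted from measured distributions.

```python
import numpy as np
from scipy import stats

# Illustrative log-normal fit to a set of NP diameters (synthetic here; in
# practice the diameters come from the SEM image analysis). This is not the
# exact fitting function of Equation S1, just the standard scipy approach.
rng = np.random.default_rng(1)
diameters_nm = rng.lognormal(mean=np.log(10.0), sigma=0.3, size=500)

shape, loc, scale = stats.lognorm.fit(diameters_nm, floc=0)   # fix loc = 0
mean_diameter = stats.lognorm.mean(shape, loc=loc, scale=scale)
print(f"fitted sigma = {shape:.2f}, median = {scale:.1f} nm, mean = {mean_diameter:.1f} nm")
```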
The different trends in the flow/pressure dependence on the gas inlet position and also the observation of large NPs (greater than 200 nm) for the middle inlet configuration can be explained perhaps with different gas velocity distributions inside the GAS for different gas inlets. CFD simulations in other publications have shown that a broad velocity distribution is present inside the GAS and that also vortex regions can be present. The simulations have also shown that the highest velocity was always found at the inlet and outlet of the gas. [32][33][34][35][36] Since the gas inlet position was varied, a different gas velocity distribution is expected for all gas inlet configurations. This can cause different release probabilities for the trapped NPs in the GAS. In the middle configuration, for example, the trapping regions are in the edge regions. Here it is most probably more difficult for NPs to escape and get deposited onto the substrate. It is possible that the larger NPs with diameters above 200 nm are originating from these regions. For the normal and behind configuration one trapping region is in the center. These NPs may escape more often than NPs in the edge regions. Since the residence time of trapped NPs is higher, they have more time to grow. This is perhaps the reason why the size distributions are also showing different trends depending on the gas inlet and pressure. This potentially also explains the bimodal distribution for the behind inlet at a pressure of 204 Pa. This shows once more how important the gas inlet and fluid dynamics are inside a GAS. Impact of Gas Inlet and Gas Flow/Pressure on the Deposition Rate After the effect of different gas inlets and pressures on the location of trapped NP and their size distributions were discussed in the last section, we will show how the deposition rate is influenced by the different gas inlet geometries and gas flows and pressures. To determine the deposition rate, a QCM is used. The change in the resonance frequency of the QCM crystal is directly proportional to the deposited mass and, therefore, to the mass of deposited NPs. Figure 7 shows the absolute difference in frequency of the QCM for a deposition of 60 s for all inlet configurations in dependence on the pressure. It is obvious that for all inlet configurations the deposited mass for increasing from 118 to 183 Pa. Up to a pressure of 183 Pa, the behind configuration has a higher deposited mass in comparison to the other configurations. The middle inlet configuration shows the lowest deposited mass in this pressure interval. For 204 Pa all configurations are showing a drastic increase in the deposited mass. The configuration with the highest deposited mass is the middle configuration with 521 Hz, followed by the normal inlet with 212 Hz, followed by the behind inlet with 128 Hz. The general increase of deposition rate with increasing flow and pressure for all configurations can be explained by better NP growth conditions and better NP transport. [52] But this does not explain why the behind configuration up to a pressure of 183 Pa always shows higher deposition rates than the other configurations. This must be related to less NP trapping for the behind configuration in comparison to the other configurations, which is in line with the results from the LLS measurements (Section 3.2). The reason for this behavior can be explained by the gas flow. 
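As a brief aside before continuing with the gas-flow argument: if the fundamental frequency of the QCM crystal is known, such frequency shifts can be converted into an areal mass via the Sauerbrey relation. The sketch below assumes a typical 5 MHz crystal, since the crystal frequency is not stated here, and is meant only to illustrate the proportionality between frequency shift and deposited mass used above.

```python
import numpy as np

def sauerbrey_areal_mass(delta_f_hz, f0_hz=5e6):
    """Areal mass (ng/cm^2) deposited on a QCM, from its frequency shift via
    the Sauerbrey relation. The fundamental frequency f0 of the crystal is not
    stated in the text; 5 MHz is assumed here as a typical value.
    """
    rho_q = 2.648        # g/cm^3, density of quartz
    mu_q = 2.947e11      # g/(cm*s^2), shear modulus of AT-cut quartz
    C = 2 * f0_hz**2 / np.sqrt(rho_q * mu_q)   # Hz*cm^2/g
    return abs(delta_f_hz) / C * 1e9           # ng/cm^2

# Frequency shifts reported for 60 s depositions at 204 Pa (Figure 7):
for label, df in [("middle", 521), ("normal", 212), ("behind", 128)]:
    print(f"{label:>6s} inlet: {df:4d} Hz  ->  ~{sauerbrey_areal_mass(df):7.0f} ng/cm^2")
```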
The reason for this behavior can be found in the gas flow. In the middle configuration, the highest gas velocity can be assumed to lie in the central region of the GAS, on the axis from the inlet to the orifice, whereas the gas velocity at the sides will be much lower. This means that the drag force, which can release particles from the GAS, is much higher in the center than in the edge regions. NPs located in the edge regions are therefore not dragged to the orifice as efficiently as NPs in the center. Figure 4 shows clearly that in the middle configuration particles are trapped only in the edge regions and not in the center. Comparing this behavior with the behind configuration, the most striking difference is that no NP trapping occurs at the edge regions of the GAS, because there the gas velocity, and thus the drag force, is higher than for the middle inlet (Figure 4). Also, up to 183 Pa the deposited mass is higher for the behind configuration than for the middle configuration (Figure 7), indicating that trapping is less pronounced in the behind configuration. The position of the trapping regions for the normal inlet configuration is of interest, too. Up to a pressure of 163 Pa, the NPs are predominantly trapped at the sides; for higher pressures, the signal in this trapping region is reduced and a new trapping zone appears in the bottom center. Due to the different gas inlet position, the gas velocity at the edges is low compared to the behind configuration, while in the center it is most probably still lower than for the middle configuration.

At 204 Pa, the order of the gas inlets with respect to the maximum deposition rate changes: the middle inlet now shows the highest deposition rate, followed by the normal inlet and then the behind inlet. The high deposition rate of the middle inlet may be caused by stronger turbulence inside the GAS, which may cause NPs in the edge regions to leave the trapping zones when the centrifugal forces become too high, pushing them into the central region where the gas flow guides them to the orifice. This is perhaps also the explanation for the higher deposition rate of the normal inlet compared to the behind inlet configuration at a pressure of 204 Pa, because NP trapping in the edge regions was also observed for the normal inlet configuration.
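The argument above rests on the neutral drag force scaling with the local gas velocity. For a rough sense of that scaling, the sketch below evaluates the free-molecular (Epstein) drag on a small NP; the gas temperature and the two example velocities are assumptions chosen only to contrast a slow edge region with the fast central channel, not values from the experiments.

```python
# Rough order-of-magnitude sketch of the free-molecular (Epstein) drag force on
# a trapped NP, assuming specular reflection of argon atoms. The gas temperature
# (300 K) and the two example velocities are placeholder assumptions used only to
# contrast a slow edge region with the fast central gas channel; they are not
# measured values.
import math

KB = 1.380649e-23                 # Boltzmann constant, J/K
M_AR = 39.948 * 1.66054e-27       # mass of an argon atom, kg

def epstein_drag_n(radius_m: float, p_pa: float, t_k: float, u_rel: float) -> float:
    """Epstein drag force (N) on a sphere moving at u_rel relative to the gas."""
    n_gas = p_pa / (KB * t_k)                               # gas number density, m^-3
    v_mean = math.sqrt(8.0 * KB * t_k / (math.pi * M_AR))   # mean thermal speed, m/s
    return (4.0 / 3.0) * math.pi * radius_m**2 * n_gas * M_AR * v_mean * u_rel

# A 10 nm diameter NP at 204 Pa: slow edge region vs. fast central channel.
for region, u in [("edge region", 1.0), ("central channel", 50.0)]:
    force = epstein_drag_n(5e-9, 204.0, 300.0, u)
    print(f"{region:>15}: u = {u:5.1f} m/s -> F_drag ~ {force:.1e} N")
```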
Interestingly, the finding that the behind configuration always showed a higher deposition rate than the normal inlet up to a pressure of 183 Pa is contrary to the findings of Sanzone et al. [32] They observed that the deposition rate of Au NPs for a normal configuration was 20 times higher than for a behind inlet, even though the sputtering power was roughly 4 times higher for the behind configuration (31 W vs. 8 W). The different outcomes can potentially be traced back to differences in the source geometry as well as in the applied DC power (300 W in this study compared to 31 or 8 W in the work by Sanzone et al.). On the one hand, a change in power strongly affects the plasma parameters, which can ultimately lead to a variation of the trapping forces acting on the NPs. On the other hand, Sanzone et al. observed a strong increase in the deposition rate for the normal configuration when going from 8 to 31 W, which indicates that the nucleation process was most probably not very efficient at the lower power setting. The reason could be that at low power fewer sputtered atoms are present in the aggregation volume, which reduces the probability of three-body collisions. These aspects are expected to severely impact the trapping forces and the nucleation process, which may explain the different observed results.

Taken together, the middle inlet seems to produce more efficient trapping than the behind and normal inlets, which is caused by the different gas velocity distributions inside the GAS. To reduce the amount of trapping, it would be beneficial to design an inlet configuration in which the gas enters the GAS both at the middle inlet and from behind the magnetron. This would reduce the trapping regions and could increase the overall deposited mass and the material conversion efficiency. Additionally, the general source dimensions and geometry can potentially be improved with the help of CFD simulations, which aid the prediction of low-velocity regions inside the source during the design process. One option for future improvements could be to decrease the diameter of the source to nearly the diameter of the ground cap of the magnetron. In this case, the development of the trapping regions observed at the edges would be impeded by the constrained GAS dimensions. This may prevent trapping in these regions, but may simultaneously cause more sputtered atoms to be deposited onto the chamber walls, which in turn would reduce the efficiency of the GAS. Here, CFD simulations combined with experimental tests should allow the optimum geometry to be found.

Conclusion and Outlook

NP formation and transport inside a GAS are highly dynamic processes, and a deeper understanding of gas-phase synthesis in a magnetron-based GAS requires elaborate in situ diagnostic methods. This study demonstrates how LLS can be applied to obtain in situ, time-resolved information on the location of NP trapping. In the future, the LLS technique could be improved by using light sources with different and smaller wavelengths to estimate the sizes of the trapped NPs. Additionally, more powerful light sources and a better camera, with a high frame rate and a low exposure time, could be used to learn more about the forces acting on the NPs, for example if their release could be monitored when the plasma is switched off. The LLS results have shown that NPs are trapped in different regions inside a GAS and that the trapping position and LLS intensity of the NPs depend strongly on the gas flow and pressure. Additionally, three different gas inlet configurations and their impact on NP trapping were studied. It turned out that the location of the gas inlet is the most important parameter affecting the confinement and the size distribution of the NPs. The middle inlet showed the strongest trapping of NPs, with a vortex-like behavior. However, trapping occurred only at the edge regions, which was to be expected because of the high gas velocity in the center of the source. This indicates that this configuration is perhaps not the most efficient gas inlet configuration for a GAS. Nevertheless, it also shows how strongly the gas inlet position can change the trapping behavior and the size distribution of the resulting NPs. Although only three gas inlet locations were investigated in this work, many options are available to improve the transport of NPs inside the GAS.
Different kinds of inlet configurations can be used to prevent NP trapping, or even to make use of the trapping to tailor the properties of the resulting NPs. Combining the LLS technique with in situ UV-Vis or in situ SAXS measurements and CFD simulations may enable the development of a novel and highly efficient GAS.

Supporting Information

Supporting Information is available from the Wiley Online Library or from the author.
231822650
s2orc/train
v2
2021-02-06T14:19:38.347Z
2021-02-05T00:00:00.000Z
FACEts of mechanical regulation in the morphogenesis of craniofacial structures During embryonic development, organs undergo distinct and programmed morphological changes as they develop into their functional forms. While genetics and biochemical signals are well recognized regulators of morphogenesis, mechanical forces and the physical properties of tissues are now emerging as integral parts of this process as well. These physical factors drive coordinated cell movements and reorganizations, shape and size changes, proliferation and differentiation, as well as gene expression changes, and ultimately sculpt any developing structure by guiding correct cellular architectures and compositions. In this review we focus on several craniofacial structures, including the tooth, the mandible, the palate, and the cranium. We discuss the spatiotemporal regulation of different mechanical cues at both the cellular and tissue scales during craniofacial development and examine how tissue mechanics control various aspects of cell biology and signaling to shape a developing craniofacial organ. INTRODUCTION The vertebrate head is an intricate and complex part of the animal body, composed of organs with diverse functions and types. These craniofacial structures, including the cranium, sensory organs, mandible, temporomandibular joint (TMJ), palate, muscles, and teeth, are all constructed in their own unique forms and shapes to facilitate their functions. The complexity of craniofacial skeletal shapes was well appreciated by early naturalists, such as Johann Wolfgang von Goethe (1749-1832), who coined the word "morphologie". 1 Goethe's study on morphological features set the foundation for the work by D'arcy Thompson, which formally recognized the role of physical laws in shaping biological structures, such as the vertebrate skull, during development and across evolution. 2 Now a century after Thompson's morphometric study, craniofacial structures with their diverse shapes and architectures once again serve as important models to investigate the developmental processes and cell biological events that propel organ morphological changes. With the advent of novel imaging and biomechanical techniques, we have gained a deeper understanding of how mechanical forces and other physical quantities regulate craniofacial morphogenesis. These studies unveil the interplay between biochemical and mechanical signals during organ formation and provide targetable pathways and guiding principles for developing new regenerative strategies. Here we will first review different types of physical quantities that contribute to tissue development and shape changes, and then focus on the mechanical regulation of selected examples of craniofacial structures. PART I. SOURCES AND TRANSDUCTION OF MECHANICAL SIGNALS DURING ORGAN MORPHOGENESIS Organ morphogenesis is a physical process that integrates mechanical and biochemical information into the regulation of coordinated cell property and behavior changes. 3,4 There are four main categories of mechanical inputs during development: (1) tissue volumetric changes; (2) generation of cellular forces by cytoskeletons; (3) large scale forces by muscle contraction; and (4) tissue material properties (Fig. 1). Below we discuss the molecular and cellular setup of each process, as well as their functional contribution to organ shape changes. Craniofacial examples are included when applicable. 
It should also be noted that while these are separate physical quantities, they are often coregulated and interconnected during organ development. Finally, we will discuss the signaling process through the Hippo pathway and Piezo ion channels that convert mechanical inputs into biochemical signals within cells. Differential tissue growth and volumetric changes In a developing organ, progenitor cells can divide, apoptose, and change in size; all of which contribute to the overall growth of the tissue. Spatiotemporal regulation of these processes can therefore lead to differential growth and shape changes. For instance, spatially localized proliferation has been observed at the ventral edge of the developing opercle, a dermal bone of the zebrafish craniofacial skeleton, and this inhomogeneous distribution of cycling cells is responsible for sculpting the correct shape of the bone. 5 Similarly, during avian beak formation, localized proliferation zones exist in the frontonasal process of different avian species, and spatiotemporal control of the location and size of these proliferation zones directly determines the beak shape in birds. 6 Differential growth also affects tissue mechanics. As cell numbers and tissue volume increase within a space constrained by surrounding tightly connected cells and/or the extracellular matrix (ECM), the expanding population would experience an increased pressure (compression) and stretch the surrounding cells, which would experience increased strain 7 (Fig. 1). These force changes have been shown to function as a mechanical feedback to further alter cell behaviors and induce cell differentiation, proliferation, or cell rearrangement. [8][9][10][11][12][13] Proliferation can also occur in a directional (anisotropic) manner. As cells often divide along the long axis of tissue elongation, such as during the outgrowth of the vertebrate limb bud and the Drosophila wing disc, oriented cell division has been thought to contribute to tissue lengthening. [14][15][16] However, division orientation may be a cellular response to dissipate preexisting anisotropic stresses (forces) within the tissue, 17 as opposed to driving shape changes directly. Indeed, randomizing cell divisions does not significantly affect morphogenesis, as demonstrated in the developing Drosophila wing disc and during zebrafish gastrulation. 18,19 Consistent with these findings, proliferation alone cannot account for the morphological changes observed in the developing vertebrate limb bud and mandibular arch, 20,21 highlighting the importance of other mechanical inputs, such as actomyosin tension and tissue material properties in regulating organ morphogenesis. Force generation by cytoskeletons Actin microfilaments and microtubules are dynamic cytoskeletal structures and components of force generating machineries that convert energy from ATP or GTP hydrolysis into pushing or contractile forces. These forces propel various cellular processes, including cell migration, cell shape changes, and transportation of organelles. Pushing forces are produced when these filaments polymerize against a barrier, such as the cell or nuclear membrane. 22,23 Contractile forces are primarily produced by the interaction between actin and the motor protein, non-muscle myosin II (MyoII), where activated MyoII assembles into bipolar filaments that crosslink and slide filamentous actin in opposite directions. 
24 Actomyosin tension is critical for many morphogenetic events, and spatiotemporal control of the contractile machinery and MyoII activity underlies an important mechanism for generating the anisotropic stress required to deform cells and morph developing tissues. 25,26 One such example is apical constriction during epithelial invagination. Prior to invagination, signaling cues organize actomyosin cables at the apical side of cells in an epithelial monolayer and apically activate MyoII-dependent contraction via the small GTPase RhoA and Rho-associated coiled-coil kinase (ROCK), effectively shrinking the apical cell surface and driving epithelial buckling. 27,28 This type of actomyosin-driven shape change has been implicated in the folding of the lens and inner ear placodes, as well as the tongue circumvallate papillae. [29][30][31] Polarized Rho and MyoII activity has also been observed in tissues patterned by planar cell polarity, such as during body axis elongation. In this context, Rho kinase and active MyoII concentrate at the cell junctions perpendicular to the elongating axis. [32][33][34] The resultant increase in anisotropic actomyosin tension shortens that cell-cell boundary and allows neighboring cells to intercalate along the direction of boundary contraction, thus generating convergent extension movement and tissue elongation. [32][33][34][35][36][37][38]

Tissue shape changes, such as those driven by apical constriction and convergent extension, require transmission and coordination of forces produced by individual cells at the supracellular level. In the epithelium, cells are joined to each other via cell adhesion proteins, including the membrane-spanning E-cadherin and P-cadherin in the adherens junctions (AJs). The cytoplasmic tails of cadherins bind to β-catenin, which connects actin filaments to cadherins via α-catenin. 39 The maturation of AJs and their stable attachment to actin cables are mechanosensitive. Tensional forces transmitted through cadherins and actins unfold α-catenin from an autoinhibited state to an open conformation that allows vinculin binding; vinculin in turn becomes activated to stabilize the α-catenin conformation and to promote further actin assembly at AJs. [40][41][42][43][44] Concurrently, cell contractility can alter cadherin function and junctional integrity. [45][46][47] The mechanosensory function of AJs thus allows cells to dynamically react to and coordinate mechanical forces at the supracellular level and to adjust adhesion strength for tissue remodeling.

In addition to actin-based cell mechanics, forces originating from non-centrosomal microtubules can also contribute to morphogenetic changes. Microtubules are characterized by their high bending rigidity and are capable of bearing compressive stress to maintain cell shapes. 48 In Drosophila, cell polarity signals have been shown to reorganize the apical-basal distribution or planar orientation of non-centrosomal microtubules in epithelial cells. This allows tissue-level coordination of anisotropic pushing forces generated by microtubule polymerization or dynein-dependent microtubule sliding to modulate cell shapes and overall tissue morphologies. 50,51 Beyond the direct mechanical control of cells, microtubules can also indirectly influence tissue mechanics by transporting cell adhesion components to targeted regions and promoting local MyoII activation to drive clustering of E-cadherin. 52,53 Interestingly, mutations in genes encoding factors that are involved in microtubule assembly and dynamics can affect the development of several craniofacial structures in vertebrates, and it will be imperative to determine if microtubule-dependent mechanical regulation plays a role in these processes. [54][55][56][57]

Fig. 1 Force generation and signal transduction. Organ morphogenesis is modulated by several different physical quantities: volumetric changes, actomyosin contractility, tissue material property, and muscle contraction. a Anisotropic distribution of proliferating cells within a tissue contributes to its directional growth. If the tissue surrounding the proliferating zone does not expand at the same rate, the proliferating zone will experience compression (red arrows), while the surrounding cells will experience tension (blue arrows). b Cells generate active forces via actomyosin contractility. Actin cytoskeletons are connected to adherens junctions (AJs) and focal adhesions (FAs), which are mechanosensitive and can mediate increased cell-cell and cell-extracellular matrix (ECM) adhesions, respectively, upon increased actomyosin tension and/or substrate stiffness. Both cell adhesion and ECM composition help determine the tissue material properties. The Hippo/YAP/TAZ pathway can also respond to mechanical signals. When there is low mechanical input, the transcription cofactors YAP and TAZ are phosphorylated and restricted in the cytoplasm. When there is high mechanical input, YAP/TAZ are localized to the nucleus and bind to TEAD transcription factors to drive the expression of target genes. Finally, mechanical deformation of cell membranes opens the mechanosensitive Piezo1 and Piezo2 ion channels, leading to calcium (Ca2+) influx and activation of downstream signaling. c Muscle contraction generates large tissue forces that can impact morphogenesis of nearby musculoskeletal elements. Blue arrows represent force directions. α, α-catenin; β, β-catenin; FAK, focal adhesion kinase; p, phosphorylation

Forces from muscle contraction

Muscles generate forces through the sliding of actin and muscle myosin filaments, and muscle contractile forces have been shown to provide key mechanical signals to regulate the morphogenesis of skeletons, tendons, ligaments, and joints. 58 Studies using chick embryos with chemically paralyzed muscles and mouse embryos carrying mutations that inhibit muscle formation or contraction have demonstrated that a functional musculature is required for attaining proper bone growth and circumferential shapes of long bones, [59][60][61] for promoting the enlargement of bone ridges that attach to tendons, [62][63][64] for regulating the size and development of tendons, 65,66 and for maintaining the fate of joint progenitor cells during joint morphogenesis. 67 Similar results have also been found in the craniofacial system of several experimental models. For example, mechanical inputs from muscles contribute to the acquisition of species-specific mandible shapes in avian species, 68 while muscle forces are necessary for the morphogenesis of both the pharyngeal cartilage and cranial tendons in the zebrafish. 69,70 Consistent with these findings, in both mice and human patients with muscular dystrophy, reduced skull growth and altered craniofacial skeletal shapes are evident, likely as a result of weakened mastication muscles.
[71][72][73] Muscle contraction is therefore an integral part of morphogenesis and it functions in part by regulating cell rearrangements or ECM remodeling. For instance, muscle forces facilitate skeletal elongation by enabling intercalation of chondrocytes and thus generating cell stacking during bone growth. 69 The mechanical property of the developing cartilage may also be influenced by muscle contraction, as tensile forces can alter the ECM composition by controlling the expression level of collagens and proteoglycans from chondrocytes. 74,75 Finally, during tendon development muscle forces can directly affect the ECM organization and stimulate the release of active Tgfβ from the ECM to regulate tendon elongation and branching. 70 Future studies will focus on how cells sense and convert mechanical signals from muscles to generate specific cell behavior for tissue morphogenesis, which remains an important question in the field. Material properties of developing tissues While tissue growth and cytoskeletons produce forces that enable cell movement during morphogenesis, the extent of ensuing cellular rearrangements and tissue deformations (i.e., the rheological response to forces) depends on the material properties of the developing tissue. These physical properties, such as stiffness and viscoelasticity, are determined by the biochemical and biomechanical states of the constituent cells and their surrounding ECM. Spatiotemporal regulation of tissue material properties can therefore guide morphogenetic events and various cellular processes. For example, a tissue tends to be soft and more fluidlike during early stages of morphogenesis, but cells subsequently increase cortical actin crosslinking and tension that stiffens and maintains the maturing tissue architecture. [76][77][78] The viscoelasticity of tissues is further controlled by cadherin-dependent adhesion, as strong cell-cell adhesion can increase the viscosity and the yield stress of the tissue (more solid-like); while reduced adhesion allows tissues to be more fluid-like. [79][80][81] In elongating tissues such as during vertebrate body axis extension, establishing a spatial gradient of cadherin-mediated viscoelasticity thus guides progenitor cells through a fluid-to-solid transition, in which cells are initially permissible to rearrange and extend tissues posteriorly in a fluid-like state but become progressively "jammed" anteriorly to preserve the tissue architecture. 81,82 Such mechanism may similarly function to drive tissue lengthening during craniofacial development, as differential viscosity leading to differences in cell intercalation has been observed during mandibular arch elongation. 21 ECM is composed of proteoglycans and fibrous proteins (e.g., collagens, fibronectin, laminins), and its composition and structure convey crucial biochemical and mechanical information to guide cell proliferation, differentiation, and movement during development. 83 Cell-ECM adhesion and signal transduction are primarily through binding of ECM proteins to the transmembrane heterodimeric integrin receptors that are part of the focal adhesions (FAs). In nascent FAs, talin connects cytoplasmic tails of β-integrin subunits to the actin cytoskeleton. Similar to the α-catenin in AJs, the regulation of talin conformation and function is mechanosensitive. In response to an optimal substrate stiffness, cells can exert more forces at FAs via increased actomyosin contractility. 
Tensile forces at FAs then stretch talin and expose cryptic sites for vinculin binding, which reinforces the talin-actin linkage mechanically and promotes FA maturation. 84,85 Integrin activation also recruits focal adhesion kinase (FAK) and SRC kinase, which activate downstream biochemical signaling to regulate cytoskeletal organization and gene expression. 83 FAs therefore allow cells to sense and react to the mechanical properties of the ECM, and in turn cells can control the matrix stiffness via actomyosin contractility or by modulating the ECM contents and crosslinks. 86 One example of ECM-guided tissue shape change is the branching morphogenesis of the submandibular gland. Several studies showed that collagens and fibronectin accumulate at the branching point, 87,88 where integrin activation signals through FAK and RhoA to induce actomyosin contractility. 89,90 This has two consequences: first, it enhances cell motility to facilitate branching; and second, it reciprocally triggers further local fibronectin assembly to support branching structures and promote cell proliferation. 56,89,90 The ECM can also transduce tissue mechanical forces, for example during the initiation of cephalic neural crest cell migration. 91 In this context, convergent extension of the head mesoderm results in increased cell density and tissue stiffness. This information is then relayed through the ECM to activate integrin/vinculin signaling in the overlying neural crest cells and induce their migration. The ECM thus plays important roles during craniofacial development. Given that ECM and integrin signaling has been implicated in the morphogenesis of several craniofacial structures that develop from invaginating ectodermal epithelia, including the tooth, optic, otic and olfactory placodes, [92][93][94][95][96][97][98] it is plausible that changes in ECM mechanical properties modulate cell behaviors to facilitate epithelial invagination in these developing organs.

Sensing and integrating mechanical information with biochemical signaling via the Hippo pathway and Piezo proteins

Above, we discussed the role of integrin signaling in sensing substrate stiffness and then converting that information into biochemical signals to regulate cell behaviors. Another important mechanotransduction pathway downstream of FAs and AJs is the Hippo signaling cascade, which controls gene transcription by regulating the activation and nuclear translocation of the transcription cofactors Yes Associated Protein (YAP) and its paralog WW Domain Containing Transcription Regulator 1 (TAZ) 99 (Fig. 1). The localization of YAP/TAZ in the cytoplasm or nucleus (and thus their transcriptional function) depends on their phosphorylation state. When the Hippo pathway is active, several phosphorylation events lead to the activation of LATS1 and LATS2 kinases, which then phosphorylate YAP/TAZ at several amino acid residues. This restricts YAP/TAZ in the cytoplasm through binding with proteins associated with adhesion complexes, such as 14-3-3 and angiomotin, and promotes YAP/TAZ degradation as well. 100,101 Conversely, inactivation of Hippo signaling allows unphosphorylated YAP/TAZ to accumulate inside the nucleus and function with other transcription factors to drive the expression of genes that regulate cell proliferation and differentiation.
It should be noted that other kinases, including FAK and SRC, can also directly phosphorylate YAP/TAZ, 102,103 and different signaling mechanisms can be employed to control YAP/TAZ functions. Biomechanically, talin-mediated tension sensing at FAs enables cells to respond to substrate stiffness and trigger actomyosindependent YAP activation. 104,105 Upon integrin engagement, FAK and SRC can additionally signal through PI3K to inhibit LATS1/2 and induce YAP nuclear localization. 106 Forces transmitted through FAs are also capable of directly deforming the nucleus to allow YAP entry through stretched nuclear pores. 107 In addition to FAs, AJs are important sites for integrating mechanical signals to the Hippo signaling as well. For example, under low cell density, tension-dependent recruitment of LIM domain proteins Jub (in Drosophila) or LIMD1 and TRIP6 (in mammals) to AJs triggers complex formation of LATS1/2 at AJs, thereby inhibiting LATS1/2 function and promoting YAP activity. 108,109 By analyzing mouse mutants with organ-specific Yap deletion, it was shown that YAP is critically required for the development of several craniofacial structures, including the cranial neural crest, teeth, and palates. [110][111][112] However, whether YAP mediates mechanical signals to control aspects of their formation remains to be studied. Beyond cell junctions, mechanical forces can also be detected by mechanosensitive ion channels, such as Piezo family proteins (Piezo1 and Piezo2), and mutations in Piezo2 are known to cause several craniofacial syndromes. 113 Piezo proteins function by responding to pressure and mechanically deformed cell membrane to open its pore for the inflow of positively charged ions, such as Ca 2+ , which in turn activates downstream Ca 2+ -dependent signaling. 114 Piezo channels thus enable cells in a tissue to sense crowding forces and control cell density through cell extrusion, 115 and to modulate stem cell proliferation and differentiation in response to tissue mechanical changes. 116,117 Mechanosensation through Piezo proteins also intersects with Hippo signaling 21 and how these pathways are coordinated to elicit specific cell responses during development is an intense area of research. PART II. SHAPING CRANIOFACIAL STRUCTURES BY FORCES AND MATERIAL PROPERTIES How mechanical forces and tissue properties control organ morphogenesis and cell differentiation is an important developmental question and several craniofacial structures have served as model systems to investigate this subject. These studies have led to paradigms that describe mechanical regulation of various morphogenetic events, as well as integration of biochemical signals that mediate these processes. In this section, we will center our discussion on the mechanical regulation of developing mandibular arches, teeth, palates, jaws, TMJs, and crania, roughly following the order of their developmental initiation. We primarily focus on the mouse as a model organism, where the majority of studies have been conducted. Mandibular arch Pharyngeal arches are transient metameric structures composed of a mesenchymal core and an outer single layer of epithelium that are formed on either side of the developing head at around mouse embryonic day 8-8.5 (E8-8.5). 118,119 Together with the frontonasal prominence in the medial aspect of the head, these structures undergo extensive outgrowth and morphological alterations to eventually give rise to the face and the neck of an animal. 
Among them, the first pharyngeal arch (also called the mandibular arch) is subdivided into the dorsally positioned maxillary prominence and the ventrally located mandibular prominence at E9.5. While the maxillary process later forms the upper jaw and the palate, the mandibular process is the precursor of the lower jaw. The epithelium associated with these prominences also gives rise to other craniofacial structures, including the tooth and the salivary gland. Incorrect development of the mandibular arch can therefore cause many craniofacial anomalies with facial and mandibular defects. 120 The initial morphogenesis of the mouse mandibular arch involves tissue elongation and bending towards the midline between E8.5 and E9.5. During elongation, the mandibular arch also acquires a morphology characterized by a narrow central waist and a distal bulbous region. While cell proliferation and survival are clearly required for the growth of the arch, 121-123 how tissue volume changes may drive its lengthening has not been thoroughly studied. However, as the length of cell cycle is similar throughout the middle and distal mandibular process, that quantity alone does not appear to be responsible for the initial morphogenesis of the mandibular prominence. 21 At the level of signaling regulation, the non-canonical Wnt ligand Wnt5a has been found to be critically required for the outgrowth of epithelium-encapsulated mesenchymal tissues, such as the mandibular arch and the limb bud. 124 Mutations in Wnt5a thus can cause craniofacial abnormalities (and shortened limbs) in both mouse mutants and human Robinow syndrome patients. 125 Functionally, Wnt5a regulates cell polarity and controls directional cell movement and oriented cell division to propel tissue lengthening. 15,126,127 In the central segment of the developing mandibular process, Wnt5a acts upstream of YAP/TAZ and the mechano-sensitive Ca 2+ channel, Piezo1, to induce actomyosin polarity and oscillation of cortical tension as measured using a genetically encoded vinculin tension sensor 21 (Fig. 2). This reduces local tissue viscosity and facilitates cell intercalation to drive the convergent extension of the arch. The middle arch is therefore more "liquid-like". In the distal portion of the arch, reduced cell rearrangement stiffens the tissue, which has been demonstrated by measuring the displacement of magnetic beads in the middle and distal regions of the arch using magnetic tweezers. 128 In addition, distal arch expresses higher amount of fibronectin that also exhibits a mediolateral angular bias. Such spatial variation in ECM abundance and orientation can potentially further contribute to the regulation of arch material property and directional cell movement. 128 Below we will further examine how different physical properties modulate the development of structures derived from the first branchial arch, including the tooth, the palate, and the mandible. Tooth Tooth morphologies are amazingly diverse across different vertebrate species, but their development all begins with the formation of the dental lamina that is discernable as a thickening of the oral epithelium at future tooth sites. 129 In mice, tooth development begins at around E11, when the dental lamina stratifies and invaginates to form the dental placode. 130 The stratified dental epithelium then grows further into the underlying cranial neural crest-derived mesenchyme and progresses through increasingly complex morphological changes over time until the tooth erupts. 
The distinct shapes of the dental epithelium are also used to name each tooth developmental stage: the bud (E12.5-13.5), the cap (E13.5-E15), and the bell (E15-postnatal day 7) (Fig. 3). In adults, the tooth crown surface is composed of the enamel, which is generated by dental epithelium-derived ameloblasts during development. Below the enamel is dentin, which is laid down by the neural crest-derived odontoblasts and encloses the dental pulp and the neurovascular bundles within. 131 Because the tooth is a relatively simple structure during its early development and amenable to ex vivo live imaging, the mouse tooth has become a powerful system to study the cell behavior and associated biomechanical inputs that drive epithelial bending. 132 This adds to decades of research that have uncovered the reciprocal signaling interactions between the dental epithelium and the mesenchyme, 131 providing a comprehensive understanding of how mechanical and biochemical signals work in concert to regulate cell movements, divisions, and fate decisions during tooth development.

Fig. 3 Mechanical regulation of the developing molar. The tooth epithelium undergoes progressive shape changes during its development. a At the lamina stage, cells in the epithelial monolayer extend centripetally oriented protrusions to migrate vertically and push neighboring cells towards the mesenchyme (vertical telescoping). Concurrently, vertical cell divisions contribute to epithelial delamination and generation of suprabasal cells. b, c During the placode and bud stages, suprabasal cells organize their actomyosin cables in the planar orientation and cells (dark green) intercalate towards the center of the bud to generate planar contractile stresses. This mechanically seals the top of the tooth bud and facilitates epithelial invagination by bringing the connecting basal layer cells (light green) towards the center. Concomitantly, mesenchymal cells condense around the dental epithelium and increased compressive stress due to cellular crowding triggers mesenchymal differentiation. d The cap shape is postulated to arise as a result of differential tissue growth between the enamel knot (EK) and non-EK epithelium. Basal constriction has also been observed in basal cells neighboring the EK, potentially resulting in the upward buckling of those cells. e Mechanical constraints from the alveolar bones play a role in establishing the alignment offsets between the lingual and buccal cusps. Solid blue arrows represent force directions and gradient arrows represent cell or tissue movements

Similar to other ectodermally-derived organs, the dental epithelium begins as an epithelial monolayer, which first bends towards the mesenchyme and then stratifies. 133 While apical constriction is responsible for the bending of several epithelial organs, 134 the dental placode clearly utilizes a different mechanism, as it lacks apical localization of actomyosin and its cells are columnar shaped without apical narrowing. 135 Instead, dental epithelial cells in mice undergo a process called "vertical telescoping", where cells send out centripetally-oriented apical protrusions that push on their more centrally located neighbors, and collectively they deform the epithelium downwards. 135 The formation of these protrusions depends on actin polymerization and branching and requires both hedgehog (Hh) and fibroblast growth factor (FGF) signaling, as chemically inhibiting any of these processes reduces protrusion numbers and abolishes epithelial invagination. The same mechanism also enables the invagination of the salivary gland epithelial monolayer. 135 In the developing molar, epithelial invagination is accompanied by vertical cell divisions to produce suprabasal cells, and FGF signaling functions as a necessary cue to induce cell proliferation and epithelial stratification. 136 The increase in tissue volume in the suprabasal space can therefore in theory generate pressure to further bend the epithelium. However, to direct this pressure into driving invagination only, a physical barrier needs to be established apically to restrict tissue buckling upwards. This was in part achieved by planarly-oriented tissue tension in the suprabasal cells, which display prominent actomyosin bundles at the supracellular level in the same direction as the tension. 133 The evidence of tissue tension was demonstrated through a series of mechanical cutting experiments, in which an initially tensed tissue would recoil from the point of cutting. For example, following an incision made in the middle of the mouse molar suprabasal layer, the bisected tissues recoiled in opposite directions and the degree of epithelial bending was reduced, indicating the presence of contractile forces that facilitate invagination. Complementing this finding, an incision made outside the tooth epithelium incurred additional epithelial bending, as the suprabasal contraction was no longer resisted. On the contrary, if an incision was made first in the suprabasal layer to relieve local tension, followed by a lateral cut outside the tooth germ, no recoil was observed. Similarly, if a lateral cut was made on tissues cultured in the presence of blebbistatin, which inhibits MyoII function, no recoil was detected either. Together these results concretely show that actomyosin-dependent epithelial contraction is integral to the tooth invagination process.

As the molar placode enlarges in volume and gradually transforms into a bud shape, portions of the suprabasal layer continue to narrow and form a neck region that connects the bud to the surface epithelium. Live imaging of the mouse molar bud showed that this results from a convergent extension type of cellular movement. 133 In this context, suprabasal cells migrate towards the center of the placode and intercalate with one another. At the same time, basal cells at the edge of the placode are both anchored to their neighbors and attached to adjacent suprabasal cells via E-cadherin, drawing themselves towards the placode center. Collectively, these movements generate even more planar tissue contraction that not only seals the top of the dental placode but also pulls the basal cells in the neck region towards each other in a pinching fashion, effectively driving epithelial buckling toward the mesenchyme.

At E12.5 the developing mouse tooth bud concomitantly induces the underlying mesenchyme to condense. Mesenchymal cells migrate towards the invaginating epithelium in response to the long-range chemo-attractant FGF8 secreted from the tooth bud, which also produces SEMA3F, a short-range repulsive signal, to augment mesenchymal compaction by the epithelium.
137 The compressive stress from cellular crowding is thought to function as a mechanical signal to initiate mesenchymal cell differentiation, as compressing dissected mandibular mesenchyme in culture promotes expression of odontogenic markers Pax9 and Msx1. Cell crowding during condensation also modulates the ECM composition by inducing the expression of collagen VI. 93 The presence of a structurally organized ECM is clearly important for maintaining mesenchymal differentiation, as chemical inhibition of lysyl oxidase, which catalyzes collagen crosslinking and therefore regulates ECM stiffness, results in diminished mesenchymal condensation, as well as Pax9 expression. 93 Consequently, both condensation-induced compression and changes in the ECM property contribute to the regulation of odontogenesis. It remains unclear whether mesenchymal condensation and the associated material properties also provide mechanical cues to control the development of the overlying epithelium. The epithelial bud-to-cap transition between E13 and E13.5 in mice coincides with the formation of the tooth signaling center, known as the primary enamel knot (EK). 138,139 The EK is composed of a group of postmitotic cells that are specialized in signal secretion, expressing various ligands, including sonic hedgehog (SHH), bone morphogenetic proteins (BMPs), and FGFs, which maintain the proliferation and continuous extension of the epithelium surrounding the EK (called the cervical loop). 140 The cap shape was posited to arise as a result of differential proliferation between the non-dividing EK and the proliferative neighboring cervical loops. 141 Support for this idea came from tracking the development of cultured molar slices, which showed higher growth rates in the epithelium adjacent to the EK than in the EK itself. 142 Computational modeling that combines these experimental data with consideration of the physical constraint provided by the less proliferative mesenchyme, and differential adhesion between the mesenchyme and the epithelium, predicts that the tooth bud is guided by these factors to buckle at the presumptive cervical loop areas and to grow downward from those sites. [142][143][144] Surprisingly, chemical inhibition of cell proliferation in slice culture does not prevent bud-to-cap morphogenesis, 145 suggesting that while differential proliferation may contribute to the cap shape, it is in fact not required for this process. Proliferation-independent mechanisms must then exist to initiate the bud-to-cap shape changes. One possible mechanism is through basal constriction of cells on either side of the EK. 145 Prior to the cap stage, myosin heavy chain IIB and actin bundles were observed to accumulate on the basal surface of cells that are adjacent to the forming EK. Quantifying the shape of these cells at different developmental timepoints between E13.5 and E15.5 in mice further revealed that they have decreased basal width over time. 145 As a result, it is conceivable that actomyosin tension contracts the basal surfaces surrounding the EK and drives evagination of the inner dental epithelium away from the mesenchyme, thus creating the cap shape. However, this would require further experimental confirmation. The same study also showed that inhibition of FAK signaling abolishes bud-to-cap transition 145 and therefore perhaps regional activation of integrin/ FAK signaling through interactions between the epithelium and the mesenchyme is crucial for shaping the epithelium at this stage. 
At the same time, given that EK formation depends on αcatenin-mediated inhibition of YAP activity, 146 it will be interesting to explore how tissue forces generated by cell shape changes, such as the basal contraction described above, regulate YAP localization and activity to control EK differentiation. Between the cap and bell stages, the cervical loops invaginate further into the mesenchyme and this may be mechanically powered by both increased actin-dependent cell motility and oriented cell divisions along the axis of the extending epithelium. 142 During the bell stage of mouse molar development between E15.5 and E16.5, the primary EK undergoes apoptosis and secondary EKs are formed. 142,[147][148][149] Secondary EKs play an important role in determining the cusp locations and crown morphology in multicuspid teeth. 150 In monocuspid teeth, such as the incisors, only the primary EK is formed. Mechanical constraints from the alveolar bones that surround the developing molars appear to play a role in establishing the amount of offsets (or alignment) between the lingual and buccal cusps. 151 This was realized because mouse and vole molars cultured as ex vivo explants without the surrounding bones lose their offset patterns but can be rescued by lateral compression imposed by artificial mechanical constraints. Soft tissue tomography also showed that the morphology and growth of molars are strongly associated with those of alveolar bones, highlighting co-development of these tissues and possible mechanical interdependence due to their close proximity. In diphyodont animals (animals that initially have a set of deciduous teeth that are later replaced by the permanent set of teeth), such as the miniature pig, compressive stress due to alveolar constraint has also been implicated in timing the activation of permanent tooth development from an arrested state. 152 The compressive stress is generated as a result of deciduous teeth growing faster than the expansion of alveolar sockets, and acts as a mechanical signal to induce an integrin β1-ERK1-RUNX2 signaling axis in the adjacent mesenchyme, which in turn suspends the permanent tooth epithelium in an arrested state. Once the compression is released after tooth eruption, integrin β1-ERK1-RUNX2 signaling is reduced and the permanent tooth proceeds to develop. Together these studies accentuate the importance of tissue forces during tooth morphogenesis and point to the necessity to consider these mechanical factors when bioengineering human teeth based on developmental principles. For example, designing a hydrogel that matches the elastic modulus of dental tissues supports the formation of biomimetic tooth buds from primary porcine dental cells. 153 As we learn more about how mechanical signals guide tooth development, increasingly sophisticated mechanical manipulations can be implemented in novel bioengineering platforms through the application of photochemistry and optogenetics that facilitate spatiotemporal control of the hydrogel properties 154 and cellular forces. 155 By recreating the mechanical microenvironment and the biochemical-mechanical signaling interactions observed in developing teeth, we will be able to more precisely direct dental progenitor cell proliferation and differentiation in culture and to bioengineer teeth with the correct shape and architecture. 
Finally, as the dental mesenchyme clearly responds to mechanical cues, 137,156,157 an in-depth understanding of the mechanical modifiers that influence their fate decision is essential to fully realize their potential for stem cell-based therapies and tissue regeneration. Palate The palate forms the roof of the mammalian mouth and physically separates the oral cavity from the nasal cavity. Anatomically, the palate is consisted of the primary and secondary palates; the primary palate encompasses the triangular region between the incisive foramen and the alveolar ridge surrounding upper incisors, and the secondary palate comprises the rest of the hard and soft palate posteriorly. The primary and secondary palates have distinct embryological origins. Whereas the primary palate is derived from the frontonasal prominence at the rostral anterior side of the mouth, the secondary palates develop as outgrowths from the oral surface of the paired maxillary processes on either side of the mouth. These outgrowths are largely composed of cranial neural crest-derived mesenchyme and surrounded by a layer of oral epithelium. 158 In mice, the secondary palatal outgrowths become visible at around E11.5, marking the beginning of palatogenesis. Between E11.5 and E13.5, the secondary palatal shelves grow in size and first extend vertically towards the mandible on either side of the tongue, while displaying stereotyped morphologies that are distinct along the anterior-posterior axis. From E13.5 to 14.5, the developing secondary palates undergo palatal shelf elevation and reorient themselves from the vertical orientation to the horizontal position that is above the tongue. The two palatal shelves subsequently grow towards each other and make contact at the midline. The juxtaposed epithelial linings then merge to form the midline epithelial seam (MES), which marks the beginning of palatal shelf fusion at around E14.75. The MES gradually disintegrates and the palatal shelf mesenchyme becomes one confluent structure. At the same time, the secondary palate also fuses with the primary palate anteriorly and with the nasal septum anterodorsally, to form a complete palate by E17. 159 As a result, palate development involves a series of coordinated tissue movement and remodeling that culminates in the joining of initially separated tissues. Disruptions in this process due to gene mutations or other environmental factors can therefore cause cleft palate, which is one of the most common craniofacial birth defects in human. 160 For example, mutations in Tgfb3 or Irf6 affect proper dissolution of MES and epithelial adhesion, resulting in failed palate fusion in both humans and mice. [161][162][163][164][165] In fact, mutations in Irf6 are the most common cause of human cleft lip and/or palate. 166 Multiple aspects of palatogenesis are thought to require coordinated generation of cell and tissue level forces to direct their movements (Fig. 4). During shelf elevation, the anterior palatal shelves undergo a rapid upward swinging motion to bring the palates from their vertical position to the horizontal position; while the medial and posterior portions of the palatal shelves achieve elevation by controlling the flow and organization of cells to alter the tissue shape. 
167,168 While the exact mechanism remains unresolved, multiple physical properties, such as alterations in the mesenchymal cell density, 169 regional changes in proliferation, 170 and remodeling of the ECM and cytoskeletons, 171,172 have been hypothesized to generate the elevating forces. For instance, in Osr2 null mouse embryos in which shelf elevation is delayed, proliferation is specifically reduced in the medial half of the downward-pointing palatal outgrowths. 170 Similar phenotypes have also been observed in mutant embryos with conditional Fgfr1 deletion in the cranial neural crest lineage. 173 Reduced proliferation can in theory impair the horizontal expansion of the palatal shelves and affect the anisotropic pressure buildup that drives shape changes. Another possible contributor to shelf elevation is ECM remodeling. One of the main components of the palatal shelf ECM is the glycosaminoglycan hyaluronic acid (HA), which accounts for about 60% of the ECM mass. 174 Prior to shelf elevation, HA accumulates in the palatal mesenchyme and it has been postulated that hydration of HA expands the ECM volume and provides the pressure necessary to elevate the palatal shelf. 175,176 This idea was recently queried by experiments inhibiting HA synthesis specifically in the shelf mesenchyme via Osr2-Cre-mediated conditional deletion of Has2 (encoding hyaluronic acid synthase 2). 177 In these mutants, palatal shelves are reduced in size and undergo delayed, but complete, elevation. This result thus shows that while HA accumulation is intrinsically required for the expansion of the palatal shelf prior to shelf elevation, it is not the only source for generating the elevating force. Interestingly, embryos with Has2 deletion in both the shelf and mandibular mesenchyme, or just in the mandibular mesenchyme, exhibit mandibular hypoplasia, as well as failed shelf elevation, which can be rescued by culturing the mutant maxilla without the mandible and tongue. 177,178 Therefore, the mandible and the tongue also require HA for their correct morphogenesis, which is permissive for proper shelf elevation. When malformed, these structures remain as physical obstructions and secondarily block the elevating shelf. Conversely, forces generated by HA hydration within the palatal tissue may help overcome the initial blockade by the tongue during normal palatogenesis, allowing the palate to displace the tongue dorsally in a timely manner; although this force is not required for the eventual shelf elevation. 178,179 Collagen organization also appears to be important for palatal shelf elevation, as deletion of the collagen crosslinker, lysyl oxidase-like 3 (LOXL3) results in failed elevation. 180 Similarly, in mouse embryo mutants lacking the transcription cofactors YAP and TAZ in the palatal mesenchyme, palatal elevation is delayed and the expression of Loxl4 that encodes another lysyl-oxidase family protein, LOXL4, as well as the expression of collagen proteins, are both reduced. 112 These results thus highlight the importance of ECM remodeling during palatogenesis, although its mechanistic regulation remains an important open question. In addition, given the role of YAP/TAZ in mechanotransduction, 104 it is plausible that YAP/TAZ may be part of the mechanical feedback loop that both senses and modulates the mechanical property of the developing palate. 
It should be noted that cartilage-specific conditional deletion of Yap/Taz using Col2a1-Cre also results in cleft palate in mice, possibly due to malformed Meckel's cartilage that prevents proper tongue descent. 181 However, as Col2a1-Cre activity is in fact detectable in a subset of the posterior palate mesenchyme at E12.5 and mutant palatal shelves fail to elevate and fuse in cultured explants of whole maxillae without mandibles and tongues, 112 the function of the mandible and the tongue to physically block shelf elevation in this context needs to be further examined. Finally, besides HA and collagens, several other ECM molecules have also been found to be expressed in the developing palatal tissue. For example, Tenascin-C and Tenascin-W are predominantly expressed in the medial portion of the shelf mesenchyme prior to its elevation, potentially contributing to differential mechanical properties along the mediolateral axis of the tissue. 171 The tenascin meshwork also aligns with actin bundles and the long axis of nuclei, which are oriented toward the nasomedial wall of the elevating middle and posterior palatal shelves. These observations thus suggest that actomyosin contractility and tissue material property may play an important role in shaping the middle and posterior palatal shelf during elevation. Future studies determining the functional requirement of actomyosin-based cellular forces in shelf reorientation and the role of tissue material properties in modulating this process will further our understanding of this decades-old question of how palatal shelf elevates. While the ECM plays an important role during shelf elevation, apoptosis and actomyosin-driven cellular extrusion are integral to the fusion of palatal shelves. Over the past decades, we have gained significant insights into the cellular processes facilitating palatal fusion. Three possible mechanisms have been proposed to drive MES dissolution: (1) epithelial-mesenchymal transition (EMT), [182][183][184][185] (2) apoptotic cell death, [186][187][188] and (3) cell migration.. 189,190 Among them, apoptosis is perhaps one of the most researched mechanisms for MES removal. When palatal shelves come in contact at the midline, apoptosis is triggered in the MES, and signaling through retinoic acid and Tgfβ3, as well as Irf6 function, have been shown to be critical for inducing apoptosis in MES cells and palatal fusion. 187,[191][192][193][194] Consistent with these results, 45% of the Bok -/-;Bax -/-;Bak -/triple knockout mice, where intrinsic apoptosis is blocked, exhibited complete cleft palate. 195 However, fusion at MES was not specifically examined in these mutants, leaving questions on whether the palate phenotype is caused by defects in other steps of palatogenesis or by non-tissueautonomous effects. How then is regulation of apoptosis integrated with cellular processes that drive the merging of epithelial cells during MES formation? This is in part achieved by actomyosin tension-driven cellular convergence and extrusion. Live imaging of mouse mutants with conditional deletion of non-muscle myosin heavy chains IIA and IIB in the palate epithelium showed that actomyosin contractility is required for cell intercalations towards the midline, thus displacing cells from the center of the initially multi-layered MES towards the oral surface. 196 Similarly, drug inhibition of MyoII upstream regulators, ROCK and myosin light chain kinase (MLCK), in explant culture also blocks cell interaction and palatal fusion. 
196 Therefore, actomyosin tension permits coordinated cellular rearrangement to promote the thinning of the epithelium. Concurrently, as more cells are displaced towards the periphery of MES, they would experience increased crowding and are actively extruded from the epithelium. In this context, actomyosindependent formation of cellular rosettes facilitates extrusion of both apoptotic and live cells, and the mechanosensitive Piezo ion channels have been found to promote this process, possibly in response to increased cellular crowding. 196 Cellular forces generated by actomyosin contraction is therefore critical for palatal fusion at multiple levels. Later, following secondary palate fusion, the pressure generated by infant suckling, a mammalian-specific feeding behavior, has also been linked to the formation of a temporary cartilaginous growth plate-like structure in the mid-palatal suture that otherwise ossifies primarily through intramembranous ossification. 197 Using finite element modeling, the computed patterns of suckling-generated distortional and hydrostatic strains in palates correlate with patterns of chondrogenic gene expression. In addition, different parts of the palate structure exhibit distinct mechanical properties, 197 consistent with the spatiotemporal regulation of various ECM proteins during palatogenesis. 172 Together, studies discussed here demonstrate that forces of different types and at various scales regulate multiple aspects of palate development. Future research combining genomic, biochemical and biomechanical approaches will help advance our understanding of the mechanical control of palatogenesis, as well as the genetic and cellular responses to physical cues. These efforts will in turn inform targetable mechanical pathways that drive normal palate elevation and fusion, and guide us towards therapeutic intervention to prevent cleft palate birth defects. Jaw and temporomandibular joint (TMJ) The development of the lower jaw first becomes apparent when cranial neural crest-derived mesenchymal cells differentiate into chondrocytes and form a rod-shaped cartilage, known as Meckel's cartilage, at around E12.5 in mice. 198 Meckel's cartilage then extends in length at both ends of the cartilage. At the same time, mesenchymal cells neighboring Meckel's cartilage begin to condense and differentiate into osteoblasts, which undergo intramembranous ossification to form a set of bony tissues that subsequently fold over the gradually degenerating Meckel's cartilage. Functionally, Meckel's cartilage does not appear to be required for the initial formation of the mandible, as the mandibular ossification still takes place in the absence of Meckel's cartilage, as in Sox9 null embryos. 199 However, Sox9 mutant mandibles are smaller in size, suggesting that Meckel's cartilage may control the size and shape of the mandible as it develops. 200 Consistent with this, reduced mechanical integrity in the deformed Meckel's cartilage of Ctgf null mice leads to shortened mandibles. 199 At the proximal end of the mandible, the TMJ links the jawbone to the temporal bone of the skull, and enables mandibular movement and mastication. The TMJ includes the condylar head of the mandible and the mandibular fossa of the temporal bone; both of which arise from endochondral ossification. A fibrous articular disc further divides the TMJ into two compartments, separating the condyle and the fossa. 
TMJ development begins at E13.5 when mesenchymal cells condense to form the condylar and temporal blastema, which then grow towards each other while the disc forms in between as a separate condensation at E16.5. The secondary cartilage of the condyle also joins the developing mandible and produces new bones that sustain the continued growth of the mandible. Like many bones in the vertebrate body, jawbone and TMJ morphogenesis is closely linked to muscle functions. It is therefore not surprising that mechanical forces are important modifiers of mandible development and morphologies both in embryos and postnatally, thus in accordance with the Wolff's law, 201 which stated that bone shapes and structures depend on the functional forces of the muscles. In mouse embryos, jaw movement begins at E14 and restricting jaw motility by exo utero suturing of the jaw at E15.5 results in a smaller articular disc and a shorter but thicker mandible at E18.5 as a result of reduced chondroprogenitor proliferation and abnormal chondrocyte differentiation in the TMJ and condyle cartilage. 202,203 During bone development, feedback between Indian hedgehog (Ihh) and Parathyroid hormone-related peptide (PTHrP) is central to the regulation of chondrocyte proliferation and their expression is downregulated in the condyle cartilage of sutured mandibles. 204 Interestingly, Ihh expression can be induced by cyclic stress in cultured chondrocytes, 205 suggesting that the regulation of Ihh transcription is mechanosensitive and can respond to mechanical stimuli to tune bone growth during mandible development. In zebrafish, muscle functions have also been linked to jaw joint development as immobilizing muscles through anesthetization causes jaw joint dysmorphology, particularly in regions of high compressive strain. 206,207 In this context, Wnt signaling is activated by mechanical stress and biochemically transduces mechanical signals to regulate chondrocyte proliferation, migration, intercalation and cell morphology to shape the Meckel's cartilage and jaw joint. 208 Muscle forces continue to shape the mandible in postnatal animals. For example, muscle sizes and bite forces are associated with mandibular shape variations in humans, 209 and patients with reduced muscle function develop altered craniofacial morphology. 210 Similarly, decreasing masticatory load in mice by feeding them with a soft diet results in transgenerational inheritance of mandibular shape changes, although the exact mechanism is not understood. 211 Consistent with these observations, altering mechanical forces placed on mandibles and condylar cartilage by feeding animals a soft diet, trimming their teeth, or forced mouth opening, affects chondrocyte biology in several different animal models. [212][213][214][215][216][217][218] These studies showed that mechanical stress is required to promote chondrocyte proliferation, maintain adequate differentiation, and support ECM production. Akin to the developing mandible, Ihh expression is also responsive to mechanical loading in the adult condylar cartilage, 219 pointing to a common mechanism that enables continued adaptive changes in mandibular growth to altering mechanical environments. 
Importantly, as the primary cilia have been shown to be required for Ihh signaling activation in response to hydrostatic compression in cultured primary epiphyseal chondrocytes 220 and primary cilia are essential for correct TMJ development, 221 it will be interesting in the future to assess if primary cilia can mediate mechanical signals to control chondrocyte proliferation in the mandible. Differences in the mechanical load as a result of differential muscle patterning also have evolutionary consequences. For instance, when compared with quail and chick embryos, the relatively larger mandibular adductor muscles in duck embryos generate a species-specific mechanical environment that signals through FGF and TGFβ signaling to induce the formation of a duck-specific coronoid process for the adductor insertion on the mandibular bone. 68,222 Another example is the loss of TMJ articular disc in mammals with lost dentition and corresponding changes in masticatory muscles. 223,224 In monotremes, such as platypus, a primordial disc is formed but does not mature, and a similar phenotype is observed in mouse mutants with severely reduced cranial musculature due to Tbx1 deletion in the mesoderm. 225 As a result, species-specific muscle forces may participate in the evolutionary changes of disc formation in TMJs. Cranium The vertebrate cranium is composed of the cranial vault (or calvaria, including the frontal, parietal, and occipital bones) and the cranial base (including the ethmoid, sphenoid, temporal, and part of the frontal and occipital bones), and together these bones enclose and protect the brain within. The anatomies of the cranium and the brain are well integrated and they accommodate each other in terms of the volumes and the shapes, a result of coordinated growth during development. 226 While the brain is derived from the neuroectoderm, the cranial bones are derived from mesenchymal cells that originate from either the cranial neural crest (e.g., progenitors for the frontal bone) or the head mesoderm (e.g., progenitors for the parietal bone). 227 In mouse embryos, these mesenchymal cells begin to condense at E12.5 and form rudiments of frontal and parietal bones above and posterior to the eye, respectively. 228,229 Next, calvarial rudiments undergo lateral and upward expansion as a result of osteogenic precursors migrating out from the bone primordia. 229,230 Calvarial bones are then formed through a process known as intramembranous ossification when osteoblasts in the rudiments further differentiate and directly lay down matrices to initiate bone mineralization without going through an intermediate cartilaginous step. 228 The expanding cranial bones subsequently approach each other. At the site of bone approximation, the opposing osteogenic bone fronts containing osteogenic progenitors and the interposed undifferentiated mesenchymal cells then become the developing suture. 231 While the brain enlarges in size throughout embryonic and postnatal development, the skull must also expand accordingly. The sutures, as fibrous joints between cranial bones, remain patent (or unfused) during this process and function as an active site for new bone formation that enables skull expansion. 232 In embryos, this is achieved through maintenance of proliferating osteoprogenitors in the osteogenic bone fronts of the cranial bones, which can generate new osteoblasts and add new bones appositionally. 
233 In postnatal animals, the suture mesenchyme has been shown to retain a group of mesenchymal stem cells expressing Gli1, Prx1, and Axin2, and suture stem cells are responsible for the postnatal growth and turnover of the calvaria, as well as injury repair. [234][235][236] Maintaining suture cells in an undifferentiated state is therefore critical for the co-development of the cranium and the brain. Several signaling pathways, including Fgf, BMP, Notch, Ephrin, Wnt, and Hh, are all key regulators in this process and mutations in genes encoding components of the pathways result in pathological fusion of the sutures, or craniosynostosis, disrupting the normal morphology and development of both the cranium and the brain. 237,238 Apart from biochemical signals, it is important to also consider the role of tissue mechanical forces in controlling suture patency and cranial morphology (Fig. 5), given that mechanical signals regulate bone development elsewhere. 239 Beginning as early as E13 in mice, the calvaria is physically connected to the brain via the dura mater that is part of the meninges. While the dura mater is a source for secreting biochemical ligands to control both ossification and suture patency, 240,241 it can also in theory relay mechanical forces induced by the increasing brain volume to control the biology of the overlying mesenchymal and bone cells, as originally proposed by Moss. 242 The idea is that brain enlargement within the confined space of the skull can gradually generate pressure and deform the ECM and cells in the developing cranium, which would experience a tensile strain (mostly quasi-static, or very slow). Indeed, measuring the mouse intracranial pressure showed that the pressure increases with age in postnatal animals between P3 and P70 as brain increases in volume. 243 A measurable tensile strain is present in both the dura mater and sutures, although that decreases with age presumably due to increased suture stiffness. 244,245 In human patients with hydrocephalus, excessive shunting of cerebrospinal fluid can cause premature fusion of sutures (synostosis) and this has been postulated to result from reduced intracranial pressure and decreased tensile strain at the suture. 246 Similarly, synostosis is associated with other pathological conditions, such as microcephaly or intrauterine head constraint, 247 where sutural strain is also likely diminished. These observations thus indicate that tensile forces may play a role in regulating calvarial development and suture fusion. Most of our understanding of how tensile forces regulate sutures comes from experiments applying ectopic forces with the use of loaded helical springs to expand sutures in calvaria explants or directly on cranial bones in vivo. 248,249 These experiments showed that cells in the suture can respond to increased tension and orient themselves in the direction of the force, as well as alter their proliferation and differentiation potential to expand the bones. In this context, sutural mesenchymal cells undergo increased proliferation in response to tension. 250,251 A corresponding increase in the expression of Insulin-like growth factor I (IGF-1), its receptor, and FGF receptors in midsagittal cells, along with augmented FGF2 protein release from the suture, all indicate that tension may control sutural cell proliferation through IGF and FGF signaling [251][252][253] (Fig. 5). 
Tensile forces also induce TBX2 expression in midsagittal cells, where TBX2 may function to maintain the undifferentiated state of mesenchymal cells and suture patency by inhibiting the expression of the gap junction protein Connexin 43 (GJA1) that normally promotes osteogenic differentiation. 254,255 Concurrently, tensile strain promotes BMP4 expression in mesenchymal cells and their differentiation towards the osteoblast lineage, as evidenced by the increasing number of osteopontin (OPN)-expressing and osteocalcin (OCN)-expressing cells that are recruited to the lengthening osteogenic bone fronts. 250,256,257 α-adaptin C, a component of the adapter protein 2 (AP-2) complex for clathrin-dependent endocytosis, has also been found to be upregulated in mesenchymal cells and may play a role in modulating signal transduction to promote osteogenic differentiation, as blocking endocytosis suppressed tensile force-induced osteoblast differentiation. 258 An interesting observation from another study is that stretching sutures results in an immediate intracellular Ca 2+ influx. 253 While the functional significance of this Ca 2+ concentration change remains unclear in this context, Ca 2+ influx can lead to osteoblast differentiation elsewhere. 259,260 As conditional deletion of the mechanosensitive Ca 2+ ion channel Piezo1 in osteoblasts causes incomplete closure of cranial sutures, 261 it is intriguing to speculate that tissue forces may signal through Piezo1 and its downstream Ca 2+-dependent signaling to regulate cell differentiation in sutures. However, the role of endogenous tensile (or compressive) stresses during calvarial and suture development remains unclear, and future experiments studying the functional requirement of these forces by perturbing force generation or transduction are an important next step.

In addition to the quasi-static strain discussed above, cranial bones are also attached to muscles, which exert forces in a cyclic pattern, such as during feeding. While limited data suggest that muscle forces are dispensable for the formation of sutures during embryonic development (and thus different from synovial joints, like the TMJ, in that respect), 262 muscle loading in postnatal animals can modulate suture morphology and its interdigitation patterns. For instance, surgical excision of temporal muscles can cause reduced complexity of the sagittal suture interdigitations in rats. 263 Animals applying less masticatory force, either from eating a soft diet or due to absence of tooth eruption, also develop structurally simpler sutures, sometimes with synostosis. 264,265 On the contrary, when bite forces increase, such as in the Gdf-8 null mice that have lost the myogenesis inhibitor myostatin and form significantly enlarged jaw muscles, there is increased suture complexity. 266 In the same mutants, age-dependent changes in cranial vault morphology have also been observed, suggesting that muscle forces can remodel calvarial bones. 267,268 To specifically study the effects of cyclic forces on cranial development, a series of experiments were conducted by applying ectopic cyclic tensile or compressive forces to animals for a period each day. [269][270][271][272][273][274] When compared with sham controls and animals receiving static loading, cyclic forces, regardless of tension or compression, induce suture widening, an increased number of suture cells, and heightened osteogenesis. 269,273 Cyclic forces also trigger expression of the matrix metalloproteases MMP-1 and MMP-2 at the suture, which are important for bone mineralization, as well as craniofacial and suture development. 271,275 Interestingly, suture cells isolated from neonatal rats are mechanosensitive to cyclic tension in culture and display increased osteogenesis with upregulated RUNX2 and OPN expression. 276 This mechanically-induced osteogenic differentiation program depends on ROCK activity, which promotes nuclear TAZ localization and its subsequent activation of Runx2 expression. It is conceivable that the same mechanotransduction pathway may be responsible for mediating mechanical signals in vivo to regulate suture osteogenesis. In addition, because actomyosin tension and TAZ localization can be modulated by substrate stiffness, 104 and stiffer substrate promotes suture cell differentiation, 277 it will be important in the future to understand how tissue forces remodel suture ECM compositions and stiffness, and how changes in suture material properties modulate signaling changes, including those mediated by the Hippo pathway and Piezo ion channels, to control suture cell differentiation.

Fig. 5 Integration of mechanical and biochemical signals at cranial sutures. In the developing calvaria, mesenchymal cells in the suture midline are proliferative and give rise to osteoprogenitors and osteoblasts in the osteogenic front. The calvaria sits on top of the dura mater and experiences a quasi-static tensile strain (blue arrows) due to the expansion of the growing brain underneath and the intracranial pressure. Such force then signals through FGF and IGF signaling to maintain mesenchymal cell proliferation, as well as TBX2 to inhibit GJA1 and premature differentiation. In the osteogenic front, tensile forces signal through BMP4 and Ca 2+ influx to promote osteogenesis. α-adaptin C-dependent endocytosis also functions downstream of the tensile stress to promote osteogenic differentiation, possibly by enhancing BMP signals. Cyclic forces generated by masticatory muscle contraction promote both mesenchymal proliferation and osteogenic differentiation (red arrowheads), leading to suture widening.

CONCLUSION

While there has been progress in understanding the role of mechanical inputs in regulating the development of various craniofacial structures, many outstanding questions remain. For example, how do cells sense and transduce mechanical information to regulate gene expression? In addition to YAP and Piezo proteins, forces transmitted through cytoskeletons and nuclear membrane complexes can directly deform the nucleus and impact chromatin organization. 278 Given that mutations in several nuclear envelope proteins, such as lamins, can cause craniofacial defects, 279,280 it will be important to investigate the role of nuclear mechanotransduction during craniofacial development. Furthermore, what are the signaling cues that induce mechanical anisotropy and inhomogeneity during organ development? What is the feedback mechanism that modulates the amount and direction of forces at both cellular and tissue levels to achieve adequate shape changes? How are mechanical signals regulated differently to generate diverse morphologies across different species and during evolution? By integrating genetic and biochemical approaches with novel biomechanical techniques, such as oil microdroplets and magnetic beads to quantify absolute force magnitudes and apply forces locally, 81,281,282 we are closer than ever to addressing these questions.
A deeper understanding of the mechanical control of craniofacial morphogenesis and development will ultimately contribute to novel strategies for manipulating organ-specific progenitor cells, bioengineering tissues with the correct shape and architecture, and advancing stem cell-based regenerative therapies that will transform patient treatment.
Research into esterification of mixture of lower dicarboxylic acids by 2-ethylhexan-1-ol in the presence of p-toluensulfonic acid Regularities of esterification of the mixture of lower dicarboxylic acids (succinic, glutaric, adipic) by 2-ethylhexan-1-ol in the presence of catalysts – p-toluensulfonic and sulfuric acids under non-stationary conditions were studied. It was found that in the presence of mineral acid, the reaction flows at a lower rate. Application of benzene as a substance that facilitates separation of water, formed in the esterification reaction, makes it possible, due to a lower reaction temperature, to decrease energy consumption of the process at an increase in conversion of dicarboxylic acids from 95.8 to 99.5 %. It was shown that the use of activated carbon of different brands simultaneously with catalysis by p-toluensulfonic acid with virtually the same effectiveness can decrease chromaticity intensity of esterification products by more than three times. The use of finely dispersed activated carbon 208CP and DCL 200 compared with coarse-grained activated carbon BAU-A additionally provides higher intensity of esterification reaction due to improvement of removal of water from the reaction mixture. It was found that an increase in the content of activated carbon DLC 200 by more than 0.3 % by weight in the reaction mixture contributes to a sharp decrease in the process intensity. This influence is explained by neutralization of a part of the catalysts by alkaline components of activated carbon, which decreases its active concentration and inhibits the reaction. Optimum conditions of the esterification process were proposed. The authors determined dependences of density and kinematic viscosity of the mixture of diesters of succinic, glutaric and adipic acids, and 2-ethylhexan-1-ol, separated from the esterification reaction products, on temperature and described them with regression equations Introduction Products of esterification of aliphatic dicarboxylic acids are used as high-boiling solvents, lubricants, plasticizers of polymeric materials, etc. Constant extension of assortment of diesters by the synthesis of new individual compounds or obtaining of mixtures of diesters require setting optimal conditions for each particular process. Specifically, promising raw materials for obtaining diesteric plasticizers include by-products of production of adipic acid -so-called lower dicarboxylic acids (LDA). This is a mixture of succinic, glutaric and adipic acid, which after clearing from a catalyst of cyclohexanol oxidation can be exposed to esterification with obtaining relevant reaction products. Establishing of conditions that provide the maximal yield of diesters of LDA and 2-ethylhexan-1-ol, and determining the properties of the obtained substances will make it possible to solve the problem of obtaining and subsequent application of another valuable chemical product. Accordingly, it is relevant to determine the influence of different factors on the esterification reaction, which determine maximum conversion of dicarboxylic acids as the most expensive reagent. The choice of a catalyst, which should be both affordable and highly reactive, is also important. Separation of reaction products and determining their phys-ical-chemical properties are important in terms of practical use of the obtained diesters. 
Literature review and problem statement Today, a whole range of esters of methanol and a mixture of succinic, glutaric and adipic acids -a by-product of manufacturing adipic acid -are produced. We know about industrial production of the whole range of esters of methanol and mixtures of succinic, glutaric and adipic acids, which is a by-product of obtaining adipic acid [1]. These products are characterized by relatively high boiling and flash points, solubility, resistance, and low viscosity and toxicity. That is why these esters are used as: -solvents; -plasticizers; -raw materials for obtaining long-chain water-soluble polyamides that with epichlorohydrin form water-resistant resins for impregnation of paper; -substances for washing off paints; -intermediate polymer links, etc. [1]. We know about the use of wastes of chemical production for obtaining diester plasticizers. Specifically, water-acid and alkaline effluents, produced at the stage of cyclohexane oxidation, are used as raw materials. Then plasticizers, as well as light ester fraction and solutions of salts, are separated from the reaction mixture [2]. Wastes of production of butyl alcohols (ester distillate and bottoms of rectification of alcohols) and dietary ethanol (fusel oil, ester-aldehyde fraction) are also promising raw material for synthesis of diesters [3,4]. One of the industrial plasticizers that are used for plasticizing PVC, polyvinyl acetals, esters of cellulose, polystyrene, acrylic and other synthetic resins is dioctyladipinate (DOA). It is a highly effective diester plasticizer, which provides compositions with cold-, wear-and light resistance, contributes to low viscosity and high viscous stability of plastisols [5]. Compared with esters of phthalic and phosphate acids, diesters of glutaric and succinic acids easily decompose biologically, do not show carcinogenic properties, are less toxic and do not deplete the ozone layer [6]. The use of waste and by-products of a number of productions as raw materials for obtaining diesters will not only provide the market with new chemical products. Lower cost of carbonic acids or alcohols, which are contained in by-products or wastes, will also allow decreasing the cost of diesters. At the same time, information about getting diesters of mixture of LDA and 2-ethylhexan-1-ol, which according to its properties correlates with DOA was not found in literary sources. One of the most important factors that determines conditions of the process and provides for high intensity of the process of obtaining diesters of dicarboxylic acids and alcohols, is a catalyst for the esterification reaction [7]. This catalyst also found widespread use in the esterification processes. In particular, pTSA efficiently catalyzes esterification of secondary alcohols and acids with formation of stable esters in the absence of a solvent in the presence of formed reaction water [9]. While using pTSA as a catalyst of esterification of phthalic anhydride, the yield of dibutylphtalate exceeds 96 % at the mole ratio of phthalic anhydride: butane-1-ol of 1:2.2. To achieve this yield, the amount of the catalyst should be 0.3 mol % compared with the number of phthalic anhydride, the reaction time is 3 hours, and the temperature is 418-423 K [10]. In the presence of catalytic amount of 0.5 mol % of pTSA, both aromatic and aliphatic alcohols efficiently react with carbonic acids with moderate and high yields of esters (from 55 to 92 %). 
Maximal yield of esters is observed for alcohols containing electron donor groups [12]. High yield of methyl caffeinates (84.0 %) in reaction of esterification of caffeic acid by methanol, catalyzed by pTSA, is achieved at molar ratio of methanol: acid of 20:1, temperature of reaction of 338 K, ratio of mass of a catalyst to substrate of 8 % and reaction time of 4 hours [12]. Through esterification of adipic acid by aliphatic alcohol С 4 -С 5 and cyclohexanol in the presence of pTSA, one obtains a mixture of diesters with a maximum yield of isobutyl-and butylcyclohexyl adipinate of 50.6 and 47.6 % and isoamyl-and amyl cyclohexyl adipinate of 51.7 and 59.8 %, respectively. The catalyst provides conversion of dicarboxylic acid of more than 96.8 % [13]. Combination of catalysis by pTSA of esterification reaction of non-saturated aliphatic acid C 9 -C 18 by ethanol and the action of ultrasound for 20 min at temperature of 298 K allows provision of esters' yield of 73-98 % depending on the structure of an acid [14]. For pTSA, there are no disadvantages, characteristic for traditional catalysts of esterification -mineral acids. In particular, in the presence of pTSA, dehydration of alcohols to olefins and resinification of organic compounds do not occur. However, pTSA often contains a significant amount of impurities that can contaminate esterification products, so it is advisable to use it combined with activated carbon [15]. Specifically, paper [16] shows that the use of activated carbon of OU-A brand allows decreasing chromaticity of LDA esterification reaction products by the mixture of isoalcohols С 4 -С 5 of fusel oil under conditions of pTSA catalysis from 5.9 to 1.1 mg of І 2 /100 сm 3 of the reaction mixture. Thus, modern trends of industrial organic chemistry are aimed at expanding raw material resources. This will make it possible to increase the range and decrease the cost of chemical products, including diesters of dicarboxylic acids. The application of such an active catalyst as pTSA must also provide for efficiency of the esterification process. That is why establishing regularities of reaction of LDA esterification by 2-ethylhexane-1-ol in the presence of pTSA and determining of optimal conditions for the process is both of theoretical and of practical interest. The aim and objectives of the study The aim of present research was to establish the regularities of LDA esterification by 2-ethylhexan-1-ol in the presence of pTSA as a catalyst under different conditions of the process and to identify and determine the properties of the obtained mixture of diesters. To accomplish the set goal, the following tasks had to be solved: -to determine the influence of benzene, the type and concentration of a catalyst, the type and amount of activated carbon on reaction duration, conversion (C) of carboxylic groups (CG) of reagents and chromaticity intensity of the reaction mixture; -to establish the optimum conditions of the process of obtaining the mixture of diesters of succinic, glutaric and adipic acids and of 2-ethylhexan-1-ol; -to determine the composition of the selected mixture of diesters and its physical properties. 1. Materials and equipment used in the experiment We used as reagents in the esterification reaction: -the mixture of LDA -by-products of production of adipic acid, purified, of brand A, TU U 24.1-05607824-045:2007 of the following composition (% by weight): succinic -29.2, glutaric -37.8, adipic -33.0. 
The average molar mass of this mixture was 132.5 g/mol; -2-ethylhexan-1-ol (EH) of the highest grade, GOST (State standard) 26624-85. To enhance removal of water, which was formed during the esterification reaction, benzene of brand reagent grade, GOST (State standard) 5955-75 was added to reagents. 2. Methodology of the experiment and analysis of the reaction mixture and mixture of diesters The reaction of esterification of LDA by 2-ethylhexan-1-ol was conducted under non-stationary conditions of removal of water, which was formed during the reaction. Water was removed in the form of an azeotrope either with EH or with benzene. The reagents and the catalyst were loaded in a round-bottom flask, connected with the Dean-Stark trap, a reverse fridge, a thermometer and put on the bath of silicone oil, heated to temperature of 433 K. The beginning of the reaction was determined after reaching the temperature of 373 K. The process was going on at intense stirring of the mixture both by a magnetic stirrer and due to boiling. The reaction was carried out till complete cessation of accumulation of the water layer in the Dean-Stark trap. In the course of the reaction, the samples of the reaction mixture were taken in order to determine the acid number. After completion of the reaction, where more than 99 % CG conversion was achieved, the products of the experiments were neutralized with 2 % solution of soda, washed with distilled water to neutral pH value; benzene and EH that did not react were removed from them by steam. The organic layer and the obtained mixture of diesters were separated from the water layer by decanting in a graduated funnel, respectively. Acid number (AN) of the reaction mixture and the mixture of diesters, selected from it, was determined in accordance with the procedure [18]. Relative error of the analysis did not exceed ±2.5 %. Conversion of carboxylic groups of LDA mixture was calculated by the initial AN of the reaction mixture and the AN of the reaction products. Chromaticity intensity of the reaction mixture and of separated diesters was determined by photoelectrocolorimete KFK-2 in the 20-mm long cuvette at the light wavelength of 440 nm. Chromaticity of the solution was expressed by the iodine scale in mg of I 2 /100 сm 3 of the substance. The composition of the mixture of diesters was determined by the gas-liquid chromatograph "Tsvet-100" with the thermal conductivity detector. The length of the column was 1 m, its diameter was 3 mm, the fixed phase was 5 % of Silicone SE30 on Chromaton N-AW. Consumption of gas-carrier of helium was 3 dm 3 /h, the volume of the analyzed sample was 2 μL. The current strength on the detector was 120 μA, the temperature of the evaporator was 523 K, of the detector -503 K, of the column -453 K. Viscosity of the mixture of LDA diesters and 2-ethylhexan-1-ol, separated from the reaction products, was determined by the viscometer VPG-2 by the time of flowing through the capillary (accuracy ±2.0 %), relative density was determined by the areometer AN (accuracy ±0.0005). All measurements were performed under condition of 10-15-minute thermostating of the mixture of diesters. Results of research into esterification of lower dicarboxylic acids by 2-ethylhexan-1-ol We studied the influence of benzene, the type and concentration of a catalyst, the ratio of reagents, the type and the amount of activated carbon on the reaction duration, conversion of carboxylic groups of reagents, chromaticity of the reaction mixture. 
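The conversion values discussed in the results below are derived from the acid numbers measured during the reaction, as described in the methodology above. A minimal sketch of that calculation follows, assuming the usual convention that conversion is taken as the relative drop of the acid number from its initial value; the exact formula is not spelled out in the text, and the function name and numbers in the example are illustrative rather than taken from the paper.

```python
def cg_conversion(an_initial, an_current):
    """Conversion of carboxylic groups (%) estimated from acid numbers.

    Assumes the conventional definition: the relative decrease of the acid
    number (mg KOH per g of reaction mixture) versus its initial value.
    """
    return (an_initial - an_current) / an_initial * 100.0


# Illustrative numbers only (not from the paper): an acid number falling
# from 95 to 0.5 mg KOH/g corresponds to roughly 99.5 % conversion.
print(round(cg_conversion(95.0, 0.5), 1))  # 99.5
```

This sketch ignores the small mass change caused by water removal during the reaction, which is a second-order correction at the near-complete conversions reported here.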
It is known that aliphatic alcohols and water can form heterogeneous azeotropic mixtures whose boiling point is lower than that of either the alcohol or water. In particular, for 2-ethylhexan-1-ol it amounts to 372.25 K, with a water content in the mixture of 80 % by weight [19]. Accordingly, during esterification, water is removed from the reaction mixture and separates from the alcohol as a distinct layer in the Dean-Stark trap. Carrying out LDA esterification by 2-ethylhexan-1-ol in the presence of pTSA showed that, at a mole ratio of EH:LDA of 2.65:1 and a pTSA concentration of 1.3·10⁻² mol/dm³, CG conversion of over 98 % is achieved within 60 minutes of the reaction (Fig. 1, curve 1). When 15.2 % by weight of benzene, which is capable of forming a similar azeotropic mixture with water, is added, the esterification reaction proceeds more slowly at the initial stage because the temperature is 20-30 K lower (Fig. 1). However, water removal from the reaction mixture and the shift of equilibrium toward diester formation in the presence of benzene are clearly more effective: within the same time (60 min), CG conversion reaches almost 100 % (Fig. 1, curve 2). Comparison of the effectiveness of catalysis of esterification of the mixture of lower dicarboxylic acids by pTSA and by sulfuric acid indicates that in the presence of the mineral acid the reaction proceeds at a lower rate. Even when the concentration of sulfuric acid is twice as high, the time required to achieve CG conversion of more than 99 % is 180 min, three times as long as under pTSA catalysis (Fig. 2). It was found that an increase in pTSA concentration from 7.2·10⁻³ to 1.3·10⁻² mol/dm³ has a significant impact on the change of CG conversion at the initial reaction stage, but at higher values of LDA conversion this impact decreases (Fig. 3). In particular, after 180 minutes of reaction, the difference between values of C(CG) is only ~3 %. The use of different brands of activated carbon in the process of LDA esterification by 2-ethylhexan-1-ol in the presence of pTSA also affects the change of CG conversion over time (Fig. 4). Activated carbons 208 CP and DCL 200 are finely dispersed and are characterized by a specific surface area of 1150-1250 m²/g [20]. Activated carbon BAU-A has particle dimensions of 1-3.6 mm and a specific surface area of 700-800 m²/g [21]. It is reasonable to assume that the smaller particle size and larger specific surface area of carbons 208 CP and DCL 200 contribute to more intensive boiling and water removal from the reaction mass. As a result, the reaction of LDA esterification by 2-ethylhexan-1-ol is more intensive in the presence of carbons 208 CP and DCL 200: CG conversion reaches 30-35 % within the first 10 minutes of reaction (Fig. 4). In the absence of activated carbon, and in the presence of 0.3 % by weight of BAU-A, CG conversion is only 10 % within 10 minutes of reaction. An increase in the content of activated carbon DCL 200 in the reaction mixture from 0.3 % to 0.6 % by weight, on the contrary, contributes to a sharp decrease in the process intensity (Fig. 5): the duration of LDA esterification by 2-ethylhexan-1-ol increases correspondingly by a factor of 3. This effect can be explained by the fact that the pH value of carbon DCL 200 is 6-9. Alkaline components of the activated carbon evidently neutralize a part of the catalyst, which decreases its active concentration and inhibits the reaction.
The mixture of diesters was separated from the reaction mixture of products of LDA esterification by 2-ethylhexan-1-ol according to the technique described in section 4.2. The chromatographic composition showed the following content of individual substances (% by weight): di-2-ethylhexyl succinate, 31.3; di-2-ethylhexyl glutarate, 37.8; di-2-ethylhexyl adipinate, 30.9. The AN of the mixture was 0.1 mg KOH/g. It was determined that the density of the mixture of LDA diesters decreases from 928 kg/m³ (at 292.7 K) to 876 kg/m³ (at 372 K). This change is described by a linear function of the temperature T (in K) of the mixture of diesters, with correlation factor r² = 0.999 (an approximate reconstruction from the quoted endpoints is sketched below). The kinematic viscosity of the mixture of LDA diesters decreases nonlinearly with increasing temperature, from 13.8 mm²/s (292.7 K) to 2.8 mm²/s (372 K). The equation describing this change (r² = 0.999) was derived by the least-squares method: ν = 15.222 − 1.4081T + 0.0612T² − 0.001T³, mm²/s, where T is the temperature of the mixture of diesters, K. These equations can be used in practical applications of the mixture of diesters. Fig. 6 shows the change in kinematic viscosity of the mixture of LDA diesters and 2-ethylhexan-1-ol with increasing temperature. For comparison, this characteristic was also determined for dibutyl adipinate and dibutyl phthalate, plasticizers that exhibit similar properties. Both in absolute value and in its change with temperature, the viscosity of the mixture of LDA diesters and 2-ethylhexan-1-ol is considerably closer to that of dibutyl phthalate than to that of dibutyl adipinate. In general, the physical parameters of the mixture of LDA diesters and 2-ethylhexan-1-ol correlate with the indicators of dioctyladipinate; in particular, density is 0.923-0.930 g/cm³, AN ranges from 0.04 to 0.1 mg KOH/g, and kinematic viscosity is 13-17 mm²/s [5].

Discussion of results of research into esterification by 2-ethylhexan-1-ol

Technological parameters of the process of LDA esterification by 2-ethylhexan-1-ol under different conditions are included in Table 1. We selected minimal values of reaction time, chromaticity intensity of the reaction products, and AN of the reaction mixture (and, accordingly, the maximal conversion of carboxylic groups of LDA) as optimality criteria. It should be noted that the use of benzene as a component of the reaction mixture, which improves water removal, allows CG conversion to be increased from 95.8 to 99.5 % (Table 1). The average and maximum temperatures of the reaction in the presence of benzene are also lower, which makes it possible to decrease energy consumption in the esterification process. Sulfuric acid is a less active catalyst for LDA esterification by 2-ethylhexan-1-ol, since only 89.4 % CG conversion is achieved in the presence of H₂SO₄ within 180 min of reaction, whereas in the presence of pTSA the value is 99.5 % within 60 min. A higher concentration of the mineral acid is also required to achieve the specified CG conversion. The use of pTSA as a catalyst provides CG conversion of over 98 % at a catalyst concentration of 1.3·10⁻² mol/dm³.
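The linear density equation itself did not survive in the text above. As a rough cross-check only, a straight-line fit through the two quoted endpoints gives approximately ρ ≈ 1120 − 0.66·T kg/m³ over 292.7-372 K. This is an assumption-based reconstruction, not the authors' least-squares regression over the full data set, and the variable names below are illustrative.

```python
# Rough reconstruction from the two quoted endpoints only (not the published
# regression): rho = 928 kg/m^3 at 292.7 K and rho = 876 kg/m^3 at 372.0 K.
T1, RHO1 = 292.7, 928.0
T2, RHO2 = 372.0, 876.0

SLOPE = (RHO2 - RHO1) / (T2 - T1)   # about -0.656 kg/(m^3*K)
INTERCEPT = RHO1 - SLOPE * T1       # about 1120 kg/m^3


def density_estimate(temperature_k):
    """Approximate density (kg/m^3) of the diester mixture at temperature_k (K),
    valid only within the measured range of 292.7-372 K."""
    return INTERCEPT + SLOPE * temperature_k


print(round(density_estimate(330.0), 1))  # interpolated value near mid-range
```

The coefficients of the authors' reported fit (r² = 0.999 over all measured points) may differ slightly; the sketch is meant only to make the quoted endpoint data directly usable.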
A decrease in the content of the catalyst in the reaction mixture increases the reaction duration by half and provides for 95.3 % CG conversion, the acid number of the reaction mixture is 9.1 mg KOH/g, which greatly increases LDA losses, which in such systems are found both as a small amount of unreacted acids, and mainly as monoesters [23]. The high value of chromaticity intensity of the reaction mixture (3.0-4.7 mg of І 2 per 100 сm 3 ) indicates possible course of side reactions and contamination of reaction products with impurities, contained in catalysts. Application of activated carbon of different brands allows us in the process of esterification of lower dicarboxylic acids by 2-ethylhexan-1-ol to decrease chromaticity of the reaction products under conditions of preserving the value of CG conversion of 98.0-99.4 %. At the same time, the use of the amount of activated carbon DCL 200 that is more than optimal (0.3 % by weight) inhibits the reaction and requires more reaction time to achieve high CG conversion. Results of the performed research correlate with the known regularities of processes of esterification of dicarboxylic acids by aliphatic alcohols. At the same time, they complement and extend the base of experimental data on the technology of diester plasticizers in terms of using new raw material resources and conditions for carrying out a specific technological process. Almost complete conversion of carboxylic groups of reagents of 99.4-99.5 %, achieved within 60-70 minutes under conditions of catalysis by p-toluensulphonic acid, will provide for high specific performance for diesters even if the esterification process is performed under periodic conditions. The use of benzene as a component of the reaction mixture has advantages both in provision of almost complete LDA conversion and in a decrease in heat consumption for removal of water, formed in the process of esterification. Results of research into esterification process with the use of activated carbon are also of practical importance. Such technique allows improving the quality of the mixture of diesters as finished product, because a decrease in chromaticity makes it possible to apply the products of LDA esterification by 2-ethylhexan-1-ol in manufacturing of colorless polymer products. Unfortunately, in the process of this research it was not possible to achieve almost complete decolorization of products of LDA esterification by 2-ethylhexan-1-ol. This sets the task of subsequent improvement of the process in this respect. Practical solution of the problem of obtaining the mixture of diesters with minimal chromaticity includes the search for varieties of activated carbon with better adsorption properties and research in other catalysts of the process that contribute to minimal progress of side reactions in the process of esterification. In general, indicated diesters can compete with industrial plasticizers, such as dioctyladipinate and dibutyl phthalate. Conclusions 1. We determined the influence of benzene, the type and concentration of a catalyst, the type and amount of activated carbon on technological parameters of the process of esterification of lower dicarboxylic acids by 2-ethylhexan-1-ol. It was shown that p-toluensulfonic acid is a more active catalyst for the reaction of esterification compared with sulfuric acid. 
It was established that the presence of benzene in the reaction mixture provides higher conversion of the dicarboxylic acids and milder process conditions, and that the addition of activated carbon to the reagents in the amount of 0.3 % by weight decreases the chromaticity intensity of the reaction mixture by almost 3 times.

2. It was found that the optimal conditions for esterification of lower dicarboxylic acids by 2-ethylhexan-1-ol are a p-toluensulfonic acid catalyst concentration of 1.3·10⁻² mol/dm³, a mole ratio of EH:LDA of 2.65:1, a benzene content in the reaction mixture of ~15 % by weight, and an activated carbon content of 0.3 % by weight. Under these conditions, an almost quantitative yield (over 99.5 %) of the mixture of diesters of succinic, glutaric and adipic acids and 2-ethylhexan-1-ol was achieved.

3. The composition and physical characteristics of the mixture of diesters of succinic, glutaric and adipic acids and 2-ethylhexan-1-ol separated from the reaction products were determined. The density of the mixture of diesters is 0.923-0.930 g/cm³, its acid number ranges from 0.04 to 0.1 mg KOH/g, and its kinematic viscosity is 13-17 mm²/s.
Evaluation of the safety of using harmonic scalpel during laparoscopic cholecystectomy in children: A preliminary report Background and objective In spite of being one of the most common surgical procedures performed in adults, laparoscopic cholecystectomy (LC) is relatively uncommon in the pediatric age group. Most surgeons prefer to dissect the cystic duct using a monopolar electrosurgical hook and occlude it with simple metal clips. Although the safety of using the ultrasonically-activated shears, e.g., harmonic scalpel for dissection of the gallbladder is confirmed in many studies, its efficacy in the closure of the cystic artery and duct in adults is still debatable. Furthermore, very few reports studied its safety in children during LC. The aim of our work is to study the safety and efficacy of ultrasonic shears in controlling the cystic duct and artery during LC in children. Materials and methods A prospective study was conducted from May 2017 to April 2020, where all children having symptomatic gallbladder stone disease were included in the study. HS was used as a sole instrument in gallbladder dissection as well as in controlling cystic duct and artery. No metal clips or sutures were used throughout the procedure. Results A total of forty-two children having symptomatic gallstone disease were included in the study. The main indication for LC was hemolytic anemia. Their age ranged from 3 to 13 years with a mean of 8.4 ± 3.25 years. All operations were completed laparoscopically, i.e., no conversion to open surgery was needed. The mean operative time was 40 ± 10.42 min. There were no intraoperative complications apart from gall bladder perforation in two cases during dissection from the liver bed while the postoperative recovery was smooth in all patients. Patients started oral feeding after 11.30 ± 3.01 h. The mean time for discharge was 25.47 ± 7.49 h, ranging from 14 to 48 h. Postoperative ultrasound for all cases showed no evidence of minor or major bile leaks or CBD injuries. Conclusion This is the first report to evaluate the use of HS as a sole instrument during LC in the pediatric age group. HS is a safe and efficient instrument that can be used alone in gallbladder dissection as well as in controlling cystic duct and artery during LC in children. Introduction In spite of being one of the most common surgical procedures performed in adults, laparoscopic cholecystectomy (LC) is relatively uncommon in the pediatric age group (1). Over the past two decades, the number of LC operations in children significantly increased because gallstone disease has been increasingly recognized in children and the spectrum of pediatric biliary tract disease changed considerably. Until recently, most gallstones in children were pigmented stones caused by hemolytic diseases such as thalassemia and hereditary spherocytosis (2). Nowadays, the occurrence of gallstone disease in children has risen, principally related to the epidemic of pediatric obesity. According to a study by Pogorelic et al., the average BMI of the population under observation was substantially correlated with the number of pediatric cholecystectomies. This likely shows a link between rising obesity rates and the incidence of symptomatic cholelithiasis in children (3). Most surgeons prefer to dissect the cystic duct using a monopolar electrosurgical hook and occlude it with simple metal clips. Alternatively, although uncommon, cystic duct ligation can be accomplished using a linear stapler, endoloops, or suture ligation (4). 
While the safety of using the ultrasonically-activated shears, e.g., the Harmonic scalpel (HS, Johnson & Johnson Co., Cincinnati, OH, United States), for dissection of the gallbladder is confirmed in many studies, its efficacy in the closure of the cystic artery and duct in adults is still debatable (5). The aim of our study was to assess the safety and efficiency of using HS for gallbladder dissection and cystic duct control in children during LC.

Materials and methods

This prospective study was conducted from May 2017 to April 2020 after approval by the Ethical Committee of the Alexandria Faculty of Medicine (IRB no.: 00007555, 16 February 2017). Informed consent was attained from all parents and legal guardians of the children included in the study. Children suffering from symptomatic gallstone disease were included in the study, while those having acute cholecystitis, common bile duct stones, a previous upper abdominal operation, or gallbladder tumors based on radiological findings were excluded. LC was performed under general anesthesia in all patients, with the patient lying supine in the reverse Trendelenburg position with the right side up, permitting gravity to assist in retraction and allowing the small bowel to fall away from the field. Four ports (three 5 mm and one 10 mm) were placed on the upper abdomen. A 5-mm port was inserted first through the umbilicus for insertion of the 5-mm 30° angle-view scope. Pneumoperitoneum was established to a pressure of 10-12 mmHg. Next, a 10-mm port was inserted below the xiphoid process, through which the 5-mm harmonic shear or the hook pencil could be introduced via a port reducer and the gallbladder extracted at the end. Two other 5-mm working ports were inserted in the right flank. The initial step was to retract the gallbladder in order to open the Calot cystohepatic triangle and to locate and skeletonize the cystic duct using Harmonic ACE®+ Shears (Ethicon Endo-Surgery, Inc., Cincinnati, OH, United States) at power level "5" (more cutting and less coagulation). The instrument was adjusted to power level "2" for the closure and division of the cystic duct (less cutting and more coagulation) (Figures 1, 2). To avoid damaging the common bile duct (CBD), the jaws of the HS were kept at a safe distance from it and remained closed until a click was heard and the gallbladder became detached from the cystic duct (Figure 3). All the minor branches of the cystic artery along the adjacent border of the gallbladder were cauterized. Finally, the gallbladder was dissected and removed from the liver bed, grasped with a toothed crocodile 5-mm grasper, and extracted through the 10-mm trocar beneath the xiphoid. The operative time, as well as any intraoperative and postoperative problems, were recorded. Patients were examined in the outpatient clinic at the end of the first postoperative week for a clinical assessment and abdominal ultrasonography to check for any probable collections. The clinical examination and abdominal ultrasonography were repeated at the end of the first and sixth postoperative months, along with blood tests, such as bilirubin, aminotransferase, gamma-glutamyl transferase, and alkaline phosphatase levels.

FIGURE 1. Delivery of the harmonic shear around the cystic duct.
FIGURE 2. Cutting of the cystic duct with the harmonic shear.
FIGURE 3. Both ends of the cystic duct after being cut by the harmonic shear.

The primary outcome of the study, the safety of using HS, was assessed by searching for a biliary leak or CBD stricture.
Secondary outcomes in the form of operative time, time to start oral feeding, and discharge were recorded as well. After data was fed to the computer, IBM SPSS software package version 20 (IBM Corp., Armonk, NY, United States) was used for analysis. Number and percent were used to describe qualitative data, whilst range (minimum and maximum), mean, standard deviation, and median were used to describe quantitative data. Results The present study included 42 children with symptomatic gallstone disease. A total of twenty-two patients were boys (52.3%), while 20 were girls (47.7%). Their age ranged from 3 to 13 years with a mean of 8.4 ± 3.25. The main indication for LC was hemolytic anemia in all cases except two, LC was done due to the presence of gallbladder polyps. All operations were completed laparoscopically, i.e., no conversion to open surgery was needed. The mean operative time was 40 ± 10.42 min, (range: 20-58 min. There were no intraoperative complications apart from gall bladder perforation in two cases during dissection from the liver bed. These were managed by retrieval of the spilled stones, adequate irrigation of the peritoneal cavity, and adequate antibiotic therapy. The postoperative recovery was uneventful in all patients. Patients started oral feeding after 11.3 ± 3.01 h (range: 7-18 h). The mean time of patients' discharge was 25.47 ± 7.49 h, (range: 14-48 h). Postoperative ultrasound examination was done for all cases at the sixth postoperative month where it showed normal CBD measurements and a clear surgical bed with no minor nor major bile leaks or CBD injuries. Discussion The majority of surgeons prefer to dissect the cystic duct using a monopolar electrosurgical hook and occlude it with simple metal clips in order to minimize bile leak. Nonetheless, these clips can migrate into neighboring structures, resulting in strictures due to foreign body response, act as a nidus for stone formation, and occasionally fall off leading to substantial morbidity (6,7). Although non-popular, many surgeons prefer to ligate the cystic duct using absorbable sutures to avoid such complications; however, this adds to the length of the procedure adding a technically demanding step in order to perform the three intracorporeal sutures (8). Ultrasonic coagulating shears were developed to allow hemostasis during laparoscopic surgery owing to their sealing effect, which is produced by coagulation of protein through high-frequency ultrasonic vibrations generating heat (9). In LC, the HS was investigated by many authors as an energy tool during dissection and removal of the gallbladder from the liver bed. What was debatable is the use of HS as a sole instrument in controlling cystic duct during LC. Bessa et al. (10) reported that the HS was as safe and effective as the more commonly used clip and cautery technique in achieving safe sealing and control of the cystic duct in the LC. Furthermore, they reported it was even superior to the latter in terms of shorter operative time and lower incidence of gallbladder perforation with subsequent bile leakage during dissection of gall bladder from the liver bed. Similar results were obtained by Westervalt (11) who reported no bile leaks in his 100 patients when the cystic duct was controlled and achieved solely by HS. In the study by Huscher et al. (12), however, bile leaks were found in 7 of the 331 patients (2.1%). All these studies were conducted on adults; nonetheless, no previous reports discussed these topics in the pediatric age group. 
From our perspective, the HS offers many advantages. Firstly, it serves as a 4-in-1 instrument (i.e., dissector, electrosurgical hook, clip applier, and scissors) (13). This saves time, as there is no need to change instruments repeatedly. Additionally, no smoke is produced with the HS; thus, there is no need to clean the camera repeatedly, which enhances vision during the procedure. Secondly, the HS has a smaller area of collateral thermal injury compared with monopolar (electrocautery) or bipolar (LigaSure) diathermy, as it transduces a lower amount of energy, which allows the surgeon to use the harmonic dissector adjacent to the common bile duct without fear of CBD thermal injury or bile leakage (14,15). This minimizes the risk of gallbladder perforation and consequently saves the time otherwise spent on abdominal lavage and retrieval of spilled stones, and reduces morbidity (16). Lastly, recent studies confirmed that, in the setting of the financial restrictions encountered in low-resource countries, the HS can be re-used safely without any consequences for the patient's condition or postoperative course (17).

As regards the debate over using the HS alone to control the cystic duct, the coagulation function of the HS is, as claimed by the manufacturer, safe when applied to vessels of up to 7 mm (18). That is why it is used by many authors for coagulating the cystic artery, which is usually smaller than that caliber; hence, postoperative bleeding is an unexpected complication (19). In addition, after establishing the use of the HS for sealing the cystic artery, some surgeons also investigated its role in sealing the cystic duct and concluded that the HS can be used only if the cystic duct is less than 6 mm in diameter. This could be an issue in adults, as the cystic duct diameter can increase to more than twice the reference range in the presence of cholelithiasis (20); in children, however, cystic ducts are seldom larger than 6 mm. It is worth mentioning that LC is not the first operation in which the efficacy of the HS was evaluated; it was previously studied in other clipless laparoscopic procedures, such as appendectomy and splenectomy, and showed a high degree of efficacy and safety (21,22).

In conclusion, this is the first report to study the use of the HS as a sole instrument to complete LC in the pediatric age group. The HS is a safe and efficient instrument that can be used for gallbladder dissection as well as for controlling the cystic duct and artery during LC in children.

Data availability statement

The original contributions presented in this study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author.

Ethics statement

The studies involving human participants were reviewed and approved by the Ethics Committee of the Faculty of Medicine of the University of Alexandria (Alexandria, Egypt). Written informed consent to participate in this study was provided by the participants or their legal guardian/next of kin.

Author contributions

AA and MK: data collection and manuscript writing. MA: critical revision. AK: protocol development and critical revision. All authors contributed to the article and approved the submitted version.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Does Juvenile Stand Management Matter? Regional Scenarios of the Long-Term Effects on Wood Production

We analysed the regional level effects of juvenile stand management (early cleaning and precommercial thinning), termed tending for short, on wood production and the profitability of forest management. Altogether ca. 0.4 million hectares of juvenile stands from two significant forestry regions of Finland, South and North Savo, were examined. We used plot-level data of the 11th National Forest Inventory to represent the current status of juvenile stands in the study area, and the Motti stand simulator to predict the future development of those stands over the next 100 years. We applied three scenarios, (i) timely tending, (ii) delayed tending, and (iii) no tending, to examine the differences between these alternative levels of juvenile stand management. The results showed the benefits of tending at the regional level. Timely tending was the most profitable option when low or modest interest rates (2-3%) were applied in the assessment. Even a short delay in tending clearly increased the tending costs. Delaying and neglecting tending resulted in significant losses, especially in sawlog removals and stumpage earnings. The financial gain from tending was the highest on fertile sites. Due to the high growth rate of trees, the situation may change very quickly on such sites. For operational forestry, this means that fertile sites should have a high priority when scheduling timely tending.

Introduction

There has been an upward trend in forest growth over several decades in Nordic forests. For example, in Finland, the annual increment of the growing stock has recently reached ca. 110 million cubic meters, which is nearly double the level of the 1950s [1]. Despite this, concerns have been raised about the sustainable availability of pulpwood and high-quality timber for industry. This is due to the increasing demand for wood-based raw material after recent, large-scale investments by the Finnish forest industry [1,2]. At the same time, the importance of forests for carbon sequestration and for maintaining biodiversity is emphasized. This means that forests should be managed so that, in addition to wood production, they provide a wide range of other ecosystem services as well. To get more high-quality timber to the markets, particular attention needs to be paid to the management of young stands (e.g., [3]).

In Finland, forests are mainly managed as small stands according to even-aged stand management from regeneration to the final cutting (i.e., rotation forestry). Most of the stands are regenerated for Scots pine (Pinus sylvestris L., henceforth pine) or Norway spruce (Picea abies (L.) Karst., henceforth spruce), and only a small proportion for silver birch (Betula pendula Roth.) or other tree species. At the juvenile stand stage, stands are generally managed once or twice in order to provide favourable growth conditions for the regenerated tree species. Juvenile stand management, termed tending for short, includes two separate silvicultural treatments: early cleaning and precommercial thinning. In newly regenerated stands, abundant fast-growing broadleaves create a need for early cleaning to control competition. Furthermore, in subsequent years, precommercial thinning is generally needed to control the overall structure and stem density of a stand [4]. Recommendations for the appropriate timing and intensity of tending are given in silvicultural guidelines [5].
For example, in spruce stands, early cleaning is recommended when the stand reaches one meter in height. Later, when the height is 3-5 m, the stand is recommended to be thinned to a density of 1600-2200 stems per hectare.

Tending affects the development of trees and stands in several ways. Removing undesired broadleaves and other competing vegetation from a young stand increases the growth of the released trees and enhances the yield of commercial timber [6][7][8][9]. Controlling stand density by thinning, in turn, accelerates the diameter and volume increment of the stems and enhances the development of the crowns, although thinning is known to reduce total yield [10,11]. Due to the enlarged growing space, tree branches grow thicker and longer, and crown recession slows down [12,13]. On the other hand, especially for pine, keeping the stand density at a relatively high level for longer may have positive impacts on stem quality, enabling thin branches and a good stem form [14]. The positive impacts of tending include improved profitability of forest management in the long term [3,15].

Currently in Finland, however, tending is conducted over a much smaller area than deemed necessary. According to the 11th National Forest Inventory (NFI11), from the silvicultural point of view there is an urgent need for tending on at least 700,000 ha (ca. 18% of seedling stands) and a need for tending in the next few years on 1 million ha [16]. In the study region, Savo, the corresponding numbers are ca. 130,000 ha and 190,000 ha, respectively [16]. Financially motivating forest owners to conduct pre-commercial silvicultural operations is challenging due to the immediate high costs and far-off benefits (i.e., a long payback period). In particular, the costs of tending and clearing operations have been increasing [17,18].

The realized benefits of tending depend on the manner in which the tending is implemented. The timing and intensity of precommercial thinning affect the yield and quality development of young stands and, e.g., the timing and profitability of the first commercial thinning [4,19]. In practice, however, broadleaves competing with the conifers are often removed too late to get the most benefit from the work [20,21]. Timing directly affects the working costs. The cost of precommercial thinning increases rapidly over time due to the fast growth of the undesired trees and sprouts. According to Kaila et al. [22], a two-year delay can increase the cost by 8-42%. In addition, the availability of labour can be a restricting factor due to the high seasonality of silvicultural work. Thus, there is an obvious need to improve practices in order to reduce the costs of tending and, on the other hand, to demonstrate the effects of tending on the future incomes of forest owners and the consequent impacts on society.

The stand-level effects of tending have been extensively studied in the Nordic countries (e.g., [14,15,19,21,23,24]), whereas large-scale results to support decision-making in forest management planning and forest policy making are sparse. Recently, Huuskonen et al. [3] studied the benefits of juvenile stand management in a nationwide study. Still, there is an increasing demand for analyses at the regional level. For this study, we selected Savo as the study area. The area encompasses the South Savo and North Savo regions, two of the current 19 regions in Finland.
These two regions play a vital role in forest biomass production in Finland because of their forest structures and wood export volumes (e.g., [25]). For example, in 2018 the Savo regions accounted for about 21% of the total harvesting supply in Finland [26]. Huuskonen et al. [3] emphasized the general gain of tending. In the study at hand, we sharpened the examination to the timing of treatments. In addition, Huuskonen et al. [3] reported the differences between larger climatic regions (i.e., southern, central, and northern Finland) in the benefits of tending, whereas in our regional level study the differences between site fertility levels were examined.

The objective of the study was to analyse the effects of tending at the regional level on forest growth, total wood production by timber assortments, and the profitability of forest management. The NFI11 data were applied from the selected study area, representing the current status of juvenile stands in Savo. We used scenario analysis based on the simulated development of stands and examined the differences between three management alternatives for juvenile stands: timely tending, delayed tending, and no tending.

Scenarios

In order to describe the long-term effects of tending on wood-production potential, we compiled three different scenarios representing different management strategies: timely tending (scenario TEND), delayed tending (LateTEND), and no tending (NoTEND) (Figure 1). The first two scenarios, TEND and LateTEND, included tending treatments (early cleaning and/or precommercial thinning). In TEND, both treatments were applied on time (timing according to silvicultural guidelines, based on the mean height of the dominant tree species), whereas in LateTEND only precommercial thinning was applied, and it was executed notably later (in a 1.5 m taller stand) than in TEND. In the third scenario, NoTEND, neither early cleaning nor precommercial thinning was conducted. After the first commercial thinning stage, the management regimes recommended in the silvicultural guidelines [5] were applied in all three scenarios.
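To make the three scenario definitions concrete, the sketch below encodes them as a small lookup structure. It is purely illustrative: the height thresholds are simplified from the treatment descriptions given later in the Methods and stand in for, but are not, the actual Motti simulation rules.

```python
# Illustrative encoding of the three juvenile-stand management scenarios.
# Height thresholds are simplified from the Methods (early cleaning at ca. 1 m
# dominant height; precommercial thinning at ca. 3.5-5.5 m, or roughly 1.5 m
# later in LateTEND); they are placeholders, not the actual simulator rules.

SCENARIOS = {
    "TEND":     {"early_cleaning_m": 1.0,  "precommercial_thinning_m": (3.5, 5.5)},
    "LateTEND": {"early_cleaning_m": None, "precommercial_thinning_m": (5.0, 7.0)},
    "NoTEND":   {"early_cleaning_m": None, "precommercial_thinning_m": None},
}

def describe(scenario):
    """Human-readable summary of the juvenile-stand treatments in a scenario."""
    rules = SCENARIOS[scenario]
    parts = []
    if rules["early_cleaning_m"] is not None:
        parts.append("early cleaning at ~{} m".format(rules["early_cleaning_m"]))
    if rules["precommercial_thinning_m"] is not None:
        lo, hi = rules["precommercial_thinning_m"]
        parts.append("precommercial thinning at {}-{} m".format(lo, hi))
    return "{}: {}".format(scenario, ", ".join(parts) if parts else "no tending")

for name in SCENARIOS:
    print(describe(name))
```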
Figure 1. Overview of the step-by-step process applied in the scenario analysis. The Motti stand simulator was used in the simulations; further analyses were carried out with SAS [27] and J [28] software.

Forest Data

To get a representative basis for the simulations, we obtained the NFI11 data [16] covering the regions of South Savo and North Savo (henceforth referred to as Savo) (Figures 1 and 2). Then, we selected the juvenile stands located on productive forest land (i.e., annual increment of growing stock over the rotation >1 m³ ha⁻¹) available for wood production and having a maximum stand mean height of 3.5 m for spruce and 5 m for both pine and broadleaf dominated stands (according to the dominant tree species). The height criteria were applied to include only juvenile stands where precommercial thinning would normally be a standard management option. We excluded clearcut areas that were not yet regenerated, as well as juvenile stands with an overstorey.

The final forest data comprised 1351 plots, representing a total area of 0.39 million ha (Figure 1, Table 1). Site fertility varied from the most fertile (class 1) to barren sites (class 6) on mineral soils and drained peatlands (Table 1). Due to the small proportion of peatland sites (13%), we report only the combined results for peatland and mineral soil sites. The dominant tree species were pine, spruce, and birch (silver birch on mineral soils and downy birch (Betula pubescens Ehrh.) on drained peatlands). Fertile sites are mainly dominated by spruce and the poorer sites by pine (Table 1). Based on their locations in the study area, the stands represented one of three climatic areas, <1000 d.d. (degree days), 1000-1200 d.d., and >1200 d.d., according to the cumulative annual temperature sum with a +5 °C threshold value. The major part (70%) of the study area represented the climatic area of 1000-1200 d.d., ca. 30% represented the higher temperature sums, and only a couple of stands were located in the area of lower temperature sums.
Together, the classes of site fertility, dominant tree species, and climatic area formed stand characteristic groups, into which each stand could be fitted and for which the specific management regimes were defined.

Management Regimes

All the simulated management practices of a stand over a rotation were arranged as management regimes (Figure 1). We constructed tailored sets of alternative management regimes for each scenario and for different types of stands. Across scenarios, the management regimes included different kinds of treatments for the juvenile stands (timely tending, delayed tending, and no tending). Within the scenarios, management regimes varied according to the stand characteristic groups based on site fertility, dominant tree species, and climatic area, and included successive silvicultural treatments and cuttings as suggested in the silvicultural guidelines [5]. Since the guidelines give a recommended range for the intensity and timing of activities, several alternative management regimes were usually available for one stand characteristic group. As a result, 3369 specific management regimes were available for the simulations (Figure 1). Examples of the regimes are shown in Table 2.

Simulations

We used the Motti stand simulator to predict the development of each stand according to the alternative management regimes (Figure 1; Table 2). Using Motti, we were able to utilize a large and complex set of models to predict natural dynamics as well as the effects of silvicultural treatments on stand dynamics [30]. Motti includes both stand- and tree-level growth and yield models for stand dynamics (regeneration, growth, and mortality), separately for mineral soil and drained peatland stands (e.g., [30][31][32][33]). The technical design of Motti is described by Salminen et al. [34]. The simulation period was 100 years. When a stand reached regeneration maturity (defined by stand mean diameter) before the simulation period ended, a final cut was simulated, followed by forest regeneration according to the particular management regime. The number of simulations depended on the number of stands in the stand characteristic groups and the number of corresponding management regimes available for each group.
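As a rough illustration of how the number of simulation runs scales with the data, the sketch below counts one run per stand per applicable regime per scenario. The group names and regime counts are invented placeholders, not the actual NFI groups or Motti regimes.

```python
# Hypothetical bookkeeping behind the simulation count: each stand is simulated
# once for every management regime available for its stand characteristic group
# within a given scenario. All values below are invented placeholders.

stands_per_group = {"spruce/fertile/1000-1200dd": 420, "pine/medium/1000-1200dd": 610}
regimes_per_group = {
    ("TEND", "spruce/fertile/1000-1200dd"): 12,
    ("TEND", "pine/medium/1000-1200dd"): 9,
    ("LateTEND", "spruce/fertile/1000-1200dd"): 10,
    ("LateTEND", "pine/medium/1000-1200dd"): 8,
    ("NoTEND", "spruce/fertile/1000-1200dd"): 7,
    ("NoTEND", "pine/medium/1000-1200dd"): 6,
}

total_runs = sum(
    stands_per_group[group] * n_regimes
    for (scenario, group), n_regimes in regimes_per_group.items()
)
print("simulated stand developments:", total_runs)
```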
For the stands of this study, we simulated altogether 85,714 stand developments (Figure 1, Table 2).

Details of Treatments Applied in Simulations

In the TEND scenario, early cleaning and precommercial thinning were simulated as suggested in the silvicultural guidelines [5]. The timing of the tending treatments was based on stand dominant height (early cleaning at ca. 1 m, precommercial thinning from 3.5 m to 5.5 m depending on site and tree species). Early cleaning was typically applied at a stand age of 4 to 6 years and precommercial thinning at an age of 10 to 15 years. Depending on site, tree species, and regeneration method, the number of seedlings after early cleaning was 3000-4000 trees per hectare. All seedlings considered potential crop trees were left in early cleaning. In seedling stands, the models also predict the natural establishment of seedlings in addition to the artificially regenerated seedlings; when early cleaning is applied in a simulation, the opening of the growing space triggers an immediate emergence of new seedlings on the site [32]. After precommercial thinning, the stem number was 2000-2400 stems per hectare for pine and 1800-2200 stems per hectare for spruce. For birch, the stem number was 1600-1800 and 2000-2200 stems per hectare on mineral soils and peatlands, respectively. For biodiversity reasons, the stem numbers included a small proportion of broadleaved trees in coniferous stands. In the LateTEND scenario, early cleaning was not carried out, but precommercial thinning was carried out, to the same densities as in TEND, when the stand dominant height was from 5.0 m to 7.0 m (Table 2). No tending was simulated in the NoTEND scenario.

The first commercial thinnings were simulated when the stands reached a predefined dominant height level depending on site type, tree species, and climatic area. In dense stands (i.e., stem number more than 2600 per hectare), clearing of the thinning area was applied before the first commercial thinning. Quality thinning or thinning from below was used. The stem number after the first commercial thinning was set to 700-1100 stems per hectare depending on site and tree species. In the NoTEND scenario, stand density after thinning was slightly higher (by 100 stems per hectare) than in the TEND and LateTEND scenarios to reduce the risk of wind and snow damage. In addition to pulpwood and sawlogs, the tree tops and stems smaller than pulpwood size were collected as delimbed energy wood in the first commercial thinning of LateTEND and NoTEND. Thereafter, similar management was applied in all the scenarios. Intermediate thinnings were timed according to stand basal area and dominant height as suggested in the thinning guidelines. Thinning from below was used. The timing of final cuttings was based on the stand mean diameter, varying by site type, tree species, and climatic area. Four alternative thresholds with 2-3 cm intervals around the recommended mean diameters for final cuttings were used. The method of site preparation and forest regeneration varied according to site type. For fertile sites, planting was applied (mainly for spruce and birch). For pine, seeding or natural regeneration was applied depending on the site type. Genetically improved regeneration material was also alternatively available in artificial regeneration for pine on mineral soil sites. Ditch network maintenance was included in the management regimes on peatlands.
In intermediate thinnings and final cuttings, the harvesting method was conventional pulpwood and sawlog harvesting. In all cuttings, logging of the stems was based on the rules that are widely used in Finland. For the simulated sawlog volumes, a tree-level reduction function was used to model the effect of large branches, forking, sweep, and other defects on the stems [35].

Unit Prices, Cost Factors, and Costs

The costs of the silvicultural treatments were defined by the time consumption models integrated in Motti and by unit costs (long-term mean values) from statistics (Table 3). In the time consumption models for early cleaning, precommercial thinning, and clearing of the thinning area, it was assumed that the work was done manually with a clearing saw. The time consumption models for early cleaning and precommercial thinning were based on the number and size (mean stump diameter and height) of the removed trees. The time used for clearing was also based on stem number, height, and size, but obtained in four classes from easy to very difficult. In regeneration, planting was assumed to be carried out manually. Material costs were included in the planting and seeding costs. For harvesting revenues, stumpage prices by tree species and felling method were used (Table 3). Both prices and costs were based on statistics from the years 2002 to 2016 [17,36]. The nominal time series were deflated by the cost-of-living index to the year 2016 [37]. The net present values (NPV) were calculated with real interest rates (i.e., net of inflation) from 2% to 5%.

Processing of Simulation Results

For further analysis and for up-scaling the results to the regional level, we used the J [28] and SAS [27] software. First, for each scenario (TEND, LateTEND, and NoTEND), we randomly selected one simulated stand development for each stand among all the simulated alternatives for that stand within the given scenario (Figure 1). Technically, the random selection was carried out with the linear programming software J, applying the procedure documented by Huuskonen et al. [3]. The applied procedure guaranteed that all simulated alternatives for a stand had an equal probability of being selected. As a sensitivity analysis, we separately tested the effect of this randomizing procedure on the results. Secondly, we scaled the stand-level results up to the regional level applying the representative area of each NFI plot. Software J was also used in this step, producing the results for the total study area for all the variables examined. In general, the calculation procedures were similar to those applied in Huuskonen et al. [3].

Treatment Areas

Average annual tending areas were the largest in the TEND scenario (Figure 3). There were two reasons for this. Firstly, both early cleanings and precommercial thinnings were applied in TEND, whereas only precommercial thinning was applied in LateTEND. Thus, the tending area during the first 20 years of the simulation period was ca. 27% greater in TEND than in LateTEND. Secondly, in TEND, stands reached the end of the rotation earlier, and all the actions of their next rotation were consequently scheduled earlier than in the other scenarios. The earlier treatments can be seen in Figure 3, where, for example, the area of precommercial thinnings during the simulation years 61-70 is 53% higher in TEND than in LateTEND.
The first commercial thinnings and final cuttings were generally conducted earlier in TEND than in the other scenarios, indicating a faster development of stand mean diameter in the stands managed with tending (Figure 4). On the contrary, intermediate thinnings were applied earlier in NoTEND and their annual areas were notably larger when compared to TEND. The earlier intermediate thinnings were partially due to the slightly lower intensity of the first commercial thinning in NoTEND, where the retained stands were left somewhat denser after thinning than in the other scenarios. During the first 20 years, the annual area of first commercial thinnings was small, but thinnings were applied slightly more in NoTEND than in the other scenarios. Later, the annual first commercial thinning areas were clearly larger in TEND than in LateTEND or NoTEND (Figure 4).

Removals

In the TEND scenario, the total removals over the 100-year period were 173 million m³, including 57% sawlogs and 43% pulpwood (Table 4). Energy wood was not collected in TEND. In the LateTEND scenario, the total removals, including 2 million m³ of energy wood, were 0.6% smaller than those of TEND.
In NoTEND, the total removals were almost the same as those of TEND, but they included 4 million m³ of energy wood, and the proportion of sawlogs was smaller, at 47%. The ratio of sawlogs to pulpwood was almost the same in TEND (1.31) and LateTEND (1.32), whereas in NoTEND it was as low as 0.94.

Table 4. Total removals (million m³), stumpage earnings, silvicultural costs, and net present values (NPV) at interest rates of 2-5% (million €) over the 100-year period by scenario (EC = early cleaning, PCT = precommercial thinning).

In summary, although the total removals were highest in the NoTEND scenario (Table 4), TEND resulted in the earliest and highest sawlog removals and NoTEND resulted in the earliest and highest pulpwood removals during the 100-year period (Figure 5).

Costs and Revenues

The total silvicultural costs over the 100-year period were equal in TEND and LateTEND, whereas they were ca. 33% lower in NoTEND (Table 4, Figure 6). Tending costs represented almost half of all silvicultural costs in TEND and LateTEND. The regeneration costs were related to the final cut and regenerated area during the 100-year period, thus being highest in TEND and second highest in LateTEND (Figure 6). The total costs of tending (early cleaning and precommercial thinning) in TEND were 11% lower than those of LateTEND, in which only precommercial thinning was carried out (Table 4, Figure 6). In NoTEND, there were no costs from tending treatments, whereas clearing of the first commercial thinning area caused significant costs (Figure 6); in the other scenarios, clearing costs were negligible.

The total stumpage earnings over the 100-year period were the highest in TEND and the lowest in NoTEND (Table 4). Comparing the total silvicultural costs and incomes, TEND resulted in ca. €230 million (50%) higher costs than NoTEND, but at the same time stumpage earnings were €822 million (13%) higher. Correspondingly, LateTEND resulted in €233 million (51%) higher costs than NoTEND, but at the same time stumpage earnings were €677 million (11%) higher. When TEND was compared to LateTEND, the costs were €3 million (0.4%) lower, whereas stumpage earnings were €145 million (2%) higher.

The average cost per hectare of precommercial thinning was €282 ha⁻¹ in TEND and €540 ha⁻¹ in LateTEND. The early-cleaning cost in TEND was on average €261 ha⁻¹. As a result, the average costs per hectare for tending were practically equal in the two scenarios (€542 ha⁻¹ and €540 ha⁻¹ in TEND and LateTEND, respectively). The site effect on tending costs was examined on a per hectare basis. Since the tree species had different principles for tending, the sites were further divided by dominant tree species; however, spruce stands on unfertile sites, pine stands on fertile sites, and birch stands were not examined by site due to their small numbers in the dataset. The average early cleaning costs were the lowest on unfertile sites (ca. €209 ha⁻¹) and the highest in spruce stands on medium sites (€272 ha⁻¹) (Figure 7).
The site effect on tending costs was examined per hectare basis. Since tree species had different principles for tending, sites were further divided by dominant tree species. However, spruce stands on unfertile sites, pine stands on fertile sites, and birch stands were not examined by sites due to the small number in the dataset. The average early cleaning costs were the lowest on unfertile sites (ca. €209 ha −1 ) and the highest in spruce stands on medium sites (€272 ha −1 ) (Figure 7). The total costs of tending (early cleaning and precommercial thinning) in TEND were 11% lower than those of LateTEND, in which only precommercial thinning was carried out (Table 4, Figure 6). In NoTEND, there were no costs from tending treatments, whereas clearing of the first commercial thinning area caused significant costs ( Figure 6). In the other scenarios, clearing costs were negligible. The total stumpage earnings from the 100-year period were the highest in TEND and lowest in NoTEND (Table 4). Comparing the total silvicultural costs and incomes, TEND resulted in higher costs of ca. €230 million (50%) when compared to NoTEND, but at the same time stumpage earnings were €822 million (13%) higher. Correspondingly, LateTEND resulted in €233 million (51%) higher costs compared to NoTEND, but at the same time stumpage earnings were €677 million (11%) higher. When TEND was compared to LateTEND, the costs were €3 million (0.4%) lower, whereas stumpage earnings were €145 million (2%) higher. The average costs per hectare for precommercial thinning were €282 ha −1 and €540 ha −1 in TEND and LateTEND, respectively. The early-cleaning cost in TEND was on average €261 ha −1 . As a result, the average costs per hectare for tending were practically equal for both scenarios (€542 ha −1 and €540 ha −1 in TEND and LateTEND, respectively). The site effect on tending costs was examined per hectare basis. Since tree species had different principles for tending, sites were further divided by dominant tree species. However, spruce stands on unfertile sites, pine stands on fertile sites, and birch stands were not examined by sites due to the small number in the dataset. The average early cleaning costs were the lowest on unfertile sites (ca. €209 ha −1 ) and the highest in spruce stands on medium sites (€272 ha −1 ) (Figure 7). The total costs of tending (early cleaning and precommercial thinning) in TEND were 11% lower than those of LateTEND, in which only precommercial thinning was carried out (Table 4, Figure 6). In NoTEND, there were no costs from tending treatments, whereas clearing of the first commercial thinning area caused significant costs ( Figure 6). In the other scenarios, clearing costs were negligible. The total stumpage earnings from the 100-year period were the highest in TEND and lowest in NoTEND (Table 4). Comparing the total silvicultural costs and incomes, TEND resulted in higher costs of ca. €230 million (50%) when compared to NoTEND, but at the same time stumpage earnings were €822 million (13%) higher. Correspondingly, LateTEND resulted in €233 million (51%) higher costs compared to NoTEND, but at the same time stumpage earnings were €677 million (11%) higher. When TEND was compared to LateTEND, the costs were €3 million (0.4%) lower, whereas stumpage earnings were €145 million (2%) higher. The average costs per hectare for precommercial thinning were €282 ha −1 and €540 ha −1 in TEND and LateTEND, respectively. The early-cleaning cost in TEND was on average €261 ha −1 . 
Figure 7. Average costs (€ ha⁻¹) of early cleaning (EC) and precommercial thinning (PCT) in pine and spruce dominated stands in the timely tending (TEND) and delayed tending (LateTEND) scenarios, by site fertility level.

The average precommercial thinning costs were, in principle, higher in pine stands than in spruce stands, due to the notably later timing of the recommended treatments for pine (see Table 2). This can be seen in the results of TEND, where the precommercial thinning costs were, on average, ca. 50% higher in pine stands. In LateTEND, the difference between spruce and pine stands was smaller, and the costs were highest in spruce stands on fertile sites (€587 ha⁻¹). On fertile sites, the average cost of one precommercial thinning in LateTEND was 20% higher than the combined cost of early cleaning and precommercial thinning in TEND. On medium and unfertile sites, the cost of one treatment was lower than the cost of the two treatments together (4% lower in spruce stands and 10% lower in pine stands) (Figure 7).

Profitability

The NPV calculated over the whole 100-year period was the highest in the TEND scenario at interest rates of up to 3% (Table 4, "All" in Figure 8). With higher interest rates (4% to 5%), LateTEND was the least profitable and NoTEND turned out to be the most profitable option. With the 3% interest rate, the NPV of TEND was slightly higher (by €71 ha⁻¹) than the NPV of NoTEND. At the regional level, this meant that TEND resulted in a €14 million higher NPV than NoTEND. However, NoTEND outperformed LateTEND, resulting in a ca. €1 million higher NPV. With the 4% interest rate, the NPV of TEND was €73 ha⁻¹ lower than the NPV of NoTEND.
At the regional level, NoTEND resulted in a €15 million higher NPV than TEND and a €22 million higher NPV than LateTEND. By site fertility level, the NPVs (€ ha⁻¹) were higher than average on fertile sites but lower than average on medium and unfertile sites, as anticipated (Figure 8). The advantage of TEND was retained on fertile and medium sites (for both spruce and pine stands) up to an interest rate of 3%, whereas NoTEND was the most profitable on unfertile sites.

Sensitivity Analysis

We separately tested the effect of the randomizing procedure (i.e., the random selection of one simulation result for each stand in each scenario) on the results. As a sensitivity analysis, we repeated the randomizing 10 times for the North Savo stands and then compared the results to the initial results for a few selected variables (NPV, harvesting removals, and area of tending).
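Before reporting the outcome of this check, the following minimal sketch shows how such replicate-to-replicate variation can be summarized as a relative standard deviation (standard deviation divided by the mean); the NPV values in the example array are invented placeholders, not the study's results.

```python
import numpy as np

def relative_sd_percent(values):
    """Relative standard deviation (coefficient of variation) in percent,
    using the sample standard deviation (ddof=1)."""
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / values.mean() * 100.0

# Placeholder NPVs (million EUR) for the initial run plus 10 repeated
# randomizations of one scenario; the numbers are invented for illustration.
npv_replicates = [412.0, 409.5, 414.2, 410.8, 413.1, 411.7,
                  408.9, 412.6, 410.2, 413.8, 411.0]
print("relative SD of NPV: {:.2f}%".format(relative_sd_percent(npv_replicates)))
```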
The sensitivity analysis showed the stability of our results, i.e., the results changed very little even though the randomizing was repeated several times. The relative standard deviations of the NPV among the 11 cases (i.e., the initial plus 10 repeated randomizations) were between 0.52% and 1.28%, depending on the interest rate and scenario. According to the NPV, the best scenario remained exactly the same as in the initial results in all repeated cases and with all interest rates (0%-5%). Harvesting removals over the whole 100-year period also varied very little between the different randomizing cases, with relative standard deviations from 0.75% to 1.00%. For the total area of tending, the relative standard deviation was 1.01% and 0.83% in TEND and LateTEND, respectively. The temporal variation of the precommercial thinning area during the 100-year period in the 10 replicates is shown in Figure 9.

Benefits of Tending

Our results showed the important role of tending as part of the chain of silvicultural treatments. Although the costs of tending were high and occurred in the early stages of the rotation, the higher and earlier incomes from future harvestings compensated for these costs when discounting with modest interest rates of 2% to 3%. The financial viability of precommercial thinning in stand-level analyses has been shown earlier, e.g., in the studies of Pitt et al. [38], Bataineh et al. [39], and Fahlvik et al. [15].

The profitability of the scenarios was conditional on the applied interest rate. Timely tending (TEND) resulted in the highest NPV with interest rates of up to 3%. Delayed tending (LateTEND) was the second best up to 2%, but with the 3% interest rate neglecting tending (NoTEND) turned out to be the second best option, ahead of LateTEND. With 4% and 5% interest rates, NoTEND outperformed the alternatives with tending (TEND and LateTEND). Thus, according to this study, tending turns into a financially unattractive measure when the interest rate exceeds ca. 3%. However, the increased risk of damage related to unmanaged young stands is not taken into account, which overestimates the financial outcome associated with NoTEND. Our choice to apply interest rates of 2-4% is a compromise between the long and varying time spans involved (associated with rotation periods ranging from 40 to 110 years) and recent studies on applicable interest rates in forestry [40][41][42]. Price [42] illustrated the discount schedules for three countries (UK, Norway, and France): the suggested discount rates fluctuated between 2% and 4% when the time horizon is from 30 to 200 years.
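The sensitivity of the ranking to the discount rate follows directly from the arithmetic of discounting: tending is an early cost, while its payoff arrives decades later. The sketch below illustrates this with invented cash flows (they are not the study's figures); the only point is that the sign of the NPV difference between a "tend" and a "no tend" alternative can flip as the real interest rate rises.

```python
# Illustrative only: an early tending cost followed by a later extra revenue,
# discounted at different real interest rates. The euro amounts and years are
# invented and do not correspond to the study's results.

def present_value(amount_eur, year, rate):
    """Discount a single cash flow occurring 'year' years ahead to the present."""
    return amount_eur / (1.0 + rate) ** year

tending_cost_eur, cost_year = 540.0, 5        # tending cost per hectare, early in the rotation
extra_revenue_eur, revenue_year = 3000.0, 55  # extra stumpage earnings much later

for rate in (0.02, 0.03, 0.04, 0.05):
    npv_difference = (present_value(extra_revenue_eur, revenue_year, rate)
                      - present_value(tending_cost_eur, cost_year, rate))
    verdict = "tending pays off" if npv_difference > 0 else "no tending wins"
    print("rate {:.0%}: NPV difference {:+7.0f} EUR/ha -> {}".format(rate, npv_difference, verdict))
```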
Benefits of tending varied by site. The financial gain from tending, expressed as NPV, was the highest on fertile sites, where competition from broadleaves is intense but where the high growth rate of the trees and their fast and strong reaction to thinning can compensate for the costs of tending. Timely tending (TEND) was more profitable than LateTEND and NoTEND on fertile and medium sites with interest rates of up to 3%. On unfertile sites, TEND was outperformed by NoTEND. In terms of NPV, delaying tending on unfertile sites would not be advisable, since the costs were poorly compensated due to the relatively low growth rates.

Tree species affected the average costs of tending. The average costs of timely precommercial thinning (€ ha⁻¹) were higher in pine stands than in spruce stands because the recommended timing of the treatment is later for pine than for spruce (i.e., precommercial thinning was more time-consuming in 5.5 m pine stands than in 3.5 m spruce stands). When the treatment was delayed, the difference in costs between spruce and pine stands narrowed: delaying did not increase the costs in pine stands as much as in spruce stands. In pine stands, the average costs of tending were higher in TEND, which included two tending treatments, than in LateTEND, which included only one. However, in spruce stands on fertile sites the situation was the opposite: the average cost of one delayed treatment in LateTEND was higher than the cost of two timely treatments in TEND. The fast increase of costs associated with delaying is due to the high tree growth on those sites (of both the dominant tree species and the undesired broadleaves). Another reason for the larger cost increase in spruce stands (delayed vs. timely precommercial thinning) lies in the trees to be removed in tending: on fertile sites the removed trees are usually broadleaves overtopping the crop trees, whereas in pine stands on unfertile sites the removed trees are pines smaller than the retained trees (e.g., [21]). The fast increase of costs also indicates that the stand condition (in relation to stand density and the silvicultural need for tending) deteriorates rapidly. Thus, on fertile sites the best gain will be reached with timely tending, and such stands should therefore be the first to be taken care of.

Compensation for the tending costs comes from the harvesting removals. Although the total biomass production was almost equal between the scenarios, the proportions of the timber assortments differed considerably. With timely tending (TEND), substantially more sawlogs were produced, whereas the proportion of pulpwood was larger in the scenario without tending (NoTEND). In this regard, timely and delayed tending were quite similar.
According to earlier studies (e.g., [43]), stem quality will be better in tended stands due to the possibility of selecting the best stems to grow. It is therefore worth mentioning that the possibly better timber quality resulting from tending was not considered in the study at hand; this might underestimate the NPV associated with TEND and LateTEND.

From a profitability point of view, if the recommended time for tending has already passed, neglecting tending turned out to be the most profitable option, especially with higher interest rates. However, neglecting tending involves a higher risk of damage, which was not considered in our analysis and which would evidently have an impact on the financial performance. Neglecting tending was the most profitable even though it caused extra clearing costs at the time of the first commercial thinning. In practice, a possible option may be either pure or integrated energy wood thinning [44][45][46]. Besides the financial aspects, however, the benefits of tending can be valued with other indicators as well (not examined in this study), although many of them also have indirect impacts on profitability. Delaying or neglecting tending decreases the vitality of the trees, threatens the health of the stands, and increases the risks of different kinds of damage (e.g., [47][48][49]). For example, if precommercial thinning is carried out at a delayed stage in juvenile stands, the retained trees are thin and have a high risk of snow damage [50][51][52]. Due to the changing climate, the importance of the vitality of stands and forests will be strongly emphasized in the future (e.g., [53]). Finally, the higher amount of sawlogs associated with tending has indirect economic effects not considered in this study. For instance, a higher amount of sawlogs generates more value added through wood processing and creates positive welfare impacts for society (for value-added production in the forest industry, see Lantz [54]).

Although the time span of the simulations was long, our study ignored the expected impacts of climate change on the growth and productivity of boreal forests (e.g., species distribution) as well as the possible increases in abiotic and biotic risks to forests. For this, further studies are needed. Other improvements to this kind of scenario analysis should include, e.g., a more detailed economic analysis with a sensitivity analysis for future changes in costs and stumpage earnings. Some details, such as the dependence of clearing at the first commercial thinning on the timing of precommercial thinning, could be ascertained with long-term field experiments [4]. In addition, long-term field experiments would improve our knowledge of the future development of unmanaged juvenile stands.

Regional Impacts of Tending

The results of this study revealed the benefits of timely tending for future harvesting removals and stumpage earnings at the regional level. The estimated future yield of sawlogs and pulpwood from the current juvenile stands of Savo during the 100-year period would be 2.7 million m³ higher with timely tending (TEND) than without tending (NoTEND). Sawlog removals would be 19% higher in TEND, whereas pulpwood removals would be 15% lower, compared to NoTEND. The results also showed significant losses of NPV at the regional level due to delaying or neglecting tending (with a 3% interest rate: €14.9 million and €13.8 million in LateTEND and NoTEND, respectively).
Although the total silvicultural costs would increase remarkably due to tending (+€230 million), tending would generate €822 million more stumpage earnings in Savo over the 100-year period (without discounting). Delaying tending causes further costs (+€3 million) and decreases stumpage earnings by €145 million (without discounting).

In this analysis, the TEND scenario represented an ideal situation, in which all forest management was assumed to be done according to the silvicultural guidelines. Thus, the regional scenario results indicate only the potential, and the results need to be proportioned to the actual intensity of tending in the study area. According to the NFI11 field data, a need for precommercial thinning was identified on 320,000 hectares, more than half of which was in urgent need of tending [16]. Thus, some (monetary) losses have inevitably already occurred in those stands. Furthermore, the current intensity of tending (i.e., the average annual area of combined early cleaning and precommercial thinning) in the commercial forests of Savo is 23,600 hectares (average of the years 2016-2018). Thus, the area where tending has been carried out in practice has been notably smaller than needed. If this intensity continues, more and more juvenile stands will be left without tending or tended later than recommended. Depending on the site type, delaying tending by 1.5 m (in stand height) equals ca. 4-6 years in stand age. This is a short time, given the practices of operational forest management. Almost one third of the current juvenile stands in Savo are growing on fertile sites, and more than half on medium sites. The better the site, the more quickly a juvenile stand develops and the narrower the time window for timely tending. Furthermore, the losses caused by delaying will be higher on the better sites than on the poorer sites.

Conclusions

Our results underline the importance of timely tending at the regional level. Timely tending was the most profitable option when a modest interest rate (2-3%) was applied in the assessment. However, the scenario analysis showed only the potential future directions at a regional level, and the actual outcomes will eventually depend on the practices and activities directed at the silvicultural sector in the future. Financially, applying tending later than recommended cannot be recommended, due to the increased discounted tending costs. At the regional level, both delaying and neglecting tending generated significant losses, especially in sawlog removals and stumpage earnings. Great care must be taken particularly on fertile sites: timely tending turns into delayed tending in a very short time, rapidly increasing tending costs and decreasing the profitability of forest management. The magnitude of this decrease was strongly related to the applied interest rate, so that the higher the rate, the greater the decrease. However, totally neglecting tending would generate risks that would have a negative effect on the interest rate and, further, on the profitability.

Data Availability Statement: The original NFI dataset and the further generated input data for Motti simulations are not publicly available to protect the privacy of forest owners and to keep the location of permanent plots secret.
221399020
s2orc/train
v2
2020-08-20T10:09:00.455Z
2020-08-17T00:00:00.000Z
Simultaneous Optimization of Microwave-Assisted Extraction of Phenolic Compounds and Antioxidant Activity of Avocado (Persea americana Mill.) Seeds Using Response Surface Methodology This study was designed to optimize three microwave-assisted extraction (MAE) parameters (ethanol concentration, microwave power, and extraction time) of total phenolics, total flavonoids, and antioxidant activity of avocado seeds using response surface methodology (RSM). The predicted quadratic models were highly significant (p < 0.001) for the responses studied. The extraction of total phenolic content (TPC), total flavonoid content (TFC), and antioxidant activity was significantly (p < 0.05) influenced by both microwave power and extraction time. The optimal conditions for simultaneous extraction of phenolic compounds and antioxidant activity were ethanol concentration of 58.3% (v/v), microwave power of 400 W, and extraction time of 4.8 min. Under these conditions, the experimental results agreed with the predicted values. MAE revealed clear advantages over conventional solvent extraction (CSE) in terms of high extraction efficiency and antioxidant activity within the shortest extraction time. Furthermore, high-performance liquid chromatography (HPLC) analysis of the optimized extract revealed the presence of 10 phenolic compounds, with rutin, catechin, and syringic acid being the dominant compounds. Consequently, this optimized MAE method has demonstrated a potential application for efficient extraction of polyphenolic antioxidants from avocado seeds in the nutraceutical industries. Introduction Avocado (Persea americana Mill.) belongs to the family Lauraceae and is an important fruit crop endemic to tropical and subtropical regions but presently cultivated worldwide. The food industry has shown remarkable interest in processing and enhancing the value of this crop due to its high economic importance. In addition to its pleasant sensory properties, there has been growing interest in the consumption of avocado-derived products owing to their high nutritional value and reported health-promoting and/or disease-preventing properties [1,2]. The seed is a major by-product of the avocado industry and is usually discarded with no further application [3]. In addition, this important by-product represents an environmental and waste management problem. The avocado seed constitutes up to 16% of the weight of the fruit [4] and is a rich source of polyphenols with antioxidant and antimicrobial properties [4][5][6][7]. Recent studies have demonstrated the antioxidant, anticancer, antidiabetic, anti-inflammatory, blood pressure reducing, antimicrobial, insecticidal, and dermatological activities of seed preparations [4,8]. Due to their beneficial effects, avocado seeds can be an alternative, inexpensive source of bioactive compounds, and an efficient extraction of important phenolics from this avocado waste could improve the economics of the avocado industry and minimize its environmental impact. The extraction of phenolic compounds from avocado seeds has been investigated in recent decades, focusing mainly on conventional extraction methods such as maceration, Soxhlet, and heat reflux extraction. However, these methods are very time-consuming and require large quantities of solvents [9,10].
Recently, several efficient and advanced extraction techniques, including accelerated solvent extraction [6], ultrasound-assisted extraction [11], and supercritical fluid extraction [12], have been developed for the extraction of phenolic compounds from avocado seeds. Microwave-assisted extraction (MAE) is a green and effective extraction technique that uses microwave energy to heat a polar solvent in contact with the sample, by ionic conduction and dipole rotation, which improves cell wall destruction and increases the solubility of compounds such as flavonoids [13][14][15]. MAE has gained popularity in recent times due to its benefits of improved efficiency, reduced extraction time, low solvent consumption, higher extraction rate, and high potential for automation [16,17]. The MAE technique has been used for the extraction of bioactive compounds from a wide variety of matrices, such as grapes [18], tomatoes [19], apple [20], and coffee [21]. However, the extraction of phenolic bioactive compounds from avocado seeds has not been evaluated using MAE. The efficiency of the MAE process is usually affected by several variables such as extraction power, time, solvent composition, and solvent-to-sample ratio [18,[22][23][24]. It is therefore important to optimize these process variables to achieve maximum yield of bioactive compounds from the raw materials. In this study, response surface methodology (RSM) was used to determine the effects of the MAE process variables and their interactions to ensure maximal extraction efficiency. This method allows the optimization of all variables simultaneously and predicts the most efficient conditions with the use of a minimal number of experiments [25]. RSM has recently been used to optimize the extraction conditions of phenolics from various plants [26][27][28]. Thus, the objective of this study was to optimize the MAE conditions to obtain the maximum yield of phenolic antioxidants from avocado seeds. RSM was used to predict the effects of microwave power, extraction time, and ethanol concentration on the total phenolic content (TPC), total flavonoid content (TFC), and antioxidant activity of avocado seed extract. Sample Preparation. Avocado fruits (Persea americana Mill. var. Hass), with adequate ripeness for consumption, were obtained from a local market at Bonyere (Ghana) in February 2019. The seeds were manually removed from the fruits, cleaned, sliced into small, thin pieces, and sun-dried for 12 days until no further weight loss was observed. The dried seeds were milled into a fine powder using a blender, and the particle size was standardized using a 250 μm sieve. The moisture content of the dried avocado seeds was 8.9%. The powdered sample was stored at −20°C in airtight bags until being used. Experimental Design. A face-centred central composite design was used to optimize three independent microwave parameters, namely ethanol concentration (%, X1), microwave power (W, X2), and extraction time (min, X3), for four dependent variables: total phenolic content (Y TPC), total flavonoid content (Y TFC), DPPH scavenging activity (Y DPPH), and ABTS scavenging activity (Y ABTS). These independent microwave parameters were selected due to their significant influence on the efficiency of MAE [18,[22][23][24]. Generally, ethanol and methanol are better solvents for the extraction of phenolic compounds. Considering the potential use of this product in the food industry, ethanol was selected as the solvent in this study.
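As a side note, the layout of such a face-centred central composite design can be enumerated in a few lines of code. The snippet below is an illustrative reconstruction, not the authors' actual design table: the coded levels (−1, 0, +1), the factor ranges, and the three centre-point replicates follow the description given in the following paragraphs and in Tables 1 and 2, while the run order shown is arbitrary.

```python
# Minimal sketch of a face-centred central composite design (CCD) for three factors.
# Coded levels are -1, 0, +1; the factor ranges follow the MAE parameters described
# in the text (ethanol 40-80 %, power 80-400 W, time 1-5 min). The run order and the
# exact replicate layout are illustrative assumptions, not the published design table.
from itertools import product

factorial = [p for p in product((-1, 1), repeat=3)]            # 8 cube points
axial = [tuple(a if i == j else 0 for j in range(3))           # 6 face-centred points
         for i in range(3) for a in (-1, 1)]
centre = [(0, 0, 0)] * 3                                       # 3 centre replicates
design = factorial + axial + centre                            # 17 runs in total

ranges = {"ethanol_%": (40, 80), "power_W": (80, 400), "time_min": (1, 5)}

def decode(coded):
    """Convert coded levels (-1, 0, +1) to actual factor settings."""
    out = {}
    for (name, (lo, hi)), c in zip(ranges.items(), coded):
        mid, half = (lo + hi) / 2, (hi - lo) / 2
        out[name] = mid + c * half
    return out

for run, coded in enumerate(design, start=1):
    print(run, coded, decode(coded))
```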
The independent variables were coded at three levels, and their actual values were selected based on literature data and preliminary experimental results. The independent variables and their related codes and levels are displayed in Table 1. A total of 17 experimental runs were performed in random order, including three replicates at the centre point (Table 2), and all the experiments were replicated thrice to improve the analysis. Regression analysis of the experimental data was performed, and the data were fitted to a second-order polynomial model: Y = β0 + Σ βi xi + Σ βii xi² + Σ Σ βij xi xj, where β0, βi, βii, and βij are the regression coefficients; xi and xj are the coded levels of the independent variables affecting the dependent response Y; and k is the number of parameters. Microwave-Assisted Extraction (MAE). MAE was performed using a domestic microwave oven system (Kenwood K30CSS14 Microwave, China) operating at 800 W maximum power and a frequency of 2450 MHz. The apparatus was equipped with a digital control system for irradiation time and microwave power. The oven was modified in order to condense the vapor generated during extraction back into the sample. 1 g of avocado seed powder was stirred into 20 mL of aqueous ethanol, and the mixture was irradiated using the microwave system. The MAE extraction parameters were microwave power (80-400 W), extraction time (1-5 min), and ethanol concentration (40-80%). Thereafter, the sample was filtered using a vacuum pump, and the liquid extract was collected and stored at 4°C until further use. Conventional Solvent Extraction (CSE). Phenolic compounds in avocado seeds were extracted using a CSE method optimized by Gómez et al. [29]. Briefly, 1 g of avocado seed powder was mixed with 60 mL of 56% ethanol (v/v), and the mixture was kept in a thermostatic water bath (Grant W14, Cambridge, England) at 63°C, with shaking, for 23 min. After cooling, the mixture was centrifuged at 2500 rpm for 10 min, and the supernatant was recovered through filtration and stored at 4°C until further use. Determination of Total Phenolic Content (TPC). The TPC of the avocado seed extract was determined using the Folin-Ciocalteu method [16]. The extract (100 μL) was mixed with 750 μL of a 10-fold diluted Folin-Ciocalteu reagent followed by 750 μL of sodium carbonate (7.5%, w/v). The mixture was incubated in the dark at room temperature (27°C) for 90 min, and its absorbance was measured at 725 nm using a UV-Vis spectrophotometer (Labomed Spectro UVD 3200, USA) against the blank. Gallic acid was used for the calibration curve (Figure S1). The results were expressed as mg of gallic acid equivalent (GAE) per gram of dry weight (dw) of avocado seeds. Total Flavonoid Content (TFC). The flavonoid content in the extract was determined by the aluminium chloride method [30]. Briefly, 0.5 mL of extract was diluted with 1.5 mL of distilled water, 0.5 mL of 10% (w/v) aluminium chloride, and 0.1 mL of potassium acetate (1 M). The final volume was made up to 5 mL with distilled water, and the mixture was kept at room temperature for 30 min. The absorbance was measured at a wavelength of 415 nm against a blank (AlCl3 solution) after 30 min of equilibration. The TFC was quantified using a quercetin standard curve (Figure S2) and expressed as mg of quercetin equivalent (QE) per gram of dry weight (dw) of avocado seeds. DPPH Radical Scavenging Activity. The DPPH assay was performed as described by Pandey et al. [30]. The extract (1 mL) was mixed with 3 mL of DPPH solution (4 mL of stock DPPH solution in 96 mL of 80% methanol), and the mixture was kept in the dark for 30 min at room temperature.
The absorbance of the mixture was measured at 520 nm using a UV-Vis spectrophotometer (Labomed Spectro UVD 3200, USA). A mixed solution of 1 mL of ethanol and 3 mL of DPPH solution was used as the blank. The antioxidant activity of the extract was expressed as the percent inhibition, according to the following equation: DPPH inhibition (%) = [(A control − A sample)/A control] × 100, where A control is the absorbance value of the blank and A sample is the absorbance of the extract and DPPH solution. (Table 2: Central composite design (CCD) with observed responses of the dependent variables from MAE of avocado seeds.) 2.6.2. ABTS Radical Scavenging Activity. The ABTS radical scavenging ability of the avocado seed extract was evaluated using a spectrophotometric method as described by Dahmoune et al. [16]. A radical solution (7 mM ABTS and 2.45 mM potassium persulfate in equal proportions) was prepared and left to stand in the dark at room temperature (27°C) for 16 h until the reaction was complete and the absorbance was stable at 734 nm. This solution was diluted with ethanol (80%) until an absorbance value of 0.70 ± 0.02 at 734 nm was obtained. The extract (0.1 mL) was mixed with 3.9 mL of the diluted ABTS solution and kept in the dark for 15 min at room temperature. The absorbance was measured at 734 nm against a blank (diluted ABTS solution) using a UV-Vis spectrophotometer (Labomed Spectro UVD 3200, USA). The antioxidant activity of the extract was expressed as percent inhibition, according to ABTS inhibition (%) = [(A control − A sample)/A control] × 100, where A control is the absorbance value of the blank and A sample is the absorbance of the extract and ABTS solution. HPLC Analysis. Phenolic compounds present in the optimized extract were analyzed using a Shimadzu UFLC chromatographic system (Shimadzu Corporation, Kyoto, Japan), equipped with two LC-20AD pumps and an SPD-20AV ultraviolet-visible detector. The separation of the compounds was performed using a Luna C18 column (150 mm × 4.6 mm, 3 μm) at a column temperature of 30°C. The mobile phase consisted of A (1% acetic acid in acetonitrile) and B (1% acetic acid in water) with gradient elution: 0-3 min (9% A), 3-37 min (9-68% A), 37-39 min (68% A), and 39-40 min (69-9% A). The flow rate was 0.8 mL/min, and the injection volume was 5 μL. Each standard solution and sample was analyzed in triplicate. The peaks were detected by UV at a wavelength of 280 nm according to the scanning mode of the UV detector. The phenolic compounds were identified by comparing their retention times with those of the corresponding standards. All the identified compounds were quantified by the external standard method using calibration curves, and their concentrations were expressed as mg/100 g·dw. Statistical Analysis. Statistical analysis and response surface plots were performed using the Design-Expert software (version 11.0, Stat-Ease, Inc., MN, USA). Data were analyzed using analysis of variance (ANOVA) at a 95% confidence level. Results and Discussion The experimental values of TPC, TFC, DPPH, and ABTS obtained under the different extraction conditions are presented in Table 2. The values showed considerable dependence on the extraction conditions, which suggests the need to optimize the extraction process. Quadratic polynomial models were developed, and the adequacy and fitness of the models were evaluated by ANOVA.
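A reader wishing to reproduce this kind of quadratic model fitting outside Design-Expert could do so with ordinary least squares. The sketch below is a generic illustration using a fabricated 17-run data set (not the values in Table 2), with statsmodels standing in for the ANOVA reported here.

```python
# Minimal sketch: fit a second-order (quadratic) response surface model to coded
# factors X1 (ethanol), X2 (power), X3 (time) and inspect the fit, mirroring the
# RSM/ANOVA workflow described in the text. The data below are made up for
# illustration only; the real responses are those reported in Table 2.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "X1": [-1, 1, -1, 1, -1, 1, -1, 1, -1, 1, 0, 0, 0, 0, 0, 0, 0],
    "X2": [-1, -1, 1, 1, -1, -1, 1, 1, 0, 0, -1, 1, 0, 0, 0, 0, 0],
    "X3": [-1, -1, -1, -1, 1, 1, 1, 1, 0, 0, 0, 0, -1, 1, 0, 0, 0],
    "TPC": [50, 55, 70, 82, 58, 66, 80, 89, 72, 70, 60, 85, 63, 78, 84, 83, 85],
})

formula = ("TPC ~ X1 + X2 + X3 + I(X1**2) + I(X2**2) + I(X3**2) "
           "+ X1:X2 + X1:X3 + X2:X3")
model = smf.ols(formula, data=df).fit()

print(model.rsquared, model.rsquared_adj)   # goodness of fit
print(model.pvalues.round(4))               # significance of each model term
```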
The ANOVA results revealed that the four models were highly significant (P < 0.0001) for TPC, TFC, DPPH, and ABTS (Table 3). The respective values of R², Adj-R², and Pred-R² for TPC (0.9758, 0.9446, and 0.8086), TFC (0.9875, 0.9715, and 0.8679), DPPH (0.9912, 0.9800, and 0.9372), and ABTS (0.9899, 0.9770, and 0.9255) were all close to 1, indicating good correlation between the predicted and the actual results [31]. Moreover, the low values of the coefficient of variation (CV, %: 3.55, 9.28, 4.83, and 5.98) suggested that the experimental values were reliable and reproducible [32,33]. Furthermore, the lack-of-fit values were not significant (P > 0.05), indicating the adequacy of the models in predicting the MAE of phenolic compounds and antioxidant activity of avocado seeds. Influence of the Extraction Parameters on Total Phenolic Content. The TPC in the avocado seed extract varied from 47.25 to 89.39 mg·GAE/g (Table 2). The lowest yield was achieved at an ethanol concentration of 80% and a microwave power of 80 W after 1 min of extraction, while the highest yield was obtained at an ethanol concentration of 60% and a microwave power of 400 W after 3 min of extraction. Table 3 shows that microwave power (X2) and extraction time (X3) had a significant (P < 0.05) positive effect on TPC, with microwave power being the most significant factor. The quadratic effects (X1², X2², and X3²) also had a significant (P < 0.05) influence on TPC under MAE. There was a significant (P < 0.05) interaction between ethanol concentration and extraction time (X1X3), as well as between microwave power and extraction time (X2X3). The second-order polynomial equation for TPC was fitted accordingly. The effects of the independent variables and their mutual interactions on TPC can be seen in the three-dimensional response surface curves shown in Figures 1(a)-1(c). The TPC extracted from avocado seeds by MAE initially increased and then decreased as the ethanol concentration increased (Figures 1(a) and 1(b)). A similar observation was reported for MAE of polyphenols from Coriolus versicolor mushroom [22], from chokeberries [23], from Myrtus communis L. leaves [16], and from blueberry leaves [34]. This significant (p < 0.01) quadratic effect of ethanol concentration on TPC (Table 3) could be explained by the heightened degree of sample cell membrane breakage and the improved solubility of phenolic compounds brought about by the initial increase in ethanol concentration [35,36]. However, as the ethanol concentration continues to increase, the polarity of the solvent changes, which may lead to more impurities being extracted [35], therefore reducing the amount of total phenolic compounds extracted. Also, increased diffusion resistance due to coagulation of proteins at high ethanol concentrations may prevent the dissolution of polyphenols and influence the extraction rate [36]. As shown in Figure 1(a), microwave power had a more significant influence on TPC than ethanol concentration, and this may be attributed to the increased solubility of phenolic compounds as a result of increasing power, which promotes cell rupture and enhances the exudation of phenolic compounds into the extracting solvent [37]. Ozbek et al. [38] reported similar behaviour for MAE of TPC from pistachio hull. The extraction time was an important parameter that influenced the extraction of TPC. As shown in Figures 1(a) and 1(b), the extraction of TPC increased with increasing extraction time up to about 4 min, beyond which a decrease in TPC was observed. This result is in agreement with that reported for Calop pulp [39].
Extended extraction time was expected to favour the extraction of phenolic compounds, since enough time is required for the solvent to penetrate the plant tissue, dissolve the compounds, and subsequently diffuse out into the extraction medium [40]. However, at longer extraction times, the extracted yields decreased due to increased dissolution of the polymer matrix, which causes an increase in viscosity and thereby encapsulates the extracted compounds [41]. In addition, a long extraction time may increase exposure to light and oxygen, which will eventually result in the oxidation of phenolic compounds [42]. According to the ANOVA (Table 3), the interactive effect of ethanol concentration and extraction time (X1X3) had a significant positive influence (p < 0.05) on TPC. As shown in Figure 1(b), the extraction of TPC increased with increasing ethanol concentration and extraction time up to about 60% and 4 min, respectively, after which further increases in ethanol concentration and extraction time caused a decrease in the recovery of TPC. Figure 1(c) illustrates the effect of microwave power and extraction time on TPC. This significant (p < 0.05) positive interaction (Table 2) is in agreement with earlier reports [43,44]. Increasing microwave power increased TPC as the extraction time increased (1-3 min). This phenomenon could be explained by the enhanced mass transfer rate and solubility of phenolic compounds due to decreasing surface tension and solvent viscosity with increasing microwave power, which improve sample wetting and matrix penetration, respectively, thereby enhancing extraction efficiency [16,45,46]. However, at high levels of microwave power (320-400 W), increasing the extraction time beyond 4 min decreased TPC, which may be due to degradation of certain phenolic compounds [16]. Influence of the Extraction Parameters on Total Flavonoid Content. The predictive equation for the relationship between TFC and the extraction parameters was fitted in the same second-order form. As shown in Table 3, microwave power and extraction time exhibited a highly significant (p < 0.001) positive linear effect, while the quadratic terms of ethanol concentration and microwave power showed a significant (p < 0.01) negative effect on the extraction of TFC from avocado seeds. The same linear and quadratic effects were observed for TPC extraction, which suggests that similar factors affected the extraction of TFC from avocado seeds. This is expected, as flavonoids represent a subgroup of polyphenols. The interaction of microwave power and extraction time (X2X3) had a significant (p < 0.05) positive effect on TFC. At lower microwave powers, the TFC value increased gradually with extraction time (Figure 1(f)). This significant (p < 0.05) interaction of microwave power and extraction time (X2X3) is tentatively explained by the low rate of mass transfer at low microwave powers, which would require more time for the phenolic compounds to dissolve from the avocado seeds into the solution. At higher microwave powers, the dissolution of phenolic compounds can reach equilibrium in a relatively short time; hence, the extraction of TFC is not readily affected by changes in the extraction time. Ethanol concentration was the least important factor, as it did not show a significant effect on TFC (Table 3). However, the significant (p < 0.05) negative interaction of ethanol concentration and microwave power (X1X2) on the extraction of TFC suggested that the optimal microwave power values increase as the ethanol concentration decreases (Figure 1(d)).
Influence of the Extraction Parameters on Antioxidant Activity. The antioxidant activity of the avocado seed extract was determined using the ABTS and DPPH assays. The results in Table 3 show that the ABTS scavenging activity was influenced by ethanol concentration, microwave power, and extraction time, while the DPPH activity depended on microwave power and extraction time. The model equation for ABTS activity can be represented as Y ABTS = 67.44 − 2.27X1 + 11.97X2 + 11.43X3 + 0.01X1X2 + ... . The linear effects of microwave power and extraction time showed a highly significant (p < 0.001) positive effect on ABTS scavenging activity, while ethanol concentration exhibited a significant (p < 0.05) negative effect on ABTS. Moreover, the quadratic effects of ethanol concentration (X1²) and extraction time (X3²) showed highly significant (p < 0.001) and moderately significant (p < 0.01) negative effects on ABTS activity, respectively (Table 3). As shown in Figure 2(e), increasing the ethanol concentration above 60% resulted in a quadratic decrease in ABTS activity. Interestingly, there was no significant interactive impact (p > 0.05) of X1X2, X1X3, or X2X3 on ABTS scavenging activity (Table 3). This indicates that the ABTS scavenging activity of the extract was individually affected by ethanol concentration, microwave power, and extraction time and not by their interactions. In the case of DPPH antioxidant activity, both microwave power and extraction time showed a highly significant (p < 0.001) positive linear effect. The quadratic effects of ethanol concentration (X1²) (p < 0.05) and microwave power (X2²) (p < 0.01) significantly influenced DPPH scavenging activity (Table 3). Moreover, increasing both microwave power and extraction time resulted in a significant positive interactive effect on DPPH activity (Figure 2(c)). Thus, the longer the extraction time, the better the DPPH scavenging activity of the extract. A similar observation was reported by Garrido et al. [47] for Chardonnay grape marc. Although both ABTS and DPPH scavenging activities exhibited relatively similar patterns, the minor differences could be due to the presence of various phenolic compounds in the extract, which exhibit different kinetics and reaction mechanisms in the different antioxidant assays [30]. Similar findings have been reported for vine pruning residues [48] and for rhizomes of Rheum moorcroftianum [30]. Optimization of Extraction Conditions and Verification of the Predictive Model. The optimal conditions for simultaneous extraction of maximum phenolic compounds (TPC and TFC) and antioxidant activity (DPPH and ABTS) from dry avocado seeds were predicted by maximizing the desirability of the responses using the Design-Expert software, trial version 11.0 (Stat-Ease, Inc.). The optimal microwave extraction conditions for optimum TPC, TFC, DPPH, and ABTS in a single experiment were determined to be an ethanol concentration of 58.3%, a microwave power of 400 W, and an extraction time of 4.8 min, with a desirability of 0.955. The numerical optimization provided maximum predicted values of 83.90 mg·GAE/g for TPC, 21.84 mg·QE/g for TFC, 75.67% DPPH inhibition, and 82.66% ABTS inhibition. Experiments were performed under the optimized conditions, and the results are presented in Table 4. The experimental values agreed with the predicted values, confirming the reliability of the model obtained by CCD in predicting the contents of phenolic compounds and antioxidant activity using MAE.
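The simultaneous optimization itself rests on an overall desirability function combined from the individual responses. The sketch below illustrates the idea with placeholder prediction functions and response ranges; it is not the Design-Expert model, and the functions predict_tpc and predict_dpph (and the target ranges) are invented for demonstration only.

```python
# Minimal sketch of multi-response optimization via Derringer-type desirability.
# The predict_* functions and the min/max targets are illustrative placeholders,
# not the fitted Design-Expert models reported in the paper.
import numpy as np

def desirability(y, lo, hi):
    """Larger-is-better desirability: 0 below lo, 1 above hi, linear in between."""
    return float(np.clip((y - lo) / (hi - lo), 0.0, 1.0))

def predict_tpc(x1, x2, x3):   # placeholder quadratic surface
    return 70 + 8 * x2 + 6 * x3 - 5 * x1 ** 2 - 4 * x3 ** 2

def predict_dpph(x1, x2, x3):  # placeholder quadratic surface
    return 60 + 9 * x2 + 8 * x3 - 6 * x1 ** 2 - 3 * x2 ** 2

targets = {"TPC": (45, 90), "DPPH": (40, 80)}   # assumed response ranges

best = None
grid = np.linspace(-1, 1, 41)                    # coded factor grid
for x1 in grid:
    for x2 in grid:
        for x3 in grid:
            d_tpc = desirability(predict_tpc(x1, x2, x3), *targets["TPC"])
            d_dpph = desirability(predict_dpph(x1, x2, x3), *targets["DPPH"])
            overall = (d_tpc * d_dpph) ** 0.5    # geometric mean of desirabilities
            if best is None or overall > best[0]:
                best = (overall, (x1, x2, x3))

print("max overall desirability:", round(best[0], 3), "at coded settings:", best[1])
```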
Comparison of MAE with CSE. The results for TPC, TFC, and antioxidant activity obtained from avocado seeds by MAE and CSE are shown in Table 5. The MAE method significantly (p < 0.05) enhanced the extraction of phenolic compounds and antioxidant activity as compared to CSE. In addition to the improved extraction efficiency, solvent consumption and extraction time were significantly reduced by MAE in comparison with CSE. Using ultrasound-assisted extraction, a TPC value of 57.3 mg·GAE/g was obtained from avocado seeds [11]. The fast and efficient extraction of phenolic compounds from avocado seeds by MAE could be explained by the rapid heat generation by microwave energy, which causes destruction of the cellular matrix and enhances the release of phenolic compounds [13,14] and hence the antioxidant activity. HPLC Analysis of Phenolic Compounds in Avocado Seed Extract. Ten phenolic compounds contained in the optimized extract of avocado seeds were identified by HPLC at a wavelength of 280 nm (Figure 3). The identified compounds were gallic acid, catechin, 4-hydroxybenzoic acid, vanillic acid, caffeic acid, syringic acid, p-coumaric acid, rutin, ferulic acid, and quercetin. Most of these compounds have previously been identified in avocado seeds [3,49,50]. The content of each phenolic compound in this extract was quantified (Table 6). The most abundant compounds in this extract were rutin (71.67 mg/100 g), catechin (52.46 mg/100 g), and syringic acid (45.87 mg/100 g). The concentration of catechin, which is one of the most abundant phenolic compounds in avocado seeds, was higher than the previously reported value of 25.84 mg/100 g [50]. This may be due to, among other things, the extraction technique employed. Most of the identified phenolic compounds have shown significant free radical scavenging activity [30,51]; hence, the combined effects of these phenolic compounds may be partly responsible for the antioxidant activity observed in the extract obtained by MAE. Conclusion In this study, three parameters of MAE were successfully optimized for the maximum extraction of polyphenolic compounds and antioxidant activity of avocado seeds using RSM. The results indicate that both microwave power and extraction time significantly influenced the extraction of phenolic compounds (TPC and TFC) and antioxidant activity (DPPH and ABTS). The optimal conditions for simultaneous extraction of maximum phenolic compounds (TPC and TFC) and antioxidant activity (DPPH and ABTS) from avocado seeds were an ethanol concentration of 58.3%, a microwave power of 400 W, and an extraction time of 4.8 min. Under these conditions, the experimental results agreed with the predicted values. MAE revealed clear advantages over CSE in terms of high extraction efficiency and antioxidant activity of the extract within the shortest extraction time. Furthermore, ten phenolic compounds were identified and quantified in this extract. The predominant phenolic compounds in the avocado seed extract include rutin, catechin, and syringic acid. Thus, this optimized MAE method could be beneficial for the extraction and analysis of polyphenolic antioxidants from avocado seeds for industrial purposes. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare no conflicts of interest regarding the publication of this paper. Figure S1: gallic acid calibration curve.
133187690
s2orc/train
v2
2019-04-26T14:23:46.379Z
2016-10-03T00:00:00.000Z
Sourcing obsidian: a new optimized LA-ICP-MS protocol Laser Ablation-Inductively Coupled Plasma-Mass Spectrometry [LA-ICP-MS] is one of the most successful analytical techniques used in archaeological sciences. Applied to the sourcing of lithic raw materials, it allows for fast and reliable analysis of large assemblages. However, the majority of published studies omit important analytical issues commonly encountered with laser ablation. This research presents a new advanced LA-ICP-MS protocol developed at Southern Cross GeoScience (SOLARIS laboratory, Southern Cross University, Australia), which optimizes the potential of this cutting-edge geochemical characterization technique for obsidian sourcing. This new protocol uses ablation lines with a reduced number of assayed elements (specific isotopes) to achieve higher sensitivity as well as increased precision and accuracy, in contrast to previous studies working with ablation points and an exhaustive list of measured isotopes. Applied to obsidian sources from the Western Mediterranean region, the Carpathian basin, and the Aegean, the results clearly differentiate between the main outcrops, thus demonstrating the efficiency of the new advanced LA-ICP-MS protocol in answering fundamental archaeological questions. Statement of significance Our new LA-ICP-MS protocol, specifically tailored for the geochemical sourcing of obsidian artefacts in the Western Mediterranean area, was developed at SOLARIS (Southern Cross GeoScience, Southern Cross University, Australia) with a top-of-the-range Agilent 7700x ICP-MS coupled to an ESI NWR 213 Laser Ablation System. Taking into account the common analytical issues encountered with the LA-ICP-MS technique, we focused on two parameters: the use of ablation lines instead of ablation points, and the development of a reduced list of measured isotopes. The use of ablation lines aims to compensate for any sample heterogeneity, achieve a higher count rate as well as a better signal stability, and also reduce laser-induced elemental fractionation. The measured isotopes have been carefully selected amongst the most efficient to discriminate between the different obsidian sources. This shortened list of isotopes achieves precise and accurate measurements with a higher sensitivity and, with the use of ablation lines, contributes to enhancing the potential of this geochemical characterization technique for obsidian sourcing. Data availability The LA-ICP-MS results for the obsidian geological samples from the Mediterranean area are available as supplementary data. Introduction Geochemical characterization methods currently used for obsidian sourcing studies in archaeology include, among others, X-Ray Fluorescence spectroscopy [XRF] and LA-ICP-MS; however, many of the LA-ICP-MS protocols applied to obsidian were developed disregarding some well-known analytical issues. Many studies still use discrete ablation points, despite the fact that the use of ablation lines and rasters is a well-established means of overcoming elemental fractionation (Jackson 2001), which is one of the main issues of LA-ICP-MS analysis. Lines and rasters also allow for a higher count rate, achieve better signal stability, and help compensate for sample heterogeneity. Most obsidian sourcing studies were also assaying up to 30 isotopes, when only a handful of these isotopes are typically used to discriminate between the obsidian sources and attribute the artefacts to those sources. Here we present, validate, and explain the rationale underlying a protocol designed to optimize the LA-ICP-MS technique for obsidian sourcing.
Geological and archaeological obsidian samples were analysed as a means of testing this new protocol, which improves analytical sensitivity, accuracy, reliability, and efficiency (i.e. swiftness in regard to the aforementioned factors) by focusing on two main changes: (a) the use of a reduced list of assayed isotopes, and (b) the use of ablation lines instead of ablation points, as advised in earlier methodological studies. V1 and V2 protocols The hypothesis explored here is that a reduced number of assayed isotopes can achieve a better sensitivity. This led to the development and comparison of two different protocols: one commonly found in the literature (named V1), which employs an exhaustive list of measured isotopes, and a second, optimized protocol (V2), which employs a reduced list of isotopes. The instrumental settings used for both protocols are summarized in Table 1. Laser ablation parameters As previously mentioned, the use of ablation lines in LA-ICP-MS analyses has been proven to reduce element fractionation, correct for sample heterogeneity, and achieve higher count rates. To our knowledge, such an ablation protocol has rarely been applied to obsidian sourcing. In this study, we opted to use ablation lines in order to optimize the LA-ICP-MS technique. With our protocol designed for both geological and archaeological obsidian samples, the ablation settings have been tailored specifically for each sample type. The same instrumental parameters were utilized in both cases (see Table 1). Geological samples The geological samples were cut and embedded in an epoxy resin (Epofix, Struers), then polished down to ¼ µm (using a polycrystalline diamond solution). Before analysis, the geological samples were cleaned in distilled water in an ultrasonic bath for five minutes, then rinsed consecutively with running tap water, distilled water, and alcohol. On these polished sections, an ablation line of 1.2 mm with a scan speed of 10 µm/sec achieved a 2:15 min signal, and a spot size of 60 µm width and 5 µm depth was used to attain the best possible results. A laser output of 40% [energy per pulse ≈ 0.044 mJ] was selected. Archaeological samples For the archaeological samples, the protocol was adapted to minimize the impact of ablation and thus maximize the preservation of the artefact. Accordingly, the ablation line was reduced to 40 µm wide (thinner than a human hair) and 0.6 mm long, making it barely visible to the naked eye and considered virtually non-destructive. The depth of the line was increased to 10 µm in order to make up for any geochemical surface alteration (often present on artefacts; see Poupeau et al. 2010). To compensate for the loss of signal due to the shorter and narrower ablation line, the scan speed was lowered to 5 µm/sec and the output amplified to 80% [energy per pulse ≈ 0.389 mJ] instead of the 40% used for the geological samples. Preparation of the archaeological samples before analysis involved cleaning in distilled water in an ultrasonic bath for five minutes, followed by successive thorough rinses with distilled water, alcohol, and acetone. Sensitivity: V1 vs. V2 protocol In order to compare the sensitivity of our V1 and V2 protocols, a series of measurements was obtained on the same day, under similar plasma conditions, on the NIST 613 SRM.
For all of the isotopes common to both protocols (66Zn, 85Rb, 88Sr, 89Y, 90Zr, 93Nb, 133Cs, 137Ba, 146Nd, 147Sm, 208Pb, 232Th, and 238U), a simple comparison of the raw counts shows that higher count rates were achieved with the second protocol (Table 2), and so a higher sensitivity (raw count rate/expected concentration in ppm) was established. Indeed, since fewer isotopes are selected in the V2 protocol but the total acquisition time per line stays the same (2:15 min), each isotope signal is acquired for a longer period (2:15 min divided by 15 instead of 30). Therefore, higher count rates were achieved, resulting in higher sensitivity. A reference sample (Mazet et al., in prep.) was analyzed with the V2 protocol during a total of 25 runs in order to assess the accuracy, precision, and reproducibility of our analyses; the corresponding relative errors against reference values are reported in Table 3. For 232Th, the relative error does not exceed 6%; for the majority of isotopes the relative error is below 5%, and it is less than 3% for five of them (85Rb, 88Sr, 137Ba, 208Pb, and 238U). To further our assessment of the V2 protocol accuracy, we also compared the relative error obtained on the same number of measurements (n = 8) on the NIST 613 standard between the V1 and V2 protocols. For the majority of the isotopes assessed, the relative error, here again calculated against the reference values of the GeoRem database, is lower for the V2 protocol results than for the V1 protocol results (see Table 3). This new protocol therefore produces accurate results while achieving higher sensitivity for isotope discrimination. Precision To compare the precision of the analysis between the exhaustive (V1) and optimized (V2) protocols, the standard error of the mean was calculated for each of the 13 isotopes assayed in both protocols (8 measurements). The results are presented in Table 4 and show, for each isotope, a considerably lower standard error of the mean for the V2 protocol as well as a lower standard deviation, i.e. a higher precision of the measurements. This clearly reflects the fact that assaying a smaller number of isotopes multiplies the measurement points, consequently increasing the precision. The same conclusion would be reached if it were possible to compare our data to previous studies using several ablation points (data unavailable/unpublished), since an ablation line in fact consists of a series of points, i.e. about 70 to 80 in our V2 protocol, a quantity difficult to reach in a reasonable time with point-ablation ICP-MS analysis protocols. As demonstrated in Table 4, only the 66Zn isotope, which may suffer interferences from polyatomic species (e.g. 50Ti16O; see Evans and Giglio 1993), presents a higher standard error of the mean than in the V1 protocol. Reproducibility The reproducibility of the analyses through time was also assessed, as it represents a crucial factor in archaeological studies, particularly sourcing studies. Using the same international standard (NIST SRM 613), the evolution of the 66Zn, 88Sr, 133Cs, 137Ba, and 146Nd contents was observed over a 6-month period, as illustrated in Fig. 1 (23 measurements represented). The variations frequently remain within a 2s range, thus attesting to the repeatability of these measurements.
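The three figures of merit discussed in this section (sensitivity as counts per ppm, relative error against a reference value, and the standard error of the mean over replicate lines) reduce to simple arithmetic once per-line results are exported. The sketch below uses invented numbers for a single isotope and is meant only to show the calculations, not to reproduce the values in Tables 2-4.

```python
# Minimal sketch of the figures of merit discussed above, for one isotope.
# The count rates and concentrations are invented placeholders, not SOLARIS data.
import statistics as st

expected_ppm = 42.0                      # assumed reference concentration (e.g. GeoRem)
measured_ppm = [41.1, 43.0, 42.5, 40.8, 42.9, 41.7, 43.3, 42.1]   # 8 replicate lines
raw_counts_v1 = 1.8e5                    # mean raw counts, exhaustive list (V1)
raw_counts_v2 = 3.6e5                    # mean raw counts, reduced list (V2)

sensitivity_v1 = raw_counts_v1 / expected_ppm      # counts per ppm
sensitivity_v2 = raw_counts_v2 / expected_ppm

mean_ppm = st.mean(measured_ppm)
relative_error = abs(mean_ppm - expected_ppm) / expected_ppm * 100   # accuracy, %
sem = st.stdev(measured_ppm) / len(measured_ppm) ** 0.5              # precision

print(f"sensitivity V1: {sensitivity_v1:.0f} counts/ppm, V2: {sensitivity_v2:.0f} counts/ppm")
print(f"relative error: {relative_error:.2f} %, standard error of the mean: {sem:.3f} ppm")
```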
Matrix-induced effect and comparison to a common protocol The BCR-2G standard (glass, basaltic composition; USGS, 2014) from the U.S. Geological Survey (USGS) was analyzed several times to control for matrix-induced effects. The obtained average composition was compared against the USGS and GeoRem reference values, as well as against the values obtained by Barca et al. (2007) with LA-ICP-MS (see Table 5). The accuracy was assessed as the relative error between the measured values and the reference values from the GeoRem database. Accurate results were obtained, and the relative error remains systematically below 10%, except for the zinc content, which appears problematic. Compared with the ablation point and exhaustive isotope list protocol of Barca et al. (2007), our optimized protocol obtained comparable or better results for most of the measured isotopes. Sources discrimination and provenance attribution of artefacts The viability of a specific method for obsidian sourcing does not only lie in its reliability (by which we mean sensitivity, precision, accuracy, and reproducibility; see e.g. Hughes 1998 and Frahm 2012 for discussion), but also in its validity, i.e. its ability to distinguish between the relevant obsidian sources and to attribute obsidian artefacts from an assemblage to a specific source. The concept of source is defined in this context as a specific geochemical signature and not as a geographical location (see Hughes and Smith 1993). The primary known obsidian sources of the Western Mediterranean area, the Carpathian basin, and the Aegean area (Fig. 2) were considered in this study to assess the validity of the V2 protocol for obsidian sourcing, including Sardinia (sub-types SA, SB1, SB2, and SC; Tykot 1997) and Lipari (Pichler 1980). These sources were clearly distinguished from one another, thus confirming the validity of the V2 protocol in the geographical area considered. The validity of our protocol on the archaeological level, i.e. its capacity to attribute each artefact of an assemblage to a specific source, was assessed through the analysis of 538 archaeological samples from the Tyrrhenian area (Neolithic period). The results are presented in Table 6 and are in fairly good agreement with those reported by other laboratories. Only the measured 88Sr content for the SC group is slightly lower than in the other studies, i.e. 82-106 ppm (taking into consideration 1 standard deviation), while other laboratories report values ranging from 95 to 167 ppm. This difference could possibly be explained by a difference in source sampling. Conclusions This study demonstrates that the new LA-ICP-MS protocol developed at Southern Cross University improves analytical reliability, validity, and efficiency when applied to identifying obsidian provenance in the Western Mediterranean. Analysis of the NIST SRM 613 international standard using the enhanced protocol (V2) demonstrated an improved ability to obtain accurate and precise measurements with a higher sensitivity and within a very limited time frame (3 to 5 point measurements of about 60 s each are usually used in previous studies, whereas our protocol produces a series of 70 to 80 measurement points in 2:15 min). Comparing the data obtained on the BCR-2G basalt standard (USGS) with a standard protocol using ablation points and an exhaustive list of isotopes (Barca, De Francesco, and Crisci 2007), our optimized protocol using lines and fewer isotopes obtained better or comparable results when considering the accuracy of the measurements; the V1 analysis was more accurate than V2 for only 4 of 14 isotopes. Furthermore, when the V2 protocol is applied to the Mediterranean obsidian sources, the differentiation between sources is particularly distinct, thus confirming the validity of the optimized protocol (V2) as a sourcing tool in obsidian provenance research.
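In practice, attributing an artefact to a source amounts to comparing its composition with the characterized source groups. The sketch below shows a deliberately simple nearest-centroid rule on log-transformed concentrations of a few discriminant elements; the source signatures and the artefact values are invented placeholders, and a real attribution would rely on the full measured dataset and appropriate multivariate statistics.

```python
# Minimal sketch: attribute an artefact to the geochemically closest source group
# using a nearest-centroid rule on log-transformed element concentrations (ppm).
# Source signatures and the artefact values below are invented placeholders.
import math

source_centroids = {          # mean concentrations per source (illustrative only)
    "Sardinia_SA": {"Rb": 310, "Sr": 10, "Zr": 70,  "Ba": 40},
    "Sardinia_SC": {"Rb": 190, "Sr": 95, "Zr": 160, "Ba": 640},
    "Lipari":      {"Rb": 330, "Sr": 15, "Zr": 170, "Ba": 60},
}

artefact = {"Rb": 200, "Sr": 90, "Zr": 150, "Ba": 600}

def log_distance(sample, centroid):
    """Euclidean distance in log10 concentration space."""
    return math.sqrt(sum((math.log10(sample[e]) - math.log10(centroid[e])) ** 2
                         for e in sample))

distances = {name: log_distance(artefact, c) for name, c in source_centroids.items()}
best = min(distances, key=distances.get)
print(distances, "-> attributed to:", best)
```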
Further study is required to investigate the rather low precision and accuracy results for the 66Zn isotope, as well as the application of the V2 protocol rationale to further obsidian sources in the Mediterranean area (e.g. the Near East). In conclusion, the use of a refined LA-ICP-MS protocol tailored specifically to the target material is a demonstrably effective means of optimizing this cutting-edge geochemical characterization technique. In obsidian sourcing, a meticulous selection of the isotopes to be measured is particularly important in order to discriminate between the sources of a particular geographical area: the more judiciously the list of isotopes is selected, the better the results. Conflict of interest statement The authors confirm there are no conflicts of interest. Author biographies Marie Orange is a Ph.D. student within Southern Cross GeoScience, Southern Cross University, Australia. Her research focuses on obsidian trade in the Western Mediterranean during the Neolithic period. Dr. François-Xavier Le Bourdonnec is an Associate Professor of Archaeological Sciences at Bordeaux Montaigne University. His work deals with the circulation and economy of prehistoric lithic raw materials. Dr. Anja Scheffers is a professor at Southern Cross University, Australia. Her research focuses on how coastal environments have changed in the past. She is particularly interested in processes that shape and modify coastal landscapes over a variety of length and time scales and the coupling and feedback between such processes, their rates, and their relative roles, especially in the contexts of variation in climatic and tectonic influences and in light of changes due to human impact. Dr. Renaud Joannes-Boyau is a Senior Research Fellow at Southern Cross University, Australia, in charge of the ESR dating and Laser-Ablation ICP-MS laboratories. His research involves the application of physical techniques to archaeological problems, in particular the direct dating of faunal and hominid fossil remains, as well as the investigation of isotopic signatures in fossil teeth and bones to reconstruct dietary changes and diagenetic processes.
237577920
s2orc/train
v2
2021-09-21T13:48:39.273Z
2021-09-20T00:00:00.000Z
Machine Learning in the Differentiation of Soft Tissue Neoplasms: Comparison of Fat-Suppressed T2WI and Apparent Diffusion Coefficient (ADC) Features-Based Models Machine learning has been widely used in the characterization of tumors recently. This article aims to explore the feasibility of the whole tumor fat-suppressed (FS) T2WI and ADC features-based least absolute shrinkage and selection operator (LASSO)-logistic predictive models in the differentiation of soft tissue neoplasms (STN). The clinical and MR findings of 160 cases with 161 histologically proven STN were reviewed, retrospectively, 75 with diffusion-weighted imaging (DWI with b values of 50, 400, and 800 s/mm2). They were divided into benign and malignant groups and further divided into training (70%) and validation (30%) cohorts. The MR FS T2WI and ADC features-based LASSO-logistic models were built and compared. The AUC of the FS T2WI features-based LASSO-logistic regression model for benign and malignant prediction was 0.65 and 0.75 for the training and validation cohorts. The model’s sensitivity, specificity, and accuracy of the validation cohort were 55%, 96%, and 76.6%. While the AUC of the ADC features-based model was 0.932 and 0.955 for the training and validation cohorts. The model’s sensitivity, specificity, and accuracy were 83.3%, 100%, and 91.7%. The performances of these models were also validated by decision curve analysis (DCA). The AUC of the whole tumor ADC features-based LASSO-logistic regression predictive model was larger than that of FS T2WI features (p = 0.017). The whole tumor fat-suppressed T2WI and ADC features-based LASSO-logistic predictive models both can serve as useful tools in the differentiation of STN. ADC features-based LASSO-logistic regression predictive model did better than that of FS T2WI features. Introduction Soft tissue neoplasms (STN), a group of heterogeneous tumors, are derived from blood vessels, lymphatic vessels, nerves, muscles, or other connective tissue [1]. STNs are commonly seen with complicated components and classified as benign, intermediate (metastatic or recurrent occasionally), and malignant subtypes by the WHO [1]. Except for a few tumors with characteristic imaging features, a definite histological diagnosis is usually challenging on imaging. A better prognosis can be achieved for most benign and intermediate STN. Soft tissue sarcoma (STS) represents about 1% of all malignancy; it recurs and metastasizes commonly with a poor prognosis [2]. MR imaging is the preferred method for detecting and staging of STN [3][4][5][6]. Conventional MR assessment of STN mainly focused on the morphologic findings, such as the tumor's size (> or ≤ 5 cm), contour (round or lobulated), margins (well-or ill-defined), heterogeneity of masses, and involvement of adjacent vital structures (bone/neurovascular bundle) [3-5, 7, 8]. Several studies were designed to explore the effectiveness of conventional MR in the differentiation of STN. The reported diagnostic accuracy ranged from 50 to 90% [3,5,6,8,9]. An overlap of the radiological features between benign and malignant tumors was frequently seen. Gadolinium (Gd)-based enhanced MR scan helped differentiate cystic from solid masses [10]. Additionally, the knowledge of prevalence and presentation of onset can serve as a supplement of morphological features in the differentiation of STN [3]. Surgical excision was the first-choice treatment for STN. 
Although the role of chemotherapy was controversial [11], a few subtypes of sarcomas were sensitive to chemotherapy, such as rhabdomyosarcoma (embryonal and alveolar subtypes), the Ewing sarcoma family of tumors, round cell liposarcoma, desmoplastic small round cell tumor, and synovial sarcoma [11]. Diffusion-weighted magnetic resonance imaging (DWI), based on the Brownian motion of water molecules, can reflect tissue microstructure [12]. The apparent diffusion coefficient (ADC) is a widely used quantitative parameter. Low ADC values indicate high cellular density and/or restricted microenvironments, while acellular regions show elevated ADC values [12][13][14][15]. Muscular sarcomas were reported with a broad range of ADC values [16]. Some researchers thought that the ADC value was a reliable quantitative parameter in the differentiation of STN [13,14,17]. Texture analysis (TA) is a method to evaluate a tumor by extracting and using features that are invisible to the naked eye. Texture analysis has been employed to differentiate tumors or tumors of different grades [18][19][20], but few studies have focused on the application of TA based on FS T2WI and ADC mapping in the differentiation of STN. Machine learning, as the intersection of statistics and computer science, has gradually been applied in the medical field recently [21]. It mainly focuses on how computers learn from big data and includes many algorithmic models, such as the least absolute shrinkage and selection operator (LASSO), support vector machine (SVM), random forest, and decision tree [22][23][24]. LASSO is commonly used and robust: it overcomes the shortcomings of multiple regression in high-dimensional data and is beneficial for feature selection [23][24][25]. We hypothesized that whole tumor FS T2WI and ADC features-based LASSO-logistic regression predictive models could characterize STN precisely. To assess the effectiveness of these two models in the characterization of STN, we retrospectively collected and reviewed the clinical and imaging findings of 160 patients with 161 histologically proven STN (75 of them with DWI). Study Population This retrospective study was approved by our institutional review board, and informed consent was waived. Between July 1, 2015, and December 31, 2015, the imaging features and clinical findings of patients with suspected soft tissue neoplasms were collected and reviewed retrospectively. The inclusion criteria were as follows: the STN were histologically proven (surgery or biopsy), and all the patients underwent an MR scan. Suspected STN that were not histologically proven or were without MR scans were excluded. Finally, 160 cases (161 histologically proven masses) with MR scans were collected and reviewed, 75 of them with diffusion-weighted imaging (DWI, with b values of 50, 400, and 800 s/mm2). The 38 soft tissue sarcoma (STS) cases with DWI were divided three times, into the chemosensitive and non-chemosensitive groups [11]; the small round cell and non-small round cell sarcoma groups; and the rhabdomyosarcoma and non-rhabdomyosarcoma groups. Demographic and Clinical Data The demographic and clinical data were reviewed, including the age of onset, gender, main manifestations, tumor locations, and histological results. The locations were recorded as the head and neck, trunk, retroperitoneum, and extremities.
Imaging Acquisition All the patients underwent conventional MR and/or DWI (with b values of 50, 400, and 800 s/mm2). Axial FS T2WI imaging and/or ADC mapping was used for whole tumor 3D volume segmentation and feature extraction (Figs. 1-3): • FS T2WI: TR 3,500-4,000 ms, TE 100-110 ms, ETL 15, matrix 512 × 512, number of excitations 2, slice thickness 5 mm, slice gap 1 mm, and FOV 250-350 mm. • T1WI: axial FSE/TSE sequences, TR 410-500 ms, TE 15 ms, matrix 512 × 512, number of excitations 2, slice thickness 5 mm, and slice gap 1 mm. • T2WI: coronal or sagittal TSE/FSE, TR 3,500-4,000 ms, TE 100-110 ms, number of excitations 2, slice thickness 5 mm, and slice gap 1 mm. DWI was performed before enhanced T1WI. DWI was acquired using a single-shot spin-echo echo-planar imaging (SS-SE-EPI) DWI sequence in free breathing with parallel imaging, with b values of 50, 400, and 800 s/mm2. The other scanning parameters were the same as those described above. The ADC mapping was generated using the mono-exponential decay model. Subsequently, all patients underwent enhanced T1-weighted imaging after the intravenous injection of 0.1 mmol/kg contrast medium (Magnevist, Bayer Schering Pharma, Berlin, Germany) at a flow rate of 2-3 ml/s. Tumor Segmentation and the Extraction of FS T2WI and ADC Features LIFEx v4.00 software (https://www.lifexsoft.org/) was employed for tumor segmentation and feature extraction. Tumor segmentation was done by a radiologist with 12 years of experience in MR interpretation of STN (Figs. 2 and 3). Conventional MR images were referred to during selection of the region of interest (ROI). The ROIs were manually selected using LIFEx v4.00 software to cover the whole tumor. The steps of texture feature extraction were as follows: ROI selection (3D model), spatial resampling (1 mm × 1 mm × 1 mm), intensity discretization (number of gray levels, 64), and intensity rescaling (relative, mean ± 3SD). The ROIs were measured twice at a 1-year interval. The Construction and Validation of the Predictive Model These cases were randomly divided into training (70%) and validation (30%) cohorts. The texture features of the training cohort were used for constructing the predictive model, and the features of the validation cohort were used for validation. The inter-observer correlation coefficient (ICC) was used to evaluate the repeatability of these features. In order to handle the high-dimensional data better and select features, the LASSO algorithm was employed. LASSO-logistic regression with tenfold cross-validation and the 1 standard error rule was used to reduce data dimensions, select features, and build a predictive model. The receiver operating characteristic (ROC) curve and DCA were used to validate the effectiveness of the model. Statistical Analysis The R (version 3.6.0, https://www.r-project.org/), SPSS 20.0, and MedCalc statistical software packages were employed for data analysis. The Kolmogorov-Smirnov test was employed for testing normal distribution. The independent Student's t test was employed to analyze the differences in texture features. ROC curves were generated to determine the cut-off values. The AUCs were calculated and further compared by the DeLong test. The DCA was done with R software. The glmnet and pROC packages of R software were employed. Values of p < 0.05 were considered statistically significant.
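For reference, the ADC behind each map voxel follows from the mono-exponential model S(b) = S0·exp(−b·ADC) fitted across the acquired b values. The sketch below shows a log-linear least-squares fit for a single voxel, assuming the three b values used here and made-up signal intensities.

```python
# Minimal sketch: per-voxel ADC from the mono-exponential model S(b) = S0 * exp(-b * ADC),
# fitted by linear least squares on log(S). Signal intensities are made-up numbers.
import numpy as np

b_values = np.array([50.0, 400.0, 800.0])          # s/mm^2, as used in the study
signal = np.array([950.0, 620.0, 410.0])           # hypothetical voxel intensities

# log(S) = log(S0) - b * ADC  ->  fit a straight line in b
slope, intercept = np.polyfit(b_values, np.log(signal), 1)
adc = -slope                                        # mm^2/s
s0 = np.exp(intercept)

print(f"ADC = {adc:.2e} mm^2/s (S0 = {s0:.0f})")    # e.g. on the order of 1e-3 mm^2/s
```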
Demographic and Clinical Data There were 84 masses in the benign group and 77 in the malignant group (Table 1), and there were 37 benign and 38 malignant STNs with DWI. The gender ratio (female:male) was 77:83. The ages ranged from 1 month to 82 years, and the median age was 29.5 years. Thirty-three masses were in the head and neck region, 93 arose in the trunk (7 in the retroperitoneal space), and 35 arose in the extremities (21 in the lower, 14 in the upper). There were 38 cases of STS that underwent DWI; 17 chemosensitive and 21 non-chemosensitive sarcomas, 13 small round cell and 25 non-small round cell sarcomas, and 17 rhabdomyosarcomas and 21 non-rhabdomyosarcomas were enrolled. Most of the patients complained of enlarging, painful, or painless masses. The other manifestations were the Kasabach-Merritt phenomenon (KMP), proteinuria (1 case), and yellowish skin. The Differences of MR FS T2WI and ADC Features Between Benign and Malignant Groups The ICC of the texture features ranged from 0.81 to 0.94, showing good repeatability. There were 14 MR FS T2WI features with significant differences between benign and malignant STN (p < 0.05) (Table 2). There were also 12 ADC features with significant differences between benign and malignant tumors (p < 0.05), including the mean ADC, max ADC, STD, and HISTO-skewness values (Table 2). The features between chemosensitive and non-chemosensitive sarcomas, between small round and non-small round cell sarcomas, and between rhabdomyosarcomas and non-rhabdomyosarcomas were not significantly different (p > 0.05). The Construction and Validation of FS T2WI and ADC Features-Based Predictive Models The LASSO algorithm with tenfold cross-validation was employed for reducing data dimensions and feature selection. The whole tumor 3D MR FS T2WI features of the training cohort (114 cases) were used to build predictive models. The deviance of classification was minimized when λ (lambda) was 0.134 (Fig. 4), and only one feature, GLZLM_ZP, was selected. The LASSO-logistic regression predictive model was built, and the regression equation was Y benign/malignant = −0.0713 − 0.2472 × (GLZLM_ZP). The AUC of the ROC curve was 0.65 for the training cohort. The AUC of the ROC curve was 0.75 for the validation cohort (Fig. 5a), and the sensitivity, specificity, and accuracy were 55%, 96%, and 76.6%, respectively. For the ADC features-based model, the deviance of classification was minimized when λ was 0.038 (Fig. 4). The AUC was 0.932 for the training set. The AUC was 0.955 for the validation set (Fig. 5b), and the sensitivity, specificity, and accuracy were 83.3%, 100%, and 91.7%, respectively. The effectiveness of the predictive models was also validated by DCA (Fig. 6). DCA of the FS T2WI and ADC features-based predictive models showed that these two models both provided net clinical benefit. (Fig. 4: Feature selection using the LASSO-logistic algorithm with tenfold cross-validation and the 1 standard error rule. The optimal tuning value (a1, b1) was selected for benign and malignant STN prediction, and (a2, b2) show the corresponding features.) The Comparison of FS T2WI and ADC Features-Based Predictive Models The ROCs of the validation cohorts were used for the comparison of the FS T2WI and ADC features-based predictive models. The ADC features-based LASSO-logistic regression predictive model did better than that of the FS T2WI features in the differentiation of STN (z = 2.386, p = 0.017).
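A comparable modelling pipeline can be sketched with scikit-learn: an L1-penalized (LASSO-type) logistic regression with cross-validated selection of the penalty, followed by AUC evaluation on a held-out split. The snippet below uses a synthetic feature matrix standing in for the LIFEx texture features and only mirrors the published workflow, which was implemented in R with glmnet and pROC and additionally applied the 1 standard error rule (not shown here).

```python
# Minimal sketch of a LASSO-logistic workflow: L1-penalized logistic regression with
# cross-validated penalty selection, then AUC on a held-out validation split.
# The synthetic feature matrix stands in for the LIFEx texture features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=161, n_features=40, n_informative=6,
                           random_state=0)                       # stand-in radiomics data
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3,
                                                  stratify=y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    LogisticRegressionCV(Cs=20, cv=10, penalty="l1", solver="liblinear",
                         scoring="roc_auc", random_state=0),     # 10-fold CV over the penalty
)
model.fit(X_train, y_train)

coefs = model.named_steps["logisticregressioncv"].coef_.ravel()
print("features kept by the L1 penalty:", int(np.sum(coefs != 0)))
print("validation AUC:", round(roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]), 3))
```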
The HISTO-skewness value can serve as another useful feature for this differentiation. Machine learning on whole-tumor FS T2WI and ADC values facilitated the differentiation of benign and malignant STN, and the ADC features-based LASSO-logistic regression model performed better than the FS T2WI features-based model. Texture analysis, by extracting radiomic features that are not discernible to the eye, is useful for analyzing tumor heterogeneity, and it maximizes the use of existing images without adding scan sequences [20]. Corino VDA et al. found that MR radiomic features could accurately distinguish intermediate-grade soft tissue sarcomas from high-grade ones [26]; the accuracy and AUC were 0.90 and 0.85 for their validation set and 0.88 and 0.87 for their test set. Although our FS T2WI features-based model had high specificity (96%), its sensitivity was low (55%). The ADC features-based model achieved high overall effectiveness, with a sensitivity, specificity, and accuracy of 83.3%, 100%, and 91.7%, respectively, and it outperformed the FS T2WI features-based model.
The ADC value is affected by ROI position and by the b values selected. We used the whole tumor as the ROI to avoid selection bias, and the quantitative parameters we measured showed good repeatability. Similar to the literature [14], we chose three b values (50, 400, and 800 s/mm²): b = 50 s/mm² is less affected by microvascular perfusion than b = 0 s/mm², and 800 s/mm² was selected to ensure a sufficient signal-to-noise ratio (SNR). DWI, which reflects water molecule diffusion, is useful for the detection and differentiation of tumors and facilitates therapeutic assessment [13, 27-32]. Some benign STN resemble malignant ones on conventional MR sequences and are often misdiagnosed [31,33]. Most researchers have considered the mean and minimal ADC values helpful in the differentiation of STN [34,35], and the mean ADC value from volumetric quantification has high interobserver agreement and reflects tumor heterogeneity [36]. Van Rijswijk et al. [37] held a different view, reporting that malignant lesions had significantly lower true diffusion coefficients. We found that the mean ADC and HISTO-skewness values were valuable in the characterization of STN and performed better than the minimal ADC value, and this was confirmed by the LASSO-logistic model. The HISTO-skewness value can serve as an additional useful feature for differentiation, which has not been reported previously: benign STN often exhibit a negatively skewed distribution because of their low cell density and large extracellular space, whereas malignant ones show a positively skewed distribution. Texture analysis of ADC mapping can therefore provide additional quantitative or semi-quantitative features for the differentiation of STN.
Several limitations should be mentioned. Selection bias could not be avoided: the patients were relatively young, and rhabdomyosarcoma was the most common malignancy. The sample size of intermediate tumors was relatively small; because such tumors seldom metastasize or recur, they were classified as benign. The value of texture analysis in the differentiation of STN should also be explored at different anatomic sites. Considering the sample size, we did not compare the efficacy of different machine learning models. Moreover, point-to-point radiological and histological correlation could not be performed because of the retrospective design.
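For readers who wish to reproduce the general modelling workflow, the sketch below illustrates an L1-penalised (LASSO) logistic regression with ten-fold cross-validation and ROC-based validation. It is only an approximation of the analysis reported here, which was carried out in R with the glmnet and pROC packages on LIFEx-derived texture features; the feature matrix, labels, and number of features shown are placeholders, and scikit-learn does not apply the exact 1-standard-error rule used in the study.

# Hedged sketch of the feature-selection / validation workflow described above.
# Synthetic data stand in for the LIFEx texture features; sizes mirror the cohort (161 masses).
import numpy as np
from sklearn.model_selection import train_test_split, StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
X = rng.normal(size=(161, 40))          # placeholder: 161 lesions x 40 texture features
y = rng.integers(0, 2, size=161)        # placeholder labels: 0 = benign, 1 = malignant

# 70/30 split into training and validation cohorts, as in the study design
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

scaler = StandardScaler().fit(X_tr)     # LASSO is scale-sensitive, so standardise first
X_tr_s, X_va_s = scaler.transform(X_tr), scaler.transform(X_va)

# L1-penalised logistic regression, tuning the penalty by ten-fold cross-validation
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
model = LogisticRegressionCV(Cs=50, cv=cv, penalty="l1", solver="liblinear",
                             scoring="roc_auc", max_iter=5000).fit(X_tr_s, y_tr)

selected = np.flatnonzero(model.coef_[0])          # features surviving the L1 penalty
auc_tr = roc_auc_score(y_tr, model.predict_proba(X_tr_s)[:, 1])
auc_va = roc_auc_score(y_va, model.predict_proba(X_va_s)[:, 1])

# Youden-index cut-off on the validation ROC, from which sensitivity/specificity follow
fpr, tpr, thr = roc_curve(y_va, model.predict_proba(X_va_s)[:, 1])
best = np.argmax(tpr - fpr)
print(f"selected features: {selected}, AUC train/val: {auc_tr:.2f}/{auc_va:.2f}, "
      f"cut-off: {thr[best]:.3f}")

With random placeholder labels the reported AUC will hover around 0.5; the value of the sketch is the sequence of steps (split, scaling, penalised selection, ROC validation), not the numbers it prints.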
Conclusion ADC features of the whole tumor couldn't differentiate chemosensitive from non-chemosensitive sarcomas, small round from non-small round sarcomas, or rhabdomyosarcomas from non-rhabdomyosarcomas. The mean ADC and HISTO-skewness values did help in differentiating benign from malignant STN. The ADC features-based LASSO-logistic predictive model did better than the FS T2WI features-based model in the characterization of STN. Funding This research was supported in part by grants from the Science and Technology Council of Shanghai (grant no. 15ZR1408000, grant no. 18. no.12140901302, and grant no. 18140901200). Availability of Data and Material The raw data can be made available. Code Availability The R, SPSS 20.0, and MedCalc statistical software were used. Declarations Ethics Approval This retrospective study was approved by our institutional review board, and informed consent was waived. Consent for Publication All authors have agreed to publish this article. Conflict of Interest The authors declare no competing interests. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/.
Detection of hepatitis B virus DNA and HBsAg from postmortem blood and bloodstains A large number of accidental virus infections occur in medical and non-medical workers exposed to infectious individuals and materials. We evaluated whether postmortem blood and bloodstains containing hepatitis B virus (HBV) are infectious. HBV-infected blood and bloodstains were stored for up to 60 days at room temperature and subsequently screened for hepatitis B surface antigen (HBsAg) and HBV DNA. In addition, HBV-positive postmortem blood was added to a cell line and the production of HBV virions was examined over a period of 7 days. HBsAg and HBV DNA were detected in all samples stored for 60 days at room temperature. HBV-positive postmortem blood successfully infected the cell line and progeny viruses were produced for up to 6 days. Thus, it is crucial that due care is taken when handling not only living material infected with HBV, as well as other harmful viruses, but also blood or body fluids from cadavers or medical waste. Introduction At scenes of large-scale disasters or terrorist attacks, where there are a considerable number of casualties, many nonmedical specialists, including police officers and firefighters, must work together with medical teams to save the survivors and investigate the cause of the incident. During the Ebola virus outbreak of 2015, not only the medical teams assisting the patients but also many members of the public were secondarily infected with Ebola virus because of the custom of touching the deceased at their funeral [1]. In February 2015, more than 20 people, including forensic doctors at the University of Tokyo and police officers, were infected with tuberculosis during the transfer and autopsy of an infected corpse [2]. Corpses with unknown medical history are often examined in the field of forensic medicine. During the outbreak of Ebola virus and Middle East respiratory syndrome coronavirus, the Japanese government enacted a number of measures to prevent the transmission of secondary infections from travelers. However, these measures focused on living individuals, and infection from corpses was not considered. Unfortunately, it is more difficult to identify infection in a cadaver than it is in a living individual, such as by checking travel records or symptoms. Therefore, it is important to analyze the risk of infection from infected corpses. Excessive preventative measures when dealing with potentially infected corpses are not adequate from a costbenefit point of view, and unnecessary sterilization may result in environmental pollution. It is also unknown for how long a virus remains infectious in a corpse or bloodstain. To the best of our knowledge, no reports have clearly examined this issue. In our previous study, as a representative harmful virus, we examined if hepatitis C virus (HCV) can be detected in blood or bloodstains that were stored at room temperature for up to 60 days [3]. HCV-RNA was found to be detectable from blood and bloodstains for up to 60 days. Anti-HCV antibody (HCV-Ab) was also detectable for up to 60 days, so HCV-Ab screening can also be used to evaluate postmortem blood and bloodstain samples. However, even when the genome of a virus is detected, it is still not certain whether the virus capsid is also intact. In addition, if the virus capsid is intact, it is still unclear whether this virus is still infectious. 
HCV is very difficult to culture in cells in vitro, and culturing of HCV isolates directly from patient sera is as yet unattainable [4]. Hepatitis B virus (HBV) is a partially double-stranded, enveloped DNA virus classified within the Hepadnaviridae as well as a member of the hepatitis virus grouping with HCV. Despite the availability of a vaccine HBV infection is still a global health problem, since over 240 million people are estimated to be chronically infected by HBV [5,6] and more than 300,000 die annually from cancer or liver dysfunction associated with HBV infection [7]. HBV can be grown easily in cell culture [8,9]. Therefore, in this study, we selected HBV as a representative 'harmful virus' for analysis. We stored HBV-infected blood and bloodstains for up to 60 days at room temperature and examined if HBV DNA and hepatitis B surface antigen (HBsAg) could be detected. In addition, HBV-infected postmortem blood was added to a cell line and we examined if this HBV-infected cell line could produce progeny virus. Samples HBV-infected blood samples were obtained with informed consent from 6 patients (4 men and 2 women; mean age, 35.6 ± 9.0 years; range, 26-44 years) at the University Hospital, Kyoto Prefectural University of Medicine and Aiseikai Yamashina Hospital for serological analysis and clinical diagnosis ( Table 1). Measurement of HBV in blood samples Prior to our experiments, the HBV DNA titer in all clinical samples was determined using the COBAS TaqMan HBV DNA Assay (Roche Molecular Systems, Pleasanton, CA). Titers ranged from 4.2 to 9.1 log IU/mL (average, 6.51 ± 2.45 log IU/mL). The limit of detection was 1.3 log IU/mL. All samples were stored at -80 °C until use. Blood and bloodstain preparation Bloodstain samples were prepared by soaking cotton buds in 0.1 mL of HBV-infected whole blood samples (n = 6) for 1 min and then drying at room temperature for up to 60 days. HBV-infected whole blood samples (n = 6) were placed in sealed 2-mL test tubes and kept at room temperature (20 °C) for up to 60 days. The prepared blood and bloodstain samples were analyzed at 3, 9, 27, and 60 days after preparation. Detection of HBsAg HBsAg from the bloodstain and whole blood samples was detected using immunochromatography with an Ortho Quick Chaser HBsAg Kit (Ortho Clinical Diagnostics, Tokyo, Japan). Before testing, the bloodstain samples were soaked in 400 µL saline; 100 µL of the extracted solution was then analyzed using immunochromatography. The limit of detection was 20 ng/mL. Detection of HBV genome DNA was extracted from 200 µL diluted whole blood and 200 µL solution extracted from bloodstained materials with a QIAamp DNA Mini Kit (QIAGEN, Hilden, Germany). The extracted DNA was eluted in 50 µL elution buffer and used for genome amplification of the HBV S gene using PCR with AmpliTaq Gold DNA Polymerase (Applied Biosystems LLC, Foster City, CA, USA) in 25 µL aliquots containing 2.5 µL 10 × Gold buffer, 500 M deoxynucleoside triphosphate, 1.5 mM MgCl 2 , and 0.6 µM primers. The sense primer 5′-GTC TAG ACT CGT GGT GGA CTT CTC TC-3′ and antisense primers 5′-AAG CCA AAC AGT GGG GGA AAGC-3′ were used as previously [10]. DNA polymerase was initially activated at 95 °C for 11 min for PCR. PCR amplification was performed for 35 cycles at 94 °C for 15 s, 55 °C for 5 s, and 72 °C for 30 s, followed by a final step at 72 °C for 10 min. Amplification was carried out in a PC-320 thermal cycler (ASTEC, Fukuoka, Japan). 
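As a rough guide to how such a reaction is assembled, the snippet below computes per-component volumes for a 25 µL PCR from the final concentrations quoted above. The stock concentrations, template volume, and polymerase volume are not stated in the text and are typical values assumed purely for illustration.

# Hedged example: volumes for one 25 uL reaction of the PCR described above.
# Final concentrations come from the text; STOCK concentrations and the
# template/polymerase volumes are assumptions for illustration only.
TOTAL_UL = 25.0

# (final concentration, stock concentration), both in the same units per component
components = {
    "10x Gold buffer":  (1.0, 10.0),        # expressed as "x" strength
    "dNTP mix (uM)":    (500.0, 10000.0),   # assumed 10 mM stock
    "MgCl2 (mM)":       (1.5, 25.0),        # assumed 25 mM stock
    "fwd primer (uM)":  (0.6, 10.0),        # assumed 10 uM stock
    "rev primer (uM)":  (0.6, 10.0),
}

volumes = {name: final * TOTAL_UL / stock for name, (final, stock) in components.items()}
fixed = {"template DNA": 2.0, "AmpliTaq Gold": 0.125}   # assumed volumes, not from the text
water = TOTAL_UL - sum(volumes.values()) - sum(fixed.values())

for name, vol in {**volumes, **fixed, "water": water}.items():
    print(f"{name:>18}: {vol:5.2f} uL")

The 10x buffer works out to 2.5 µL, matching the figure given in the text; the remaining volumes depend entirely on the assumed stocks.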
PCR products were mixed with 6 × loading buffer Orange G and subjected to electrophoresis on a 1.5% agarose gel at 100 V for 30 min. The electrophoresed agarose gel was stained with ethidium bromide (0.5 µg/ HBV-infected postmortem case In August 2016, a body was found floating in the sea by a fisherman, about 700 m from the coast. A rescue helicopter arrived at the scene soon after the emergency call. However, the victim was found to be in cardiopulmonary arrest and was pronounced dead at 14:08 pm. He was unidentified and the cause of death was unknown. Therefore, the body was sent for autopsy the next day and the cause of death was determined as drowning. Subsequent police investigation revealed that he was a 56-year-old male textile manufacturer living in the neighboring city. On the previous day, he had gone fishing at around 10:00 am. His medical history was never found. HBV infection of a cell line with postmortem blood A sample of whole blood was taken from the autopsy case and immediately separated; HBV copy number was measured as 5.0 log copies/mL. The sample components were stored at -80 °C until use. Human hepatocyte carcinoma-derived HepG2 cells were obtained from the RIKEN BioResource Center through the National BioResource Project of the Ministry of Education, Culture, Sports, Science and Technology (MEXT) Japan. The experiment was started 2 weeks after the autopsy. The HepG2 cell line (3.0 × 10e 5 cells/well) was plated in 2-cm tissue culture dishes. Blood samples containing 5.0 log copies/mL HBV (1.79 × 10e 3 IU/mL) were diluted from × 1 to × 100. Each 1 mL of diluted whole blood sample was added to individual dishes containing the cell line. These were incubated with an additional 1 mL DMEM, 10% fetal bovine serum, and 1% streptomycin at 37 °C for 24 h. The dishes were washed with fresh medium and incubated for an additional 72 h. Subsequently, the medium was changed in each dish and samples of the spent medium were sent for HBV analysis (Day 4). From the 4 th to 8 th days, the medium was changed every 24 h in each dish and the spent medium samples were sent for HBV analysis. HBV analysis was performed by HBsAg detection and PCR amplification of the HBV genome using the aforementioned methods. HBV-infected blood samples were obtained with informed consent. This study was approved by the Institutional Review Board of Kyoto Prefectural University of Medicine (G-52). Results HBV-DNA and HBsAg were detected in all blood and bloodstain samples stored at room temperature for up to 60 days, with viral loads of 4.2-9.1 log IU/mL detected ( Table 2). All samples in this study were therefore positive for HBsAg and HBV-DNA. In the postmortem case, HBsAg and HBV DNA were detected in the HepG2 cell line with HBV copies of 10.0 ×10 3 copies/mL detected up to 6 days. After 6 days, cell death occurred and the culture was discontinued. HBsAg and HBV-DNA were not detected in diluted samples of HBV-positive whole blood (Table 3). Discussion In our study, all samples were positive for both HBV-DNA and HBsAg, which may indicate that the virus capsid is sustained for a considerable period of time in blood and bloodstains. In addition, the fact that HBV from postmortem blood infected the cell line indicates the need for careful handling of materials that have come in to contact with a corpse, blood, and other body fluids. Almost 90% of HBV infections occur during the perinatal period and within 6 months after birth [11]. The HBV vaccine first went on sale in 1982. 
In many countries, HBV vaccination for newborn babies and medical workers started in the 1980s [11]. Therefore, the HBV-positive rate of blood donors is low, at approximately 0.1% [12]. Nowadays, the infection rate of HBV is lower than that of HCV [13]. However, despite vaccination, there are some people whose anti-HBs titer are negative or less than 10 IU/mL. Although it is said that immunological memory persists in such cases after vaccination, it is an issue that merits consideration [14]. In the United States, 6.2% of medical workers are positive for Anti-HBc, which is higher than the rate in blood donors, which is 1.8% [12], indicating that medical workers are at a high risk of infection. Approximately 75% of HBVrelated transmissions in healthcare workers are via percutaneous injury with a scalpel or needle; the remaining mode of transmission in these workers is via mucosal-cutaneous exposure. When an individual is positive for both HBsAg and HBeAg, there is a 22-31% risk of hepatitis [15]. Even in a high-risk working environment, medical workers have existing knowledge about infectious diseases and the appropriate use of guards such as gloves and masks. However, there is a higher risk of infection (and becoming Anti-HBcpositive) when non-medical workers, who lack this medical expertise, attend a disaster. The risk of HBV infection has been reduced by universal vaccination in several countries [16,17], however the infection risk, not only in the medical field but also in the general population remains high, therefore it is advisable to extend universal vaccination to the rest of the world. In this study, although it was only with a single case, HBV in postmortem blood successfully infected the HepG2 cell line (Table 3). HepG2 cells are a human hepatoblastoma cell line derived from a 15-year-old male with a well-differentiated carcinoma. HepG2 cells differ morphologically from primary hepatocytes. Recently, the sodium taurocholate cotransporting polypeptide (NTCP) was identified as a receptor for HBV [18]; however, it is not expressed in HepG2 cells [19]. However, some reports have described binding and entry of HBV using normal HepG2 cells; furthermore, although virion production was not observed in these studies [20][21][22][23], it was following transfection of HBV DNA in related studies [8,9]. It is therefore possible that the mechanisms of viral entry into HepG2 cells or hepatocytes has not been clearly elucidated. In our case study, we used normal HepG2 cells to observe the infection of HBV even though it is much easier to infect cells with HBV following NTCP expression. Even in these challenging conditions without NTCP expression, HBsAg and HBV DNA were detected in cultured cells. This finding does not conclude directly that HBV in postmortem materials remains infectious to humans, for instance it may be due to residual HBV in the culture dish. However, at least we can say that there is a possibility of infection. Our single case had no significant illness or a past medical history and, in addition, did not present with any significant gross pathology within the liver tissue ( Figure 1). Interestingly, even in such an inactive case, postmortem blood still had the potential to be infectious. Therefore, we should increase our preparedness and awareness concerning the possibility of infection from postmortem materials. 
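For orientation, the following back-of-the-envelope calculation converts the reported titre of the postmortem sample (5.0 log copies/mL, about 1.79 × 10³ IU/mL) into the approximate number of HBV genome copies added per culture dish at the dilutions used in the infection experiment; the 1 mL inoculum volume is taken from the protocol above, and the copies-per-IU factor is simply what the two reported figures imply.

# Simple arithmetic check of the inoculum sizes in the HepG2 experiment above.
import math

log_copies_per_ml = 5.0
copies_per_ml = 10 ** log_copies_per_ml            # 1.0e5 copies/mL of whole blood
iu_per_ml = 1.79e3
copies_per_iu = copies_per_ml / iu_per_ml          # implied conversion (~56 copies/IU)

inoculum_ml = 1.0                                   # 1 mL of diluted blood per dish
for dilution in (1, 10, 100):
    copies_in_dish = copies_per_ml / dilution * inoculum_ml
    print(f"1:{dilution:<3} dilution -> {copies_in_dish:,.0f} copies "
          f"({math.log10(copies_in_dish):.1f} log copies) per dish")
print(f"approx. {copies_per_iu:.0f} copies per IU implied by the reported titres")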
Compliance with ethical standards Funding This work was supported by a Grant-in-Aid for Scientific Research C from the Japan Society for the Promotion of Science (no. 22590641). Conflict of interest The authors declare that they have no conflict of interest. Ethical approval All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. This study was approved by the institutional review board of Kyoto Prefectural University of Medicine (G-52). Informed consent Informed consent was obtained from all individual participants included in the study. Fig. 1 Liver tissue from the clinical postmortem case (X400). Centrilobular necrosis (single arrow) and slight fibrosis of Glisson's sheath were seen (double arrows); however, no active infiltration of inflammatory cells was seen
Comparative Lipidomics of Azole Sensitive and Resistant Clinical Isolates of Candida albicans Reveals Unexpected Diversity in Molecular Lipid Imprints Although transcriptome and proteome approaches have been applied to determine the regulatory circuitry behind multidrug resistance (MDR) in Candida, its lipidome remains poorly characterized. Lipids do acclimatize to the development of MDR in Candida, but exactly how the acclimatization is achieved is poorly understood. In the present study, we have used a high-throughput mass spectrometry-based shotgun approach and analyzed the lipidome of genetically matched clinical azole-sensitive (AS) and -resistant (AR) isolates of C. albicans. By comparing the lipid profiling of matched isolates, we have identified major classes of lipids and determined more than 200 individual molecular lipid species among these major classes. The lipidome analysis has been statistically validated by principal component analysis. Although each AR isolate was similar with regard to displaying a high MIC to drugs, they had a distinct lipid imprint. There were some significant commonalities in the lipid profiles of these pairs, including molecular lipid species ranging from monounsaturated to polyunsaturated fatty acid-containing phosphoglycerides. Consistent fluctuation in phosphatidyl serine, mannosylinositolphosphorylceramides, and sterol esters levels indicated their compensatory role in maintaining lipid homeostasis among most AR isolates. Notably, overexpression of either CaCdr1p or CaMdr1p efflux pump proteins led to a different lipidomic response among AR isolates. This study clearly establishes the versatility of lipid metabolism in handling azole stress among various matched AR isolates. This comprehensive lipidomic approach will serve as a resource for assessing strategies aimed at disrupting the functions of Candida lipids, particularly the functional interactions between lipids and MDR determinants. Introduction The incidences of Candida cells acquiring multidrug resistance (MDR) are common, which in turn hamper their successful chemotherapy [1][2][3][4]. C. albicans as well as non-albicans species have evolved a variety of mechanisms to develop MDR to common antifungals. Reduced intracellular accumulation of drugs (due to rapid efflux) is one of the most prominent mechanisms of resistance in Candida cells. Accordingly, it has been well documented by several groups that clinical azole resistant (AR) isolates of C. albicans display transcriptional activation of genes encoding ATP Binding Cassette (ABC) multidrug transporter proteins CaCdr1p or CaCdr2p or Major Facilitator Super family (MFS) efflux pump protein CaMdr1p [5][6][7][8]. Lipids, in addition to being structural and metabolic components of yeast cells, also appear to play an indirect role in the frequently observed MDR in Candida. For example, CaCdr1p shows selectivity towards membrane recruitment and prefers membrane raft micro-domains for its localization within plasma membrane [9]. It has already been demonstrated that there are close interactions between raft constituents such as ergosterol and sphingolipids (SLs), and disruption of these results in altered drug susceptibilities [10,11]. Thus, any change in ergosterol composition by disruption of ERG genes, or change in SL composition by disruption of its biosynthetic genes leads to improper surface localization of CaCdr1p [9]. 
Interestingly, MFS transporter CaMdr1p shows no such selectivity towards raft lipid constituents and remains fully membrane localized and functional in cells where sphingolipid or ergosterol biosynthesis is compromised [9]. There are also instances where common regulation of MDR and lipid metabolism genes have been observed [12,13]. Any changes in the status of membrane lipid phase and asymmetry also seem to affect azole resistance in Candida cells [14]. Taken together, MDR in Candida is closely linked to the status of membrane lipids, wherein the overall drug susceptibility of a cell appears to be an interplay of membrane lipid environment, drug diffusion and extrusion [15]. Earlier studies describing changes in lipid composition in azole resistant isolates provided limited information, particularly due to the lack of high throughput analytical tools [16][17][18][19][20] and the use of randomly collected AS and AR isolates of Candida [21,22]. In the present study, we have utilized high throughput MS-based platform to get an insight into the dynamics of lipids in frequently encountered azole resistance in C. albicans cells. We have performed comprehensive lipid profiling and compared the lipidomes of genetically matched pairs of azole sensitive (AS) and resistant (AR) hospital isolates of C. albicans and evaluated if any changes in lipid imprints are typical to a drug-resistant phenotype. In our analysis, we focused on the contents of five major groups of lipids namely: phosphoglycerides (PGLs), SLs, sterol esters (SEs), di-acyl and tri-acyl glycerols (DAGs and TAGs respectively) and analyzed their molecular species. The PGL groups including phosphatidyl choline (PC), phosphatidyl ethanolamine (PE), phosphatidyl inositol (PI), phosphatidyl serine (PS), phosphatidyl glycerol (PG) and phosphatidyl acid (PA), and SL groups including ceramide (CER), inositolphosphorylceramide (IPC), mannosylinositolphosphorylceramide (MIPC), mannosyldiinositolphosphorylceramide (M(IP) 2 C) were analyzed. Less abundant lyso-lipids namely lysophophatidylcholine (LysoPC), lysophophatidylethanolamine (LysoPE) and lysophophatidylglycerol (LysoPG) were also detected. Using the combination of comparative lipidomics and its statistical validation, we individually identified over 200 molecular lipid species and evaluated the differences in lipids between the AS and AR pairs. The study shows that though each isolate is different in regard to its lipid profile, it does share a few commonalities with the other isolates, particularly at the level of molecular lipid species. This study provides a comprehensive picture of total lipidome in response to azole resistance in Candida cells. Lipid standards Synthetic lipids with FA compositions that are not found, or are of very low abundance in Candida, were used as internal standards. Lipid standards were obtained from Avanti Polar Lipids (Alabaster, AL). Strains, media and culture conditions C. albicans strains used in this study are listed in Supplementary Table S1. C. albicans cells were kept on YPD plates and inoculated in YPD medium (1% yeast extract, 2% glucose, and 2% bactopeptone). The cells were diluted into 50 ml fresh medium at 0.1 OD at A 600 (,10 6 cells/ml) and grown for 14 h until the cells reached exponential growth (,2610 8 cells/ml). Three separate cultures of each Candida strain were used. Lipid Extraction Lipids were extracted from Candida cells using a slight modification of the method of Bligh and Dyer [23]. 
Briefly, the Candida cells were harvested at exponential phase and were suspended in 10 ml methanol. 4 g glass beads (Glaperlon 0.40-0.60 mm) were added and the suspension was shaken in a cell disintegrator (B. Braun, Melsungen, Germany) four times for 30 sec with a gap of 30 sec between shakings. Approximately 20 ml chloroform was added to the suspension to give a ratio of 2:1 of chloroform:methanol (v/v). The suspension was stirred on a flat-bed stirrer at room temperature for 2 h and then filtered through Whatman No. 1 filter paper. The extract was then transferred to a separatory funnel and washed with 0.2 volumes of 0.9% NaCl to remove the non-lipid contaminants. The aqueous layer was aspirated and the solvent of the lipid-containing, lower organic layer was evaporated under N2. The lipids were stored at −80 °C until analysis.
Unfractionated lipid extracts were directly introduced by continuous infusion into the ESI source on a triple quadrupole MS (API 4000, Applied Biosystems, Foster City, CA). Samples were introduced using an autosampler (LC Mini PAL, CTC Analytics AG, Zwingen, Switzerland) fitted with the required injection loop for the acquisition time, and passed to the ESI needle at 30 ml/min. Sequential precursor (Pre) and neutral loss (NL) scans of the extracts produce a series of spectra revealing a set of lipid species containing a common head group fragment, and lipid species were detected with these head group-specific scans. The collision gas pressure was set at 2 (arbitrary units (au)). The collision energies, with nitrogen in the collision cell, were +40 V for PC, +28 V for PE, +25 V for PA, +22 V for PG, PI and PS, and −57 V for LysoPG. Declustering potentials were +100 V for PC, PE, PA, PG, PI, and PS, and −100 V for LysoPG. Entrance potentials were +14 V for PC, PA, PG, PI, and PS, +15 V for PE, and −10 V for LysoPG. Exit potentials were +14 V for PC, PA, PG, PI, and PS, +11 V for PE, and −14 V for LysoPG. The mass analyzers were adjusted to a resolution of 0.7 u full width at half height. For each spectrum, 9 to 150 continuum scans were averaged in multiple channel analyzer (MCA) mode. The source temperature (heated nebulizer) was 100 °C, the interface heater was "on", and +5.5 kV or −4.5 kV were applied to the electrospray capillary. The curtain gas was set at 20 au, and the two ion source gases were set at 45 au.
Processing of the data, including isotope deconvolution, was done similarly to that described by Singh et al. [25]. The background of each spectrum was subtracted, the data were smoothed, and peak areas were integrated using a custom script and Applied Biosystems Analyst software. The lipids in each class were quantified in comparison to the two internal standards of that class. The first and typically every 11th set of mass spectra were acquired on the internal standard mixture only. Peaks corresponding to the target lipids in these spectra were identified and molar amounts were calculated in comparison to the internal standards of the same lipid class. To correct for chemical or instrumental noise in the samples, the molar amount of each lipid metabolite detected in the "internal standards only" spectra was subtracted from the molar amount of each metabolite calculated in each set of sample spectra. The data from each "internal standards only" set of spectra were used to correct the data from the following 10 samples. The analyzed data (in nmol) were normalized to the sample's dry lipid weight to produce data in the units nmol/mg dry lipid weight.
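A minimal numerical sketch of this per-class quantification, blank correction, and dry-weight normalisation is given below; the peak areas, internal-standard amount, and sample weight are invented illustration values, not data from this study.

# Sketch of the quantification just described: target peak areas are converted to nmol
# against the class internal standard, an "internal standards only" blank is subtracted,
# and values are normalised to the dry lipid weight. All numbers are illustrative.
import numpy as np

is_amount_nmol = 0.6            # nmol of the class internal standard spiked in (assumed)
is_area_sample = 1.2e6          # mass-spectral peak area of the internal standard
target_areas = np.array([3.4e5, 8.1e5, 2.2e5])     # areas of three species of one class

# blank run acquired on the internal-standard mixture only
is_area_blank = 1.1e6
blank_target_areas = np.array([1.0e3, 2.5e3, 0.0])

nmol_sample = target_areas / is_area_sample * is_amount_nmol
nmol_blank = blank_target_areas / is_area_blank * is_amount_nmol
nmol_corrected = np.clip(nmol_sample - nmol_blank, 0.0, None)   # noise correction

dry_lipid_weight_mg = 0.85                                       # assumed sample weight
nmol_per_mg = nmol_corrected / dry_lipid_weight_mg
print("nmol per mg dry lipid weight:", np.round(nmol_per_mg, 4))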
Finally, the data were expressed as mole percent of total lipids analyzed. Sphingolipid Quantification. The ESI-MS/MS procedure for SL quantification was similar to that for PGL quantification. The declustering potential was +180 V for CER and 2180 V for IPC, MIPC, and M(IP) 2 C. The exit potential was +10 V for CER and 215 V for IPC, MIPC, and M(IP) 2 C. The mass analyzers were adjusted to a resolution of 0.7 u full width at half height. For each spectrum, 125 to 250 continuum scans were averaged in multiple channel analyzer (MCA) mode. The source temperature (heated nebulizer) was 100uC, the interface heater was ''on'', and +5.5 kV or 24.5 kV were applied to the electrospray capillary. The curtain gas was set at 20 au, and the two ion source gases were set at 45 au. Of note, by collision induced dissociation, IPC molecular ions fragment to produce mass spectral fragments unique to IPC's. Notably, head group fragment of m/z 259 might be produced from the secondary fragmentation of arachidonates and other ions. However, by using a combination of characteristic fragmentation pattern of IPC's and stringent data processing, we could focus specifically on IPC molecular ions. The rest of the processing was similar to that for phosphoglycerides; however, all SL signals were normalized to the signal for 0.23 nmol 16:0-18:0-PI that was added in as an internal standard. SL amounts were determined by normalizing the mass spectral signal so that a signal of 1.0 represents a signal equal to the signal of 1 nmol 16:0-18:0-PI (the internal standard). These data were then divided by the sample dry weight to obtain signal/dry weight, by the total SL signal to obtain % of total SL signal. It is possible that there is variation in ionization efficiency among various SLs and between the internal standard and SL species. Thus, normalized SL species amounts may not reflect their molar amounts. However, the employed procedure allows for determination of relative abundance of SLs and comparison of amounts of particular SLs among samples. The ESI-MS/MS procedure for SE, DAG and TAG quantification was similar to that of PGL quantification. Precise amounts of internal standards were added in the following quantities (with small variations of amounts in different batches of internal standards): 4.6 nmol di15:0-DAG, 3.1 nmol tri17:0-TAG. Lipid species for SE, DAG and TAG were detected with the following scans: 16 4 ] + in positive ion mode with NL 259.2. The collision gas pressure was set at 2 au. The collision energy, with nitrogen in the collision cell, was +25 V for SE, DAG and TAG. The declustering potential was +100 V for SE, DAG and TAG. The exit potential was +12 V for SE, DAG and TAG. The mass analyzers were adjusted to a resolution of 0.7 u full width at half height. For each spectrum, 50 continuum scans were averaged in multiple channel analyzer (MCA) mode. The source temperature (heated nebulizer) was 100uC, the interface heater was ''on'', and +5.5 kV were applied to the electrospray capillary. The curtain gas was set at 20 au, and the two ion source gases were set at 45 au. The rest of the processing was similar to that for PGLs; however, all SE signals were normalized to the signal for 4.6 nmol di15:0-DAG that was added in as an internal standard. For these samples, all SE, DAG and TAG are represented as a list of ''fatty acid containing'' species. So, there are the 16:0 containing species, the 18:1 containing species and so on. 
There is no total DAG or TAG, as some of these species overlap with each other. For example, 16:0-18:1 DAG will appear in both the 16:0 containing and the 18:1 containing list. The data for DAG and TAG has been described as ''Relative mass spectral signal'', where the unit is the signal for 1 nmol of the internal standard. The data are normalized to dry lipid weight. Thus, these amounts are not true molar amounts, but they can be used to compare the amounts of particular molecular species among samples. SE amounts were determined by normalizing the mass spectral signal so that a signal of 1.0 represents a signal equal to the signal of 1 nmol di15:0-DAG (the internal standard). The SE data were then divided by the dry lipid weight to obtain signal/dry lipid weight, by the total SE signal to obtain % of total SE signal. It is possible that there is variation in ionization efficiency among various SEs and between the internal standard and SE species. Thus, normalized SE species amounts may not reflect their molar amounts. However, the employed procedure allows for determination of relative abundance of SEs and comparison of amounts of particular SEs among samples. Of note, the total amount of each SE class has been calculated by adding the normalized mass spectral signal of all respective FA containing SE (for example, total amount of ergosterol esters was calculated by adding the mass spectral signal of 16:1, 16:0, 18:3, 18:2, 18:1 and 18:0 -containing ergosterol ester). Statistical Analysis The mean of three independent biological replicates 6 standard deviation (SD) from the individual samples was used to compare the lipids of Candida species. Multivariate data analysis (pattern recognition) was employed. Principal component analysis (PCA) was performed using the software SYSTAT, version 10 (Systat Software Inc., Richmond, CA, USA) using three replicates of each of the AS and AR Candida isolate to highlight the statistically significant lipid differences. To assess the statistical significance of the difference in lipid datasets and PC scores, the Student t-test was performed using the significance level of 0.05. When all the values for a particular lipid species were zero in all samples, the data for that lipid species were removed from the analysis. The data in percentage were log-transformed and normalized to the same scale for PCA. Results To analyze the global lipidomic changes associated with azole resistance, we used a large number of genetically matched pairs of AR isolates. All the AR isolates were similar, as they showed high MIC 80 for fluconazole and other drugs; however, based on the mechanism of acquiring azole resistance, they could be segregated into two distinct groups (Supplementary Table S1). For example, while in the first group which included Gu4/Gu5, DSY294/ DSY296 DSY347/DSY289 and DSY544/DSY775 pairs, azole resistance was predominantly attributed to an over-expression of an ABC transporter encoding gene CaCDR1 [27,28], and for the second set of pairs G2/G5, F2/F5, DSY290/DSY292 and DSY741/DSY742, the resistance to azoles was mainly due to an over-expression of MFS encoding gene, CaMDR1 [29,30]. The availability of highly related AS and AR isolates enabled us to make a direct comparison of their lipidomes. For lipidome analysis, AS and AR Candida cells were harvested in the exponential growth phase and their total lipids were extracted as described in Materials and Methods [23]. 
The extracted lipids were subjected to ESI-MS/MS by direct infusion of the lipid extracts. The total lipids (PGLs, SLs and SEs) were quantified and lipid content (as total normalized mass spectral signal of PGL + SL + SE) was found to range between to 453 to 1116 nmol per mg dry lipid weight of AS and AR isolates (Supplementary Sheet S1). A comparison of lipidome of AS and AR isolates did not give a typical pattern of variations, however, some of the differences appeared to be more prominent among the AR isolates. To highlight the changes between AS and AR isolates, we selected and discussed a few as an example. Although, we determined lipids to their absolute amounts (as total normalized mass spectral signal of PGL + SL + SE), we have used the mole percentages (as % of total normalized mass spectral signal of PGL + SL + SE) for data analysis, which have lower standard deviation. By employing MS analysis, nine major PGLs classes which included PC, PE, PI, PS, PG, PA LysoPC, LysoPE and LysoPG in AS and AR isolates were targeted. Additionally, lipid molecular species were identified by mass of the head group plus the mass of the intact lipid, allowing determination of the number of C atoms and double bonds in the acyl chain(s) of PGLs. The PGLs were quantified in relation to internal standards of the same lipid class. This procedure is known to provide accurate quantification because various molecular species of the same lipid class (here, the internal standard and other species) produce very similar amounts of mass spectral signal after electrospray ionization [31]. Our MS analysis also included four major groups of SLs, CER, IPC, MIPC, M(IP) 2 C and their relative amounts were determined. As discussed in Methods, SEs, DAG and TAG, were analyzed on the basis of their FA compositions (six major FAs were analyzed, including C16:1, C16:0, C18:3, C18:2, C18:1 and C18:0, as these are the most abundant FAs present in Candida) and their relative amounts were determined on the basis of respective internal standards. The PGLs and SLs compositional profile is different among various AS/AR matched pairs Our method could detect PC, PE, PI, PS, PG and PA as major PGLs among AS and AR isolates. The abundance of PGLs was in order PC, PE, PI, PS, PA, PG, which did not change between the AS and AR pairs (Figure 1). PC, PE and PI accounted for almost 80% PGLs in all the isolates. As shown in Figure 1, fluctuations in PGL levels are quite evident among all AS/AR pairs except in the pair DSY294/DSY296, where no significant change was observed. The contents of PS decreased by as much as 2 fold among all AR isolates, except in DSY289 where it increased by 1.4 fold. However, in DSY296, there was no change in PS content. It is noteworthy that PS is significantly found to be lower in all those AR isolates where the MFS transporter CaMdr1p is overex-pressed. Other lipids, namely PC, PE, PI, PG and PA showed minor but significant changes among various AS/AR pairs, but these changes were not consistent between pairs. Generally, among the three major lyso-PGLs analyzed, namely LysoPC, LysoPE and LysoPG, no significant differences were observed among the majority of AS/AR isolates. However, LysoPG content was up to 2.5 times lower in DSY296, DSY289 and in G5 isolates, while LysoPC and LysoPE content increased up to 2 folds in DSY292 and DSY742 isolates. Four major SL groups including CER, IPC, MIPC and M(IP) 2 C, were detected. 
While the M(IP) 2 C abundance, which is the most complex phospho-SL, was found to be ,2.2% among all analyzed isolates, MIPC, was the most variable SL. It was depleted over 2 folds in DSY289, DSY775, DSY292 and DSY742 AR isolates while it was raised up to 2 folds in DSY296, G5 and F5 isolates ( Figure 2). SE homeostasis is altered among various AS/AR matched pairs Sterols were identified and quantified as SE. Lanosterol, zymosterol, episterol, fecosterol, ergostatetraenol and ergosterol were the major components in all AS/AR pairs, which ranged between 1-80% of the total SEs ( Figure 3A). The intermediate metabolites of sterol biosynthetic pathway such as epi-and fecoand zymo-esters were ,1.2 to 4 fold depleted in Gu5, DSY289, G5, F5, DSY292, DSY742 AR isolates. Lanosterol esters which are also important sterol biosynthesis precursor were depleted 2-15 folds in F5 and G5 isolates. The ergostatetraenol and ergosterol esters, which are the end products of the sterol biosynthesis, were significantly up by 1.3-5 folds in G5, F5, DSY292 and DSY742 AR isolates (Figure 3 A). While examining the FA composition of these SEs, we found that 18:3-SE specifically depleted in Gu5 and DSY775, and elevated in G5, DSY292 and DSY742. Notably, none of these SEs changed significantly in DSY294/DSY296 pair ( Figure 3B). DAGs and TAGs show variations among various AS/AR matched pairs DAGs were found to be significantly depleted (by ,1.3-5 fold) among DSY 296, DSY289, DSY775 and G5 AR isolates, while they increased (by ,1.4-10 fold) among F5, DSY292 and Gu5 AR isolates (Supplementary Figure S1). In DSY742, only 18:3-DAG content was increased by 1.6 folds. Similarly, TAGs were found to be significantly depleted (by 1.4 fold or more) among Gu5, DSY289 and DSY775 AR isolates, and increased (by 1.5 fold or more) among G5, F5 and DSY742 AR isolates (Supplementary Figure S2). However, there was no change in TAG contents of DSY296 and DSY742 AR isolates (Supplementary Figure S2). Molecular lipid species show ripple effects on lipidome of AR isolates Each lipid class is sub-categorized into its molecular species which differs from others in fatty acid composition and their positional distribution [32]. By mass spectrometric analysis of the extracts of AS and AR pairs, we detected molecular lipid species belonging to five major lipid groups (PGL, SL, SE, DAG and TAG; see Supplementary Sheet S2). We could determine the abundance of over 260 species belonging to PGL, SL and SEs ( Figure 3A, 4, and 5). However, for DAGs and TAGs the data is rather relative, but nonetheless can be used for comparisons between AS/AR isolates (as described in methods). The total number of species detected for each of the AS/AR pairs was mostly the same (Supplementary Sheet S1). For example, over 200 PGL species were quantitatively detectable among all AR isolates. However, considerable difference existed in terms of relative abundance of different lipid species between AS and AR isolates. For example, out of 242 lipid species of 18 major lipid classes (PGL + SL + SE), among CaCDR1 attributed pairs, only the pair DSY544/DSY775 showed significant differences in ,128 lipid species, while other pairs did not show much variation in lipid species (,18-70 species) (Supplementary Sheet S3). In contrast, those pairs where azole resistance was due to CaMDR1 PCA is a mathematical algorithm that reduces the dimensionality of the data while retaining most of the variation in the data set [33]. 
The data are represented by principal components, with the first principal component accounting for the maximum possible variation in the data, and each succeeding component accounting for a portion of the remaining possible variation. Plots allow visual assessment of the similarities and differences among samples and help to determine whether and how samples can be grouped [24,25,34]. We performed PCA using the molecular species percentage composition of PGL + SL + SE (data in Supplementary Sheet S1) to identify and highlight the AR-specific, statistically significant lipid differences. To identify the lipid species changes, PCA was first done on combined data sets of all isolates. The 3-D PCA plot on all combined AS and AR datasets confirmed that all AS/AR pairs have unique lipid profiles as they did not segregate as different Several common molecular lipid species are more susceptible to variation in response to azole To focus on the molecular lipid species that could commonly vary between each AS and AR pair, PCA analysis was done on the individual pairs (Supplementary Figure S5A). The loading scores associated with the principal component 1, 2 and 3 are summarized in the Supplementary Sheet S4 (Supplementary Figure S5B). Upon careful examination of the loading values of the principal component 1, we could identify 17 different molecular PL species, whose composition consistently varied in most of the AR isolates (Supplementary Table S2). These included species ranging from mono-unsaturated lipid species (for example, PC30:1, PC32:1, PI30:1, PI34:1) to poly-unsaturated lipid species (PC36:4, PC36:5, PE36:4, PE36:5, etc.). Consistent variation among some MIPC (MIPC 42:0;3) and SE species was also evident (Supplementary Table S2). Supplementary Figure S6 depicts four typical variable molecular lipid species of PGLs between AS and AR isolates. Discussion Although the importance of lipids in the physiology of Candida including in MDR has been realized [9][10][11][12][13][14][15], this represents the first study to dissect and evaluate lipid metabolome of AS and AR isolates. By using a high throughput mass spectrometry based lipidome analyses of pathogenic C. albicans, we have performed comparative lipidomics between AS and AR matched pair isolates and could determine the abundance of 16 major lipid classes, which provided a significant coverage of the lipid metabolic network in Candida (Figure 7). We observed that the relative abundance of major classes of PLs remained similar between AS and AR pairs. However, the lipid profile of AR isolate of each pair was typical (Figure 1-3 Figure S1 and S2). What emerges from this comparative analysis is that there is no typical lipid composition which could be directly linked to azole resistance. Considering that membrane lipids are the target of environmental stresses and that each pair of AS and AR Candida has been isolated from different host, typical metabolic state of each pair is not surprising. This is also supported by the fact that the lipidome of AS isolates themselves differ from each other. Thus each AS Candida strain appears to have a mechanism to sense and respond to environment and change its lipid composition. The differences in lipid composition due to azole stress necessitate compensatory changes in other lipids. For example, PS is depleted among most AR isolates; however, the contents of PE which is formed after decarboxylation of PS are largely maintained (Figure 1). 
This supports the fact that defects in de novo PE synthesis compromises virulence of C. albicans [35]. PG is other PGLs which appeared to be more responsive to azole resistance as it was variable in all the isolates. It is noteworthy that PGL has earlier been implicated in optimal mitochondrial functions and maintenance of yeast susceptibility to azole antifungals [36]. While relative abundance of several major classes of lipids did not significantly vary between AS and AR isolates, a closer look revealed interesting insights at the molecular species level ( Figure 4A and 4B). Of note, the molecular species of PGLs were only slightly different in terms of numbers identified in each pair, but varied considerably in mole percentages distribution ( Figure 4A and 4B). The molecular lipid species differ from each other on the basis of FA-chain length, number of double bonds and their positional distribution, which play an important role in different cellular functions, by providing an excellent platform to the cell to remodel its metabolite level and maintain lipid homeostasis [24,32]. We analyzed commonality between molecular lipid species which could help AR cells to maintain its lipid homeostasis. The data indicates that azole resistant Candida undergoes massive lipidome remodeling where certain common lipid species are more significantly changed between most of the AS and AR isolates ( Figure 4 and Figure 5, Supplementary Table S2). These generally included mono-to poly-unsaturated PGL species containing 36 to 40 carbons and a few of SL and SE species (Figure 4 and Figure 5). It is thus apparent that azole resistant Candida shows a great deal of adaptability at the level of molecular lipid species to maintain its optimal membrane lipid environment. PCA analysis between individual AS and AR pair confirmed that the overall changes in the lipid profile, though typical between matched isolates, are all related with some commonalities at the molecular lipid species level (Supplementary Table S2). These common variable molecular species (for example, PC30:1, PC32:1, PI30:1, PI34:1, PC36:4, PC36:5, PE36:4, PE36:5, MIPC 42:0;3, SE) among AS and AR isolates could be considered as more responsive to the drug stress. , Supplementary Our pool of AR isolates included four pairs of each with either over-expressed CaCDR1 or CaMDR1 as major factor of azole resistance (Supplementary Table S1) [26][27][28][29]. The two proteins encoded by these genes, though functionally identical in term of drug extrusion, differ mechanistically in achieving it [5]. Another difference is that the two efflux pump proteins e.g. CaCdr1p and CaMdr1p show discrete preference to their recruitment to plasma membrane [9]. For example, it has been shown that ABC transporter CaCdr1p is predominantly localized within sphingolipid and ergosterol rich micro-domains as compared to MFS transporter CaMdr1p which shows no such preference [9]. Based on this back ground, we examined our lipidome data set to assess if overexpression of the two proteins of different lipid preferences could be correlated with the changes in lipid homeostatic environment. Our data confirmed that indeed the overexpression of either of the two transport proteins in AR isolates elicited different lipidomic response. For example, among PGLs, PS amounts were consistently declined in all CaMdr1p overexpressing AR isolates (Figure 1). In contrast, ergostatetraenol and ergosterol esters were significantly accumulated among CaMdr1p overexpressing AR isolates ( Figure 3A). 
None of the above changes were consistent in CaCdr1p over-expressing strains which typically showed depletion of signaling and storage neutral lipids like DAGs and TAGs levels (Supplementary Figure S1 and S2). It is well known that protein kinase C (PKC) is activated by the DAGs in the presence of PS as a cofactor [37][38][39][40][41][42]. Therefore, it is possible that these neutral lipids, along with PLs, might contribute to MDR development through very tightly regulated signaling cascades. Interestingly, PKC signaling pathway has been reported to contribute to antifungal tolerance in C. albicans and S. cereviseae [43,44]. Some efflux protein dependent interesting patterns among molecular lipid species could also be observed. For example, while the contents of 18:3-SE were specifically depleted in CaCdr1p overexpressing pairs, they were higher in CaMdr1p overexpressing pairs ( Figure 3A). Furthermore, molecular lipid species ratios for PC and PE, 36:6/36:4 and 36:6/34:4 (18:3-FA indicator), showed a decreasing trend among some CaCdr1p overexpressing AR isolates, while opposite was true among the CaMdr1p attributed AR isolates (Supplementary Figure S7). This comparison revealed that an overexpression of either of the two efflux proteins might necessitate typical changes in lipid profiles. It is also apparent that an overexpression of MFS transporter CaMdr1p is accompanied by more pronounced changes in lipid profiling as compared to CaCdr1p cells. This would imply that in addition to responding to azole stress, the lipidome of AR isolates might be also susceptible to the over-expression of membrane efflux pump proteins. This is not unexpected since differences in membrane environment can have a significant effect on the insertion, folding and functioning of membrane proteins [45][46][47][48]. However, the fact that these lipid profile changes could be due common regulation of MDR pump encoding genes and lipid metabolism cannot be ignored [49,50]. For example, mammalian presynaptic serotonin transporter (SERT) is functionally overexpressed only in the presence of cholesterol [51], while lactose permease (LacY) in E. coli requires PE for proper folding [52,53]. Thus, lipidomic differences in response to overexpression of either ABC or MFS transporter in AR isolates will affect the physical state of the membrane, which in turn could influence drug transport and substrate translocation. This aspect merits consideration in the overall scenario of MDR. Conclusion Taken together, in this study using a combination of high throughput mass spectrometry and statistical validation methods, we provide the first evidence of metabolic reprogramming between AS/AR matched pair isolates. Most metabolic changes are evident at the molecular lipid species level between each AS/ AR isolates. Our study also highlights general commonality among molecular lipid species in most AR isolates and lipid perturbations that might be directly or indirectly associated with the overexpression of either CaCdr1p or CaMdr1p. Notwithstanding the fact that our comparative lipidomics of AS and AR do not provide the direct metabolic state of these isolates as they would exist in the host environment, it does provide a snap shot of the lipidomic status of AR Candida. 
The molecular characterization of the lipidome of AS and AR Candida isolates would serve as a starting resource to link clinical/functional genomics with pathway-specific signaling and gene/protein/metabolite expression and function in relation to multidrug resistance of pathogenic Candida. Our study also provides evidence that each AR isolate is rather unique in terms of its lipidome, which reflects the interplay of several genetic and host factors and could be an important consideration in designing therapeutic strategies.
Figure S1 The composition of DAG classes among various AS and AR isolates of C. albicans. Data in the heat map are represented as mass spectral signal per mg dry lipid wt., normalized to internal standards. Values are means ± SD (n = 3-5 for all Candida strains). Statistically significant fold changes have been depicted (P < 0.05). No change is depicted by 'n.c.' and statistically insignificant change (P > 0.05) is depicted by 'n.s.'. Green, yellow and red depict the highest, mid and lowest values, respectively. Data taken from Supplementary Sheet S2. (TIF)
Figure S2 The composition of TAG classes among various AS and AR isolates of C. albicans. Data in the heat map are represented as mass spectral signal per mg dry lipid wt., normalized to internal standards. Values are means ± SD (n = 3-5 for all Candida strains). Statistically significant fold changes have been depicted (P < 0.05). No change is depicted by 'n.c.' and statistically insignificant change (P > 0.05) is depicted by 'n.s.'. Green, yellow and red depict the highest, mid and lowest values, respectively. Data taken from Supplementary Sheet S2.
Sheet S1 This sheet has the absolute amounts (nmol per mg dry lipid wt.) and the mole percentages of the molecular lipid species.
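As a companion to the statistical treatment described in the Methods (mole-percent conversion, log transformation, scaling to a common range, and PCA), the short sketch below reproduces that sequence of operations on a random placeholder matrix; it is not the SYSTAT analysis used in the study, only an illustration of the same steps.

# Sketch of the statistical pipeline: normalise species amounts to mole percent,
# log-transform, scale, and inspect the isolates with PCA. Data are placeholders.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
amounts = rng.uniform(1, 50, size=(6, 5))        # 6 isolates x 5 lipid species (nmol/mg)

mole_pct = amounts / amounts.sum(axis=1, keepdims=True) * 100
log_pct = np.log10(mole_pct)                     # log-transform, as in the paper
scaled = StandardScaler().fit_transform(log_pct) # normalise features to a common scale

pca = PCA(n_components=3)
scores = pca.fit_transform(scaled)
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
print("PC scores (first two components):\n", np.round(scores[:, :2], 2))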
An improved Ant Colony System for the Sequential Ordering Problem It is not rare that the performance of one metaheuristic algorithm can be improved by incorporating ideas taken from another. In this article we present how Simulated Annealing (SA) can be used to improve the efficiency of the Ant Colony System (ACS) and Enhanced ACS when solving the Sequential Ordering Problem (SOP). Moreover, we show how the very same ideas can be applied to improve the convergence of a dedicated local search, i.e. the SOP-3-exchange algorithm. A statistical analysis of the proposed algorithms both in terms of finding suitable parameter values and the quality of the generated solutions is presented based on a series of computational experiments conducted on SOP instances from the well-known TSPLIB and SOPLIB2006 repositories. The proposed ACS-SA and EACS-SA algorithms often generate solutions of better quality than the ACS and EACS, respectively. Moreover, the EACS-SA algorithm combined with the proposed SOP-3-exchange-SA local search was able to find 10 new best solutions for the SOP instances from the SOPLIB2006 repository, thus improving the state-of-the-art results as known from the literature. Overall, the best known or improved solutions were found in 41 out of 48 cases. Introduction In recent years, a large number of metaheuristic optimization algorithms (MOAs) has been proposed, and some of these were created based on inspiration drawn from natural phenomena [24]. Examples of these are the Ant Colony System algorithm that was inspired by the foraging behavior of certain species of ants and Simulated Annealing (SA) with some ideas taken from metallurgy [37,11]. Metaheuristics are often applied to find solutions of an acceptable quality to difficult combinatorial optimization problems, particularly NP-complete ones. A good example is the Sequential Ordering Problem (SOP) which consists in finding a minimum weight Hamiltonian path on a directed graph with weights that is subject to precedence constraints among the nodes. Although less time-consuming than the exact approaches, MOAs differ in their efficiency, which can sometimes be improved by combining ideas taken from other MOAs. Contributions The main aim of the paper is to show that Simulated Annealing could be used to improve the convergence speed of ACS and Enhanced ACS (EACS) algorithms. The proposed solution is easy to implement and does not increase the algorithms' asymptotic complexity. Moreover, we developed a modified version of the SOP-3-exchange local search (LS) heuristic as proposed by Gambardella et al. [21] for the SOP. The modification, again, includes ideas taken from the SA to allow the algorithm to escape local optima. A thorough experimental evaluation on a number of SOP instances from well-known datasets confirms the efficiency of the proposed algorithms. In fact, in several cases we obtained results of a better quality than those from state-of-the-art methods in the literature [23,26]. The paper is organized as follows: Section 2 focuses on recent ideas of improving the efficiency of the ACS (and related algorithms), some of which include the SA. We also briefly discuss recent work on solving the SOP. In Sec. 3 we describe our approach to improve the convergence speed of the ACS by using the SA. Section 4 presents a similar approach, but to improve the local minima escape ability of the SOP-3-exchange local search algorithm which is paired with the ACS and the EACS when solving the SOP. 
Section 5 presents the results of computational experiments conducted on two sets of SOP instances. The last section contains the conclusions and some ideas for further work. Related work Multiple modifications to the ACO family of algorithms have been proposed in the literature. Most of them refer to pheromone update rules and parameter tuning. Hassani et al. [15] proposed a modified global pheromone update rule for the ACS in which not only the global best ant but also all ants with inferior solutions may update the pheromone, with a probability calculated according to the acceptance rule of the SA. The initial temperature was set arbitrarily to 100 and an exponential cooling schedule was applied. The limited computational experiments showed that in most cases the algorithm achieved better results than the Ant System. Bouhafs et al. [7] proposed a two-phase approach based on the SA and ACS to solve the Capacitated Location-Routing Problem. The SA was used to find facility locations while the ACS was used to solve the corresponding location routing problem. In most cases the algorithm was able to improve the best-known solutions to the problem. In most of the ACO and SA combinations the latter plays the role of a local search used to improve the quality of the solutions generated by the ants. Behnamian et al. [5] proposed a hybrid of the ACO, SA and Variable Neighborhood Search algorithms for solving parallel-machine scheduling problems. The SA was used to guide the dedicated LS. A successful combination of the ACS and the SA was proposed by Ayob and Jaradat [4] for solving course timetabling problems. The SA was used along with Tabu Search to improve solutions generated by the ACS. The results of the proposed algorithm were of better quality than those of the ACS alone or of the MAX-MIN Ant System. The SA was again used as the LS for the ACS by Wassila and Boukra [43]. The approach was slightly faster than, but comparable in terms of solution quality to, other nature-inspired metaheuristics for the intrusion detection problem. Similarly, the SA played the role of an LS improving the results generated by the ants in the ACS solving the Vehicle Routing Problem with Time Windows [9]. Chen and Chien [10] proposed a complex hybrid of four metaheuristics, namely the Genetic Algorithm, SA, ACS, and Particle Swarm Optimization, for solving the TSP. The SA played the role of a mutation operator in the GA part of the hybrid. In a paper by Xi et al. [44] the solutions generated by the ant system were later improved by the SA when solving the 3D/2D fixed-outline floor planning problem. McKendall and Shang [32] used the SA as the LS method in one of their Hybrid Ant System algorithms for solving the dynamic facility layout problem. The resulting algorithm was able to improve some of the best known results for the problem. Similarly, the solutions generated by the ACO were a starting point for the SA solving the problem of managing energy resources considering intensive use of electric vehicles [42]. The combined approach produced solutions of better quality than the SA or ACO alone. Sequential Ordering Problem The Sequential Ordering Problem is a generalization of the Asymmetric TSP (ATSP). The goal is to find the shortest Hamiltonian path from a starting city (source node) to a destination city (final node) by going through each of the remaining cities (nodes) exactly once. Moreover, some cities have to be visited before others.
Due to the precedence constraints, the problem is sometimes referred to as the Precedence Constrained Asymmetric Traveling Salesman problem (PCATS). The SOP can be viewed as a scheduling problem in which many jobs have to be scheduled on a single machine. The processing times for the jobs are given along with the setup times between pairs of jobs. Also, some jobs have to be completed before others. The goal is to minimize the total makespan [17]. Other real-world problems that can be modeled as an instance of the SOP include the Single Vehicle Pickup and Delivery Problem. Exact approaches have also been applied to the SOP; the reported computations were run on a single core of an Intel Xeon E5540 or E5649, both with a 2.53 GHz clock. For most of the instances the optima were found in under an hour, but for 12 instances no optimum was found with the time limit set to 24 hours. The exact methods are time consuming, particularly if the size of the problem reaches a few hundred nodes, hence much of the research has been focused on heuristic algorithms for the SOP. Guerriero and Mancini proposed a parallel roll-out heuristic in which several threads simultaneously visit different portions of the solution space and periodically exchange information about the solutions found [27]. The algorithm was able to match the best-known solutions for most of the SOP instances from the TSPLIB repository, although its main drawback was a high computational cost. Gambardella et al. proposed a combination of the Ant Colony System and a novel LS procedure called the SOP-3-exchange [21]. The resulting algorithm, denoted as HAS-SOP, made it possible to improve many of the best-known results for SOP instances from the TSPLIB repository. Montemanni et al. added to the HAS-SOP a Heuristic Manipulation Technique which creates and adds artificial precedence constraints to the original problem [35]. The method led to better results, particularly for large SOP instances. A discrete Particle Swarm Optimization hybridized with the SOP-3-exchange heuristic was proposed by Anghinolfi et al. [1]. The algorithm was able to improve many of the best results presented in [21,35]. Later, Gambardella et al., based on an analysis of the drawbacks of the HAS-SOP algorithm, proposed an improved ACS version called the Enhanced Ant Colony System (EACS) [22]. Two main changes were proposed. First, the construction phase of the EACS used information about the best solution found so far. Second, the LS was run only if the current solution was within 20% of the best found solution. The EACS was able to further improve some of the best results obtained by Anghinolfi et al. [1] and to date remains one of the most efficient methods for solving the SOP. Improving ACS Convergence with Simulated Annealing Ant colony optimization (ACO) is probably the best-known algorithm inspired by the foraging behavior of ants in nature. It is a population-based metaheuristic that is often used to solve difficult combinatorial and continuous optimization problems. In general, it does not guarantee that the optimal solution will be found, but the solutions that are found are often of good enough quality for practical use [14]. In the ACO, a number of artificial agents (ants) iteratively construct complete solutions to an optimization problem. An ant starts with an empty solution and, in subsequent steps, extends it with components selected from the set of all available components. Each component has an associated pheromone trail and a heuristic value.
The higher the product of the pheromone concentration (value) and the heuristic value, the higher the probability that the component will be selected by the ant. In nature, ants communicate indirectly with one another by depositing small amounts of chemical substances called pheromones, e.g., an ant that has found a food source marks the path to the nest with small amounts of pheromone. The pheromone trail attracts other ants and leads them to the food source. The more ants that repeat the process, the higher the concentration of the pheromone trail becomes, hence the process becomes autocatalytic. The pheromone evaporates with time, so the pheromone concentration does not increase indefinitely. The ACO algorithms use artificial pheromone trails, with the pheromone concentration represented as real numbers. The set of all pheromone trails is usually called a pheromone memory and plays a crucial role in the performance of the ACO family of algorithms [12,14]. For the TSP (and related problems) the problem is usually modeled by using a graph G(V, E). An artificial ant constructs its solution starting from a randomly selected node. In subsequent steps it moves from the current node to one of the unvisited neighbor nodes by using the corresponding edge. The pheromone trails τ_uv are deposited on the edges (u, v) ∈ E of graph G and, together with a priori knowledge about the problem, reflected in the heuristic values η_uv associated with each edge, they guide the construction process. The Ant Colony System is an improved version of the Ant System by Dorigo et al. [13]. In the ACS the ant k located at node i selects the next node j according to a pseudo-random proportional rule [14]:

j = argmax_{l ∈ J_i^k} { τ_il · [η_il]^β } if q ≤ q_0, and j = J otherwise, (1)

where η_il is the heuristic value associated with edge (i, l) (typically based on the inverse of the edge cost), τ_il is the value of the pheromone trail on edge (i, l), J_i^k is the set of available (candidate) nodes of ant k located at node i, q is a random number drawn uniformly from [0, 1], and q_0 is a parameter, 0 ≤ q_0 ≤ 1. J is a node (city) selected according to the probability distribution defined by:

p_k(i, j) = ( τ_ij · [η_ij]^β ) / ( Σ_{l ∈ J_i^k} τ_il · [η_il]^β ), j ∈ J_i^k. (2)

The choice defined by Eq. 1 depends on the value of the parameter q_0. If the randomly drawn number q is lower than the parameter q_0, then the choice is greedy and the ant selects the node reached by the edge with the maximum product of the pheromone trail τ_ij and heuristic η_ij values. Otherwise q ≥ q_0 and the choice is random, with the probability distribution given by Eq. 2. The first case is often referred to as exploitation of the knowledge gathered by the ants (in the pheromone memory). Usually, a value of q_0 close to 1 (often 0.9 and above) leads to good quality results in a shorter period of time compared to the base ACO algorithm [14]. Some authors even use a higher value calculated as q_0 = 1 − s/|V|, where the parameter s is the number of nodes that should be selected randomly with the probability defined by Eq. 2 [21]. During the construction of the solutions the ants in the ACS update the values of the pheromone trails on the traversed edges. Each ant, after making a move from node u to node v, applies a local pheromone update rule that decreases the amount of pheromone on edge (u, v) according to:

τ_uv = (1 − ψ) · τ_uv + ψ · τ_0, (3)

where ψ is a parameter regulating evaporation of the pheromone over time and τ_0 is the initial pheromone level. The rationale behind formula (3) is that it lowers the probability of subsequent ants selecting the same nodes, hence it increases variety in the constructed solutions.
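The construction rule and the local pheromone update described above can be summarized in a short, self-contained sketch. The snippet below is only an illustration of Eqs. 1-3 in Python; the representation of the pheromone and heuristic values as dictionaries keyed by edges, as well as the function names, are assumptions made for the example.

import math
import random

def acs_select_next_node(i, candidates, tau, eta, beta, q0, rng=random):
    # Pseudo-random proportional rule (Eqs. 1-2).  tau[(i, l)] and eta[(i, l)]
    # hold the pheromone and heuristic values of edge (i, l); 'candidates' is
    # the set J_i^k of nodes still available to the ant located at node i.
    scores = {l: tau[(i, l)] * (eta[(i, l)] ** beta) for l in candidates}
    if rng.random() <= q0:
        # Exploitation: the node with the maximum tau * eta^beta product.
        return max(scores, key=scores.get)
    # Biased exploration: sample according to the distribution of Eq. 2.
    total = sum(scores.values())
    r = rng.random() * total
    acc = 0.0
    for node, score in scores.items():
        acc += score
        if r <= acc:
            return node
    return node  # fallback for floating-point rounding

def local_pheromone_update(tau, u, v, psi, tau0):
    # Eq. 3: slightly evaporate the pheromone on a just-traversed edge,
    # which diversifies the solutions built by subsequent ants.
    tau[(u, v)] = (1.0 - psi) * tau[(u, v)] + psi * tau0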
The global pheromone update, performed after the ants have completed the construction of their solutions, is more important. The update rule increases the pheromone levels on the trails corresponding to the best solution found so far, S_best, whose cost is denoted by L_best. For each (u, v) ∈ S_best, the pheromone changes according to the formula:

τ_uv = (1 − ρ) · τ_uv + ρ · ∆τ_uv, (4)

where ∆τ_uv = 1/L_best and ρ ∈ (0, 1) is a parameter regulating the strength of the pheromone increase. The global pheromone update ensures that edges belonging to the current best solution have higher probabilities of being selected in the algorithm's subsequent iterations. The global best solution is used during the global pheromone update because it leads to slightly better solutions than the iteration best solution [14]. In order to further shorten the computation time of Eq. 1, a so-called candidate set is used, which consists of the nearest neighbors of the current node. The size of the candidate set is usually in the range of 10 to 25 [12,14]. For comparison, the size n of the problem is often two or three orders of magnitude larger, hence the use of candidate sets further limits the exploration of the solution search space. The candidate sets are a greedy heuristic based on the observation that good quality solutions are comprised mainly of short edges. If all of the candidate set elements are already a part of the constructed solution, the ant selects one of the remaining (unvisited) nodes. The candidate set is usually computed at the beginning and does not change. Randall and Montgomery [38] investigated the idea of dynamic candidate set updates for the TSP and the Quadratic Assignment Problem (QAP). The dynamic versions resulted in solutions of better quality but also significantly increased the computation time of the whole algorithm. Enhanced ACS The Enhanced ACS algorithm proposed by Gambardella et al. is an efficient metaheuristic for the SOP [22]. It differs from the ACS in two ways. The first is a modified solution construction phase which is much more focused on the best solution found so far. Instead of a direct application of Eq. 1, an ant selects the node which follows the current node in the best solution found so far (if the random number q is lower than the parameter q_0). Only if that node is already a part of the constructed solution does the ant consider other nodes, i.e. it selects the edge with the maximum product of the pheromone and heuristic values. If q ≥ q_0, the selection process from the ACS is used. The parameter q_0 usually has a value of 0.9 or higher, hence this modification significantly speeds up the construction process, although it limits the exploration capability of the EACS, and without a strong LS the EACS achieves results of lower quality than the ACS [23]. The second modification is a strong integration of the solution construction phase with the LS. The LS is run only if the cost of the current solution is within 20% of the best solution found so far. Also, the LS is initialized so that only the elements of the current solution which are out of order with respect to the best solution are placed on the so-called don't push stack. The elements of the stack are the starting points for the LS. This increases the emphasis on areas of the solution search space that are potentially unexplored.
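The EACS construction step described above can be sketched as follows. This is only an illustrative fragment: the dictionary best_next (mapping a node to its successor in the best solution found so far) and the helper acs_select_next_node from the previous sketch are assumptions of the example.

import random

def eacs_select_next_node(i, candidates, best_next, tau, eta, beta, q0, rng=random):
    # With probability q0, follow the best solution found so far: take the
    # node that comes after i in that solution, if it is still available.
    if rng.random() <= q0:
        nxt = best_next.get(i)
        if nxt is not None and nxt in candidates:
            return nxt
        # Otherwise fall back to the greedy max tau * eta^beta choice.
        return max(candidates, key=lambda l: tau[(i, l)] * (eta[(i, l)] ** beta))
    # With probability 1 - q0, use the ACS selection rule (previous sketch).
    return acs_select_next_node(i, candidates, tau, eta, beta, q0, rng=rng)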
A slightly modified version of the EACS was proposed by Ezzat [19]. The main difference concerns the choice of the next node in the solution construction process. At first the algorithm tries to select the node v which follows the current node u in the best solution found so far. If v is already a part of the constructed solution, it selects the next node with the probability distribution defined by Eq. 2. This change favors exploration and makes the algorithm less exploitative than the EACS, but still more exploitative than the ACS. Later, Ezzat et al. adapted the EigenAnt algorithm to solve the SOP [20]. The computational experiments showed that the proposed algorithm had a performance comparable to the EACS. Simulated Annealing Simulated Annealing is one of the most well-known general metaheuristic optimization methods. It was inspired by the Monte Carlo method of sampling the states of a (physical) thermodynamic system. In the SA, a solution to the optimized problem is equivalent to a state of the thermodynamic system, and its quality corresponds to the system's current energy [39]. The SA works as follows: starting from an initial solution X_0, a sequence of solutions (X_i), X_i ∈ S, is generated, where S is the set of all feasible solutions. Given a current solution X_i, a candidate solution Y_i is generated and its cost C(Y_i) is calculated. The next solution X_{i+1} is selected according to:

X_{i+1} = Y_i with probability p_i, and X_{i+1} = X_i with probability 1 − p_i, (5)

where the probability p_i is defined as p_i = 1 if C(Y_i) ≤ C(X_i), and p_i = exp(−(C(Y_i) − C(X_i)) / T_i) otherwise, with T_i denoting the current temperature. The physical analogy on which the SA is based requires that the system be kept close to a thermal equilibrium as the temperature is lowered. The most often used cooling schedule is the exponential schedule of the form T_{i+1} = λ · T_i, where λ is a parameter. In fact, the exponential cooling schedule usually lowers the temperature too fast for the system to reach a near-equilibrium state and does not guarantee convergence to the global optimum. Nevertheless, it is useful in practice because it is easy to implement and often leads to good quality solutions if the computation time is limited [8]. More advanced cooling schedules have been proposed; two well-known ones are the adaptive cooling schedule by Huang et al. [39] and the efficient cooling schedule by Lam [33]. Combining ACS with Simulated Annealing The ACS generally offers a better convergence speed than the Ant System or the ACO [12]. This stems, among other things, from the more exploitative solution construction process and the global pheromone update rule that places emphasis on the best solution found so far. This usually speeds up the process of finding good quality solutions but also makes escaping local minima very difficult. Simulated Annealing, on the other hand, offers a simple way to escape local minima. We propose a way to combine the ACS and the SA that enhances the ACS search process while maintaining its exploitation-oriented nature. The proposed algorithm, the ACS with the SA (ACS-SA in short), can be summarized as follows. The ACS search process is guided (in part) by the pheromone trail values. At the end of each iteration the global pheromone update rule increases the values of the pheromone trails corresponding to the components (edges) of the current best solution (global best). In the proposed ACS-SA algorithm the global update rule instead uses an active solution, which may not necessarily be the best solution found so far. At the end of every iteration each of the solutions generated by the ants is compared with the active solution. If the new solution is of better quality, it replaces the current active solution. Otherwise, the new solution may still replace the active solution, but with a probability defined by the Metropolis criterion known from the SA.
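The acceptance mechanism of Eq. 5 together with the geometric cooling schedule can be written down in a few lines. The snippet below is a minimal, generic sketch in Python; 'propose' and 'cost' stand for problem-specific operations and are assumptions of the example, not functions defined in the paper.

import math
import random

def sa_step(current, current_cost, propose, cost, temperature, rng=random):
    # One SA iteration (Eq. 5): better candidates are always accepted,
    # worse ones with probability exp(-delta / T).
    candidate = propose(current)
    candidate_cost = cost(candidate)
    delta = candidate_cost - current_cost
    if delta <= 0 or rng.random() < math.exp(-delta / temperature):
        return candidate, candidate_cost
    return current, current_cost

# Geometric (exponential) cooling, T_{i+1} = lambda * T_i, e.g.:
# temperature *= 0.9999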
While the ACS is always focused on the neighborhood of the best solution found so far and can become trapped in a local optimum for a long time, the proposed ACS-SA has a greater chance of escaping local optima by shifting its focus to a solution with a higher cost. Figure 1 presents the pseudocode of the proposed ACS-SA algorithm. The major part of the algorithm does not differ from the ACS, i.e. the only differences are related to the temperature initialization (line 1), the cooling schedule (line 26) and the active solution selection process (line 19). The inclusion of the SA into the ACS results in a more exploratory search process, but it may also lead to a prolonged examination of areas of the solution space that contain solutions of poor quality. This is prevented by allowing the current global best solution to be selected as the active solution with a probability of 0.1 (line 20). This heuristic might not be necessary if a more advanced cooling schedule is used. The present work is intended to be a proof of the concept that the SA may be used to improve the convergence speed of the ACS, hence the geometric cooling schedule was adopted for its simplicity. In future work a more advanced schedule, e.g. the adaptive cooling schedule by Lam [33], could be applied. Figure 2 presents the active solution selection procedure (SA_select_solution). The process iterates over the set of solutions built by the ants. If the cost of an ant's solution is lower than the cost of the active solution, it replaces the active solution (lines 3-5 in Fig. 2). Otherwise, the ant's solution (of worse quality) may replace the active solution with a probability calculated according to the Metropolis criterion from the SA (lines 6-7). As the temperature is lowered, the probability of accepting a worse solution goes down to 0 and the process becomes equivalent to that of the ACS. The initial temperature T_0 plays an important role in the SA. In our work we applied the idea of an adaptive temperature calculation which was proposed in [3]. The calculation requires a sample of randomly generated solutions whose values (costs) are used to calculate the initial temperature according to:

T_0 = −(∆C + 3σ_∆C) / ln γ, (6)

where ∆C is the mean of the absolute differences between the costs of consecutive pairs of solutions from the sample, σ_∆C is the sample standard deviation and γ is a parameter denoting the probability of accepting a worse solution, i.e. one with a higher cost (note that ln γ < 0 for γ < 1, hence T_0 > 0). The idea behind Eq. 6 is based on the central limit theorem, which states that the mean of a large sample of independent random variables is approximately normally distributed; hence, almost all (approx. 99.7%) absolute differences between the quality of randomly generated solutions fall in the range (∆C − 3σ_∆C, ∆C + 3σ_∆C). Knowing an approximation of the highest difference in quality between a pair of solutions makes it possible to calculate the initial temperature so that the probability of accepting a worse solution is γ. Although the temperature initialization requires additional computations, it does not increase the asymptotic complexity of the ACS algorithm. In our experiments a sample of 1000 random solutions was used due to its negligible additional cost; however, a much smaller number could also be acceptable.
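The temperature initialization of Eq. 6 and the active-solution selection of Fig. 2 can be sketched as follows. This is only an illustration: the reconstructed form of Eq. 6, the representation of solutions as (cost, tour) pairs and the function names are assumptions of the example.

import math
import random
import statistics

def initial_temperature(sample_costs, gamma):
    # Eq. 6: approximate the largest cost difference between random solutions
    # by mean + 3 * std of the absolute differences between consecutive
    # sample costs, and choose T_0 so that such a difference is accepted
    # with probability gamma.
    deltas = [abs(a - b) for a, b in zip(sample_costs, sample_costs[1:])]
    return -(statistics.mean(deltas) + 3.0 * statistics.stdev(deltas)) / math.log(gamma)

def select_active_solution(active, ant_solutions, temperature, global_best, rng=random):
    # Active-solution selection of the ACS-SA (cf. Fig. 2).  Solutions are
    # (cost, tour) pairs.  A better ant solution always replaces the active
    # one; a worse one replaces it with the Metropolis probability.  With
    # probability 0.1 the global best is taken instead, to avoid drifting
    # into poor regions of the search space.
    for cost, tour in ant_solutions:
        if cost < active[0]:
            active = (cost, tour)
        elif rng.random() < math.exp(-(cost - active[0]) / temperature):
            active = (cost, tour)
    if rng.random() < 0.1:
        active = global_best
    return active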
Combining Enhanced ACS with the SA As described in Sec. 3.1, the EACS differs only slightly from the ACS. The differences are minor and concern the solution construction process and the LS application, hence it is straightforward to apply exactly the same ideas to incorporate the SA into the EACS as in the proposed ACS-SA algorithm. Due to its more exploitative behavior (relative to the ACS), the EACS is even more susceptible to getting trapped in local minima, hence it should also benefit from the SA component. The resulting algorithm will henceforth be denoted as the EACS-SA. Efficient Local Search for the SOP Even though the ACS, MMAS and related algorithms perform competitively with other nature-inspired metaheuristics, their convergence can still be improved with a problem-specific local search [14]. When combined with the LS, the ACS is responsible for finding a candidate solution, while the aim of the LS is to improve it by performing small changes leading to a neighboring solution of better quality. In this section we start with a description of the state-of-the-art LS heuristic for the SOP and later propose a modified version which incorporates the SA component. SOP-3-exchange Gambardella et al. [21] proposed an efficient LS heuristic for the SOP called the SOP-3-exchange. It adapts the 3-opt heuristic known from the TSP to the SOP without an increase in the algorithm's time complexity. The SOP-3-exchange belongs to the family of edge-exchange procedures, in which a new solution is generated by replacing k existing edges with another set of k edges for which the cost of the solution is lower. This operation is usually called a k-exchange, and the value of k can be fixed (typically 2 or 3) or can vary, as in the Lin-Kernighan heuristic [28]. Starting from the initial solution and applying the k-exchange iteratively until no further improving exchange exists leads to a k-optimal solution. This process requires, in the worst-case scenario, O(n^k) time. During a k-exchange procedure k existing edges are removed, producing k disjoint paths which are then reconnected with k new edges. In some cases, the reconnection of the paths requires that some of them be reversed; e.g. in the case of a 2-opt move and a closed path <0, ..., i−1, i, i+1, ..., h−1, h, h+1, ..., n−1>, there are two possible ways to reconnect the subpaths after the removal of the (i, i+1) and (h, h+1) edges, namely <..., h+1, i+1, ..., h−1, h, i, i−1, ...> and <..., i−1, i, h, h−1, ..., i+1, h+1, ...>; both require a reversal of a subpath. The reversal, however, is problematic for the SOP because the distances between the nodes are asymmetric, hence the length of the path after the reversal has to be recalculated, which requires O(n) time. Because the cost of a k-opt move should be calculated in constant time, an efficient implementation of the k-opt heuristic for the SOP should be restricted only to path-preserving exchanges [21]. The smallest k that allows a path-preserving exchange is k = 3; the corresponding move, denoted as the path-preserving-3-exchange, is shown in Fig. 3. By removing the (h, h+1), (i, i+1) and (j, j+1) edges and adding the (h, i+1), (j, h+1) and (i, j+1) edges, the two neighboring subpaths are swapped, thus preserving the relative order of their elements. After performing the path-preserving-3-exchange one would still need to verify whether the precedence constraints for the two subpaths are preserved. This requires O(n^2) time in the general case but can be avoided with the method proposed by Gambardella et al. [21].
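The move itself amounts to reordering three slices of the route, which can be expressed in a single line. The function below is only an illustrative sketch: precedence checking and the gain computation (based on the three removed and three added edges) are deliberately omitted, and the name is an assumption of the example.

def path_preserving_3_exchange(route, h, i, j):
    # Swap the two adjacent subpaths (h+1 .. i) and (i+1 .. j) of 'route'
    # (indices h < i < j).  The relative order inside each subpath is kept,
    # so no subpath reversal and no O(n) cost recalculation are needed.
    return route[:h + 1] + route[i + 1:j + 1] + route[h + 1:i + 1] + route[j + 1:]

# Example: with h = 1, i = 3, j = 6 the route [0, 1, 2, 3, 4, 5, 6, 7]
# becomes [0, 1, 4, 5, 6, 2, 3, 7].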
There are two procedures necessary to reduce the computation time. The first requires keeping the lexicographic order while searching for a path-preserving-3-exchange. The second is the use of a labeling method. Figure 3 shows how the route changes when applying the path-preserving-3-exchange. The three indices h, i, and j (h < i < j) define two subpaths in the route: the left path (h+1, ..., i) and the right path (i+1, ..., j). The subpaths are swapped as a result of performing the exchange, i.e. the right path comes before the left path. This can only happen if there are no precedence constraints between the considered node and the nodes in the left path. The path-preserving-3-exchange search as proposed by Gambardella et al. [21] consists of forward and backward searches for feasible path-preserving-3-exchanges. The forward search involves incrementing j iteratively, thus increasing the length of the right path by one. This requires checking only the precedence constraints between the elements of the left path and the node considered for inclusion into the right path. Eventually, a precedence constraint is hit and the procedure is repeated with the left path extended by a single element (by incrementing i) and the right path reset to a single element, i.e. (j), j = i + 1. After all of the possibilities are exhausted, h is incremented and the process repeats for all possible i and j values (i < j < n). This leads to O(n^3) possible pairs of subpaths, each requiring O(n) constraint verification, hence a total complexity of O(n^4). The cost of the constraint verification can be reduced to O(1) thanks to the labeling procedure, which works as follows. Each time the left subpath is extended with a new node u (during the SOP-3-exchange), mark(v) is set to count for every node v for which there exists a precedence constraint between u and v. The count is a variable initially set to 0 and incremented each time the left path grows, i.e. h is incremented. Thanks to this procedure, each time the right path is extended with a node x, one only needs to check the value of mark(x). If the value equals count, then the node at index j (in the right path) has to be visited after the nodes in the left path, hence the two paths cannot be swapped. This reduces the complexity of the whole search for a feasible path-preserving-3-exchange to O(n^3), which is asymptotically equal to the complexity of the 3-opt heuristic used to solve the TSP. The forward search for the path-preserving-3-exchange considers only exchanges defined by indices i, j and h such that 0 < h < i < j < n, where n is the number of nodes. The backward search is analogous to the forward search, but the left and right paths are expanded in the direction of decreasing indices, i.e. the left path "moves" from the end of the sequence to the beginning. Summarizing, the time complexity of finding a single profitable path-preserving-3-exchange using the described procedure is O(n^3). This is still expensive, as the procedure is applied (to a single solution) in a loop until no further improving move is found, and it has to be repeated for the subsequent solutions.
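The forward search with the constant-time feasibility check can be sketched, for a single starting index h, as follows. This is an illustrative simplification: a Python set of blocked nodes plays the role of the mark/count labels (the integer labels merely avoid clearing the marks between restarts), and the data layout (dist as a cost matrix, successors_of as precedence lists) is an assumption of the example.

def forward_search_gain(route, h, dist, successors_of):
    # Search for the most profitable path-preserving-3-exchange that starts
    # at index h, i.e. the best pair (i, j) with h < i < j.  Returns
    # (gain, i, j); gain > 0 means the exchange shortens the route.
    n = len(route)
    blocked = set()            # nodes that must follow some left-path node
    best = (0, None, None)
    for i in range(h + 1, n - 1):
        # Extend the left path (h+1 .. i) with route[i]; record its successors.
        blocked |= set(successors_of[route[i]])
        for j in range(i + 1, n - 1):
            # route[i+1 .. j] would be moved in front of the left path, which
            # becomes infeasible as soon as a node that must follow the left
            # path is reached (the O(1) check replacing an O(n) scan).
            if route[j] in blocked:
                break
            removed = (dist[route[h]][route[h + 1]]
                       + dist[route[i]][route[i + 1]]
                       + dist[route[j]][route[j + 1]])
            added = (dist[route[h]][route[i + 1]]
                     + dist[route[j]][route[h + 1]]
                     + dist[route[i]][route[j + 1]])
            gain = removed - added
            if gain > best[0]:
                best = (gain, i, j)
    return best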
Gambardella et al. [21] proposed two additional changes to reduce the algorithm's computation time. The first is to limit the search to only a subset of all potential moves. By default, the SOP-3-exchange considers, for each index h, all valid i and j indices. Assuming most of the changes will involve relatively short paths, the i values can be restricted to h+1, h+2, h+3 for the forward procedure and to h−1, h−2, h−3 for the backward procedure. This version was named the OR-exchange [21]. The second change involves the use of two additional heuristics, i.e. don't look bits and the don't push stack. Don't look bits is a data structure that was proposed by Bentley [6] and works as follows. A bit is associated with each node of the solution. At the beginning all the bits are turned off; a node's bit is turned on when the SOP-3-exchange starts looking for a profitable exchange originating from that node. If the don't look bit is turned on, the corresponding node is ignored by the subsequent SOP-3-exchange searches until the node is involved in a profitable path-preserving-3-exchange; then the bits of all six pivot nodes (h, h+1, i, i+1, j, j+1) are turned off. The use of the don't look bits aims to focus the search on the changing parts of the solution. The purpose behind the use of the don't push stack is similar: it contains the nodes h to be selected as starting points of a path-preserving-3-exchange. At the beginning the stack is initialized with all of the nodes. During the search a node h is removed from the stack and, if a feasible move originating from node h is found, the six nodes involved in the exchange are pushed onto the stack (if they do not belong to it already). An additional benefit of using the don't push stack is that the linear order in which the nodes are considered during the search for a profitable path-preserving-3-exchange is broken. The pseudocode of the forward search is presented in Fig. 5 (the search in the backward direction is analogous). It starts with a given index h that denotes the starting point of a possible path-preserving-3-exchange and searches for the remaining two points, denoted by indices i and j. The labeling procedure is applied incrementally (lines 4-6). The function is_move_accepted in line 12 simply checks whether the proposed decrease in the solution value is better than the current best, but it can be replaced by a more advanced criterion, as will be shown later. Improving SOP-3-exchange Efficiency with SA The SOP-3-exchange LS is efficient in improving solutions generated by the ants; however, the improvement process is greedy and only better (downhill) moves are accepted. This makes it possible to reach a local optimum quickly, but it also makes the algorithm unable to find any better solution that would require making at least one uphill move. Similarly to our idea of incorporating the SA into the ACS and EACS algorithms, we propose to include the SA decision process in the SOP-3-exchange in order to make it more explorative. The proposed modification is easy to implement, as it only requires changing the greedy criterion that decides whether to accept a given subpath exchange in the forward search for a path-preserving-3-exchange (line 12 in Fig. 5; the backward search is modified analogously). The pseudocode of the proposed modification is shown in Fig. 6. The decision whether to accept the proposed move (subpath exchange) is made based on the change (decrease) in the solution value and the value of the best move found so far. If the proposed move is better than the current best, it is always accepted. Otherwise, if it results in the same decrease of the solution length, it is accepted with a probability of 10% (lines 4-5 in Fig. 6). This allows accepting moves which do not change the solution value but which result in a different relative order of the solution nodes. Finally, if the proposed move is worse than the best move found so far, it is accepted with a probability calculated using the Metropolis criterion, as in the SA.
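The modified acceptance criterion just described can be written as a small predicate. The snippet below is only a sketch of the rule from Fig. 6; the function name and the use of move gains (decreases in the solution length) as plain numbers are assumptions of the example.

import math
import random

def is_move_accepted_sa(gain, best_gain, temperature, rng=random):
    # 'gain' is the decrease in the solution length offered by the proposed
    # subpath exchange; 'best_gain' is the best decrease found so far.
    if gain > best_gain:
        return True                      # strictly better: always accept
    if gain == best_gain:
        return rng.random() < 0.10       # equal: accept with 10% probability
    # Worse than the best move found so far: Metropolis criterion.
    return rng.random() < math.exp((gain - best_gain) / temperature)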
Similarly to the ACS-SA, there are two parameters related to the SA component of the proposed SOP-3-exchange-SA algorithm, namely λ_LS and γ_LS. The former is used in the geometric cooling schedule to lower the temperature T_LS, while γ_LS is related to the initial probability of accepting a worse move. There is, however, a slight difference in the temperature initialization relative to the ACS-SA. In the ACS-SA the initial temperature is calculated based on a sample of differences in the quality (length) of randomly generated solutions, just before the main computations. In the SOP-3-exchange-SA the sample comes from the differences in the solution quality (delta values) resulting from the subsequent path-preserving-3-exchanges considered during the initial runs of the SOP-3-exchange-SA (lines 11-16 in Fig. 6). In other words, there is no dedicated temperature initialization phase in the SOP-3-exchange-SA and the sample of delta values is collected on the run in order not to slow down the whole algorithm. After a sample of 10^5 delta values (a value found experimentally) is collected, the initial value of the temperature T_LS is calculated, and in subsequent invocations of the SOP-3-exchange-SA the temperature is reset to this initial value without recalculation. Computational Experiments A series of computational experiments was conducted in order to evaluate the performance of the proposed algorithms. In the first part of the experiments we focused on the efficiency of the ACS-SA and EACS-SA used alone, i.e. without the problem-specific LS. In the second part the focus was placed on the efficiency of the algorithms coupled with the SOP-3-exchange and SOP-3-exchange-SA LS heuristics. The ACS and EACS require that a number of parameters be set. Based on preliminary computations and suggestions from the literature, we used the following settings in our experiments: the number of ants m = 10; β = 0.5; ψ = 0.01 and ρ = 0.1 for the local and global pheromone evaporation ratios, respectively; and q_0 = (n − 20)/n, where n is the size of the problem. The computations were repeated 30 times for each configuration of the parameter values and each problem instance. The computations were conducted on a machine equipped with a Xeon E5-2680v3 12-core CPU clocked at 2.5 GHz, although a single core was used per run. All algorithms were implemented in C++ and compiled with the GNU compiler with the -Ofast switch. ACS-SA Parameter Tuning The first part of the experiments was focused on the behavior of the ACS-SA algorithm depending on the values of the SA-related parameters. The proposed ACS-SA algorithm uses a simple exponential cooling schedule T_k = T_0 · λ^k, where λ < 1 is the cooling factor and T_0 is the initial temperature. Although the exponential cooling schedule does not guarantee convergence to a global optimum, it has the advantage of being easy to implement and often performs well in practice [31]. Preliminary computations showed that the most important factor for the performance of the ACS-SA was the λ parameter, which directly influences the speed of the SA convergence. The best performance was observed for λ ≥ 0.999, for which the probability of accepting worse quality solutions and, hence, of escaping local minima remained high for a relatively long time.
It is not without significance that the algorithm was run for 10^5 iterations; for shorter or longer runs other parameter values might be preferable [31].
Table 1: Table containing the p-values of the post-hoc pairwise comparison between the results of the ACS-SA with various (λ, γ) values (shown in the second row) according to the non-parametric, two-sided multiple comparison procedure by Mack and Skillings at α = 0.05 [30]. The test itself corrects for the Type I family-wise error. The +/- symbol after a value denotes that the configuration in a row was significantly better/worse than the configuration in a column.
A number of "promising" values was selected for a more thorough investigation, namely λ ∈ {0.999, 0.9995, 0.9999}. The initial temperature T_0 was calculated for each problem instance during the initialization phase so that the probability of accepting a worse solution (an uphill move) at the beginning was approximately equal to the specified probability γ (a parameter, independent of the problem instance). The mean difference between successive solutions was estimated based on a sample of randomly generated solutions. In our experiments we considered γ ∈ {0.1, 0.5, 0.9}, leading to a total of 9 combinations of λ and γ. The algorithm was run for a total of 14 SOP instances from the TSPLIB repository, namely: ft53. We used statistical tests to verify whether the results for the various values of the parameters differed significantly. The proposed experimental design can be viewed as a two-way (two-factor) layout in which the main factor is the combination of the λ and γ values, while the second factor (also called a blocking factor) is the problem instance (13 instances in our case) [30]. More specifically, the design can be described as a randomized block design with an equal number of replications per treatment-block combination. A suitable non-parametric (distribution-free) statistical test was proposed by Mack and Skillings and is an equivalent of a parametric two-way ANOVA [34]. The null hypothesis, H_0, which is of interest here is that of no differences in the medians (of the solution quality) for the algorithms with the various λ and γ values considered here (a total of 9 combinations). Rejecting the null hypothesis would mean that the different values of the λ and γ parameters lead to varying performance of the ACS-SA. The test requires that the Mack-Skillings statistic (MS) be computed, which is then compared with a critical value ms_α at the α level of significance (α = 0.05 in our case) [30]. The null hypothesis H_0 is rejected if MS ≥ ms_α. In our case MS ≈ 72.68 while the critical value ms_0.05 ≈ 15.23, hence H_0 was rejected, providing rather strong evidence that the values of λ and γ have a significant impact on the quality of the results generated by the ACS-SA. This is an expected result, because the value of λ should have a strong effect on the search trajectory of the SA. After the rejection of H_0, we can apply a post-hoc test to compare the individual pairs of algorithm results obtained for the respective pairs of λ and γ values. A suitable asymptotically distribution-free, two-sided, multiple comparison procedure using within-block ranks was proposed by Mack and Skillings [30,34]. Table 1 contains the final p-values of the pairwise comparison. As can be observed, in most cases there were no significant differences between the results of the ACS-SA with the various λ and γ values. The only exception was the configuration λ = 0.9999 and γ = 0.1, for which the results were significantly better 6 out of 8 times.
On the other hand, configuration λ = 0.9999 and γ = 0.9 was worse 7 out of 8 times. This shows that the SA component of the ACS-SA has the strongest influence if the temperature is decreased slowly. It is important to properly adjust the initial probability γ of accepting a worse quality solution and, hence, the initial temperature T 0 . If the probability is high, the algorithm easily accepts inferior solutions, particularly at the beginning of the computations, and drifts away from the good quality solutions. It is worth emphasizing that these observations are valid for the computation budget (time) used in the experiments; greatly increasing the computation time could show even better convergence for higher γ values. Figure 7 shows the convergence plots for the ACS-SA with various λ and γ levels: for λ = 0.999 the temperature drops relatively quickly and convergence of the ACS-SA resembles that of the ACS. For λ = 0.9999 the temperature drops more slowly and the algorithm has a greater chance of escaping the local minima for a longer period of time. By increasing the initial temperature (as for γ = 0.9) we can extend the initial "free wandering" phase at the expense of slower convergence. ACS-SA and EACS-SA Performance The first experiment showed that the SA component indeed had a significant impact on ACS-SA search convergence. In the subsequent experiment we focused on a comparison between the ACS-SA relative to the ACS. We also considered the EACS and the EACS combined with the SA (EACS-SA). Both the ACS-SA and the EACS-SA were run with λ = 0.9999 and γ = 0.1, chosen based on the previous experiment. To make the comparison fair, all of the algorithms were run with a time limit of 60 seconds of CPU time. Although the limit was relatively low it was sufficient to detect differences in the performance of the algorithms. A total of 20 SOP instances of varying size were selected from the TSPLIB repository. The boxplots of the mean solution error are shown in Fig. 8 and Fig. 9. The differences between the quality of the solutions generated by the algorithms are clearly visible. For the smaller instances, performance of the ACS and EACS was relatively similar and, in most cases, worse than that of the ACS-SA and EACS-SA, respectively. The differences became more distinct for larger instances (up to 380 nodes), for which the EACS outperformed the ACS in most cases. The ACS-SA generally beat the ACS but even better performance was achieved by the EACS-SA version, particularly for the largest instances. The results were compared statistically to make the comparison more complete. For each problem instance, the Kruskal-Wallis non-parametric one-way analysis of variance test (an extension of the Mann-Whitney U test) with α = 0.05 was applied to check the hypothesis that the results of the four algorithms came from the same distribution. The hypothesis was rejected in 19 out of 20 cases meaning that the results of the algorithms differed significantly. In such cases a post-hoc test was applied to compare all pairs of results. For this purpose the Bonferroni-Dunn test was employed with a family-wise Type I error correction (α F W = 0.05) [40]. The results are summarized in Tab. 2. For each pair of algorithms, only the final verdict is shown with a letter indicating the algorithm that achieved significantly better results than the others. The ACS-SA outperformed the ACS in 12 cases, while never generating worse results. 
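The per-instance comparison described above can be sketched with standard statistical routines. The snippet below is only an approximation of the procedure used in the paper: scipy's Kruskal-Wallis test is the omnibus test named in the text, but the post-hoc step uses pairwise Mann-Whitney U tests with a Bonferroni-corrected alpha as a simple stand-in for the Bonferroni-Dunn procedure.

from itertools import combinations
from scipy import stats

def compare_algorithms(results, alpha=0.05):
    # 'results' maps an algorithm name to the list of 30 final solution costs
    # obtained for one SOP instance.
    names = list(results)
    _, p_value = stats.kruskal(*(results[name] for name in names))
    if p_value >= alpha:
        return []  # no evidence that the algorithms differ on this instance
    pairs = list(combinations(names, 2))
    corrected_alpha = alpha / len(pairs)   # family-wise Type I error control
    significant = []
    for a, b in pairs:
        _, p = stats.mannwhitneyu(results[a], results[b], alternative="two-sided")
        if p < corrected_alpha:
            significant.append((a, b, p))
    return significant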
The Simulated Annealing component in both the ACS-SA and the EACS-SA does not increase the asymptotic time complexity of the algorithms. Only the initial temperature calculation requires a number of random solutions to be constructed, while the main ACS loop is little affected by the Metropolis rule and the cooling schedule computations. Figure 10 shows the mean number of iterations performed by each of the considered algorithms within a time limit of 60 sec. As can be observed, the number of iterations depends mostly on the size of the problem instance, while the differences between the algorithms are relatively small. The EACS and EACS-SA are faster than the other two algorithms due to the less expensive solution construction process, which builds a new solution by reusing significant parts of a solution from the previous iteration. SOP-3-exchange-SA Parameter Tuning Similarly to the ACS-SA and EACS-SA, the SOP-3-exchange-SA algorithm has two additional, SA-related parameters, namely λ_LS and γ_LS. Based on preliminary computations, several values were preselected for further investigation, namely λ_LS ∈ {0.8, 0.9, 0.95, 0.99} and γ_LS ∈ {0.1, 0.5, 0.9}. All 12 combinations of the parameter values were considered. For each combination the EACS algorithm with the SOP-3-exchange-SA was run on a set of 14 instances of sizes from 400 to 700 selected from the SOPLIB2006 repository, namely: R.400.100.15. The non-parametric Mack-Skillings test was used to verify whether there were any significant differences between the results for the different λ_LS and γ_LS values, similarly to Sec. 5.1. The null hypothesis H_0, stating that there were no differences between the medians of the solution quality obtained for the different parameter values, was rejected if MS ≥ ms_α, where MS is the Mack-Skillings statistic and ms_α is the critical value at the specified level of significance α. In our case, MS ≈ 1840.75 and ms_0.05 ≈ 19.66, hence H_0 was rejected, providing strong evidence of significant differences between the quality of the results of the EACS-SA with the SOP-3-exchange-SA obtained for the different λ_LS and γ_LS values. A post-hoc multiple comparison test by Mack and Skillings [30] was applied to find out for which values of the parameters the results were of better quality. Table 3 contains the computed p-values, where the value at the intersection of the i-th row and j-th column denotes the p-value of the comparison between the results obtained for the values of λ_LS and γ_LS corresponding to the i-th row and j-th column, respectively. An analysis of the test results revealed that for λ_LS = 0.99 and γ_LS = 0.1 the results were significantly better than for any other combination of values. Simultaneously, the worst configuration, in terms of solution quality, was λ_LS = 0.8 and γ_LS = 0.1; hence the γ_LS parameter is of lower significance than λ_LS, which directly influences the speed of the temperature decrease in the SA component of the SOP-3-exchange-SA. Generally, the best results were obtained for λ_LS equal to 0.95 and 0.99. Comparison of algorithms The last part of the experiments concerned the performance of the ACS, ACS-SA, EACS and EACS-SA combined with the LS algorithms, i.e. the SOP-3-exchange and SOP-3-exchange-SA. This gives a total of 8 algorithm combinations. To make the comparison fair, the algorithms were run with the same time limit of 120 seconds and the same values of parameters (where appropriate).
The algorithms were run on SOP instances (48 in total) from the SOPLIB2006 repository [35]. It is worth noting, that the EACS with the SOP-3-exchange is the current state-of-the-art metaheuristic for the SOP [23]. Table 3: Table containing the p-values of the post-hoc pairwise comparison between the results of the EACS-SA with the SOP-3-exchange-SA local search algorithm with various (λ LS , γ LS ) values according to the non-parametric, two-sided multiple comparison procedure by Mack and Skillings at α = 0.05 [30]. The +/-symbol after a value denotes that the results for the configuration in a row were significantly better/worse than those obtained for the configuration in a column. Table 4: Table containing the p-values of the post-hoc pairwise comparison between the results of the ACS, ACS-SA, EACS and EACS-SA algorithms according to the non-parametric, two-sided multiple comparison procedure by Mack and Skillings at α = 0.05 [30]. The +/-symbol after a value denotes that the results for the algorithm in a row were significantly better/worse than those obtained for the algorithm in a column. The LS1 and LS2 subscripts denote the local search method used, i.e. the SOP-3-exchange and SOP-3-exchange-SA, respectively. A quick analysis of the obtained results showed noticeable differences in the efficiency of the investigated algorithms. The experiment design allows to check for statistically significant differences between the algorithms by using the non-parametric Mack-Skillings test for a two-factor layout. The first factor is the algorithm that is applied while the second (blocking) factor is the SOP instance that is solved. The null hypothesis H 0 of our interest is that there are no differences in the quality of the solutions generated by the algorithms. The rejection of H 0 would mean that the algorithms differ in the quality of generated solutions. The critical value for the test at level of significance equal to 0.05 is ms 0.05 ≈ 14.03 and the Mack-Skillings statistic is M S ≈ 8843.11, meaning that M S > ms 0.05 , hence H 0 was rejected. Rejection of the null hypothesis allows us to apply a post-hoc test (also proposed by Mack and Skillings [30]) to perform a pairwise comparison of the algorithms. The resulting p-values are shown in Tab. 4. As can be observed, all values are either close to 0 or close to 1, meaning that the differences are either sharp or nonexistent, respectively. Not surprisingly, all EACS variants obtained significantly better results than the ACS-based algorithms. The most efficient algorithm was the EACS with the SOP-3-exchange-SA LS, which obtained results that were significantly better than any of the other remaining algorithms. The second best was the EACS-SA with the SOP-3-exchange-SA LS. Out of the four ACS variants the ACS with SOP-3-exchange-SA performed better than the other three, thus confirming the efficiency of the proposed SOP-3-exchange-SA LS. The worst performing were the ACS-SA with the SOP-3-exchange and the ACS-SA with the SOP-3-exchange-SA. The poor performance of the ACS-SA variants can be explained by the weakened emphasis on the exploitation which admittedly increases the probability of escaping from local optima but also slows the overall convergence of the algorithm, which is clearly visible if the computational budget is modest, as in the experiment conducted here (120 sec.). Even though some of the algorithms can be seen as generally more efficient than others, this is not true in every case, as can be observed in Tab. 
5, in which the sample means and sample standard deviations are presented for the EACS and EACS-SA algorithms. The two most efficient algorithms, in terms of solution quality, were the EACS with the SOP-3-exchange and the EACS-SA with the SOP-3-exchange LS. While the former achieved lower mean values for more problem instances, the latter performed particularly well for instances of sizes up to 500. For the largest instances (R.600.* and R.700.*), the EACS with the SOP-3-exchange obtained the lowest mean values in 14 out of 16 cases. This suggests that the EACS-SA algorithm did not have enough time to converge within the specified time limit. Similar observations can be made from an analysis of the best solutions found by the algorithms, presented in Tab. 6. The table also contains the values of the best-known solutions; some of these were obtained by Gouveia and Ruthmair by using an exact method (branch-and-cut) [26] and by Papapanagiotou et al. [36], while the rest were obtained by metaheuristics, including the EACS with the SOP-3-exchange [23]. For 18 SOP instances, all four algorithms were able to find the best-known solution at least once per 30 runs. For 10 SOP instances new best solutions were found by the proposed algorithms. The EACS with the SOP-3-exchange-SA found new best solutions for 6 instances, including R.300.1000.15. Overall, the best known or improved solutions were obtained by at least one of the algorithms in 37 out of 48 cases (77%). All algorithms struggled most with instances in which the number of precedence constraints was smallest, i.e. 1% (instances R.*.*.1), which suggests that there is still some room for improvement of the LS algorithms. The results presented above confirm that the proposed incorporation of the SA into the main algorithm (EACS) and into the local search (SOP-3-exchange) is able to improve the quality of the generated solutions to the SOP. In order to further clarify the differences between the existing approach, i.e. the EACS with the SOP-3-exchange LS, and the proposed EACS-SA with the SOP-3-exchange-SA, both were run on the SOP instances from the SOPLIB2006 repository with the computation time increased to 600 seconds. This is a five-fold increase compared with the time limit used in the experiments presented above. By giving the algorithms more time, we lower the risk of one algorithm dominating another merely because of the limited time. The results are presented in Tab. 7. In most cases the results of the EACS-SA with the SOP-3-exchange-SA were of better quality than those obtained by the EACS with the SOP-3-exchange, although the relative differences between the algorithms depended on the SOP instance being solved. The results were checked for statistically significant differences using the non-parametric Wilcoxon rank sum test at a significance level of α = 0.05 (the respective p-values are reported in the table). In 33 out of 48 (69%) cases (instances) the solutions generated by the EACS-SA with the SOP-3-exchange-SA were of significantly better quality than those generated by the EACS with the SOP-3-exchange. In 4 cases (8%) the results of the former algorithm were significantly worse and in 11 cases (23%) no significant differences were observed. Taking into account the best solutions generated during the 30 executions of the algorithms for each of the SOP instances considered, the proposed algorithm reached the best-known results in 31 cases, and in 10 cases new best solutions were found.
Because of the increased computation time limit, in 7 out of those 10 cases the results were an improvement over those presented in Tab. 6. To summarize, the best-known or improved results were generated for 41 out of the 48 (85%) SOP instances considered here. The EACS with the SOP-3-exchange found the best known results in 18 cases; however, it found no new best solutions in any case. All of the SOP instances for which the EACS-SA with the SOP-3-exchange-SA generated significantly worse results than the EACS with the SOP-3-exchange are of the form R.*.1000.1, which suggests either an overall inferior convergence of the former algorithm for this kind of instance, or an insufficient time limit to match the convergence of the latter algorithm. In fact, the second possibility seems to be true, because for the smallest of the R.*.1000.1 instances, i.e. R.200.1000.1, the EACS-SA with the SOP-3-exchange-SA generated significantly better results, and for the second smallest instance, i.e. R.300.1000.1, there were no significant differences between the results of the two algorithms. To confirm our assumption, both algorithms were run for the instances R.300.1000.1, R.400.1000.1, R.500.1000.1, R.600.1000.1, and R.700.1000.1 with the time limit increased (doubled) to 1200 seconds per run. The results are presented in Tab. 8. As can be seen, the advantage of the EACS with the SOP-3-exchange over the EACS-SA with the SOP-3-exchange-SA disappeared, and both algorithms generated results of a similar quality. A statistical comparison based on the non-parametric Wilcoxon rank sum test showed no significant differences for the R.400.1000.1, R.500.1000.1 and R.600.1000.1 instances. Surprisingly, the increased time limit allowed the EACS-SA to obtain significantly better results for the two remaining instances, i.e. R.300.1000.1 and R.700.1000.1, although the advantage was small relative to the optimum values. Considering all the results, the efficiency of the proposed algorithm (in terms of the quality of solutions) was statistically significantly better than that of the original approach for approx. 73% of the SOP instances, while never being worse. However, a sufficient computation time is necessary to reach this level of performance. In most cases 600 seconds was enough, whereas for a few instances a limit of 1200 seconds was necessary. In practical applications, the algorithm could be sped up by using parallel computations. The algorithms differ not only in the quality of the generated solutions but also in their relative speed. The SA component does not affect the asymptotic time complexity of the ACS and EACS, but it may influence the solution search "trajectory", thus possibly affecting the runtime, particularly if a local search is used. The SOP-3-exchange tries to improve a solution by searching only for improving changes (moves), and its time complexity depends on the relative order of the nodes in the solution. If the solution changes only slightly from iteration to iteration, the runtime shortens because the search focuses only on the changed parts of the solution. In contrast, the SOP-3-exchange-SA, due to the SA component, may also accept a number of worse (uphill) moves, hence the overall runtime should increase. Figure 11 shows a bar plot of the average number of iterations performed by the EACS and EACS-SA with both LS variants vs the size of the SOP instance. As expected, the algorithms with the SOP-3-exchange-SA were slower than the algorithms with the SOP-3-exchange.
The fastest algorithm was the EACS with the SOP-3-exchange, beating the EACS-SA with the same LS. Interestingly, the slowest algorithm was the EACS with the SOP-3-exchange-SA; it was even slower than the EACS-SA with the same LS. This is probably due to the fact that the EACS can relatively easily get trapped in a "deep" local minimum from which an escape is difficult even if the SOP-3-exchange-SA accepts a number of uphill moves. On the other hand, the EACS-SA focuses on a larger number of different solutions during the search, some of which are less time-consuming to improve by the LS. Finally, the larger the size of the instance, the lower the number of iterations performed by the algorithms. Summary The Ant Colony System and particularly its enhanced version (EACS) are competitive metaheuristics whose efficiency has been shown in a number of cases [14,21,23]. Nevertheless, we have shown that the search process of the ACS and EACS can still be improved with ideas taken from Simulated Annealing. Specifically, instead of increasing the pheromone values based on the current best solution, the proposed ACS-SA and EACS-SA algorithms increase the pheromone values based on the current active solution that is chosen from among all the solutions constructed by the ants. The active solution may not necessarily be the current best solution, as it is selected probabilistically by using the Metropolis criterion from the SA.
Table 7: Results of the EACS with the SOP-3-exchange (I) and the EACS-SA with the SOP-3-exchange-SA (II) algorithms for the SOPLIB2006 instances with the time limit set to 600 sec. Verdict denotes the algorithm for which the obtained results were of significantly better quality than the results of the other algorithm according to the non-parametric Wilcoxon rank-sum test at a level of significance α = 0.05. Cases for which there was no significant difference are marked with a "-".
Table 8: Results of the EACS with the SOP-3-exchange (I) and the EACS-SA with the SOP-3-exchange-SA (II) algorithms for the selected SOPLIB2006 instances with the time limit set to 1200 sec. The meaning of the columns is as before.
This change weakens the exploitative focus of the ACS and EACS, thus increasing the chance of escaping local optima. The computational experiments on a set of SOP instances from the TSPLIB repository and the subsequent statistical analyses have shown that in most cases the resulting ACS-SA and EACS-SA algorithms perform significantly better than the original algorithms. An efficient local search heuristic is necessary for state-of-the-art performance in solving the SOP. Based on the same SA inspirations, we proposed an enhanced version of the state-of-the-art SOP-3-exchange heuristic by Gambardella et al. [21]. The resulting SOP-3-exchange-SA algorithm is more resilient to getting trapped in local minima, at the expense of an increased computation time. The computational experiments conducted on a set of 48 SOP instances sized from 200 to 700 showed that the proposed EACS and EACS-SA with the SOP-3-exchange and SOP-3-exchange-SA local searches are in many cases able to find solutions of better quality than the original EACS with the SOP-3-exchange (a current state-of-the-art metaheuristic for the SOP), within the same computation time limit. In fact, new best solutions were obtained for 10 instances. In total, the best known or improved solutions were obtained at least once for 85% of the SOP instances considered here.
Although the proposed modifications are easy to implement and improve the performance of the original algorithms, they have some minor drawbacks. First, they increase the computation time relative to the original algorithms. Second, they require setting the values of the new parameters related to the SA cooling schedule (λ and γ). Also, the relatively poor performance on SOP instances with a small number (1%) of precedence constraints shows that there is still room for improvement, both in the ACS-SA and EACS-SA and in the local search methods. In the future, a more advanced cooling schedule could be used to improve the convergence of the SA component of the proposed algorithms. A good candidate seems to be the adaptive cooling schedule proposed by Lam [33], although it requires a complex parameter setting and a method of controlling how much the subsequent solutions differ from one another. An interesting idea could also be to activate the SA component only when stagnation of the search process is detected. Because the proposed fusion of the ACS and SA is problem-agnostic, one could also try to apply it to other difficult combinatorial optimization problems. The performance of the proposed algorithms in terms of computation time could be further improved with the help of parallel computations, as the ACS is well suited to parallelization, even on modern GPUs [41]. Acknowledgments: This research was supported in part by the PL-Grid Infrastructure.
Heart attack mortality prediction: an application of machine learning methods The heart is an important organ in the human body, and acute myocardial infarction (AMI) is the leading cause of death in most countries. Researchers are doing a lot of data analysis work to assist doctors in predicting the heart problem. An analysis of the data related to different health problems and its functions can help in predicting the wellness of this organ with a degree of certainty. Our research reported in this paper consists of two main parts. In the first part of the paper, we compare different predictive models of hospital mortality for patients with AMI. All results presented in this part are based on real data of about 603 patients from a hospital in the Czech Republic and about 184 patients from two hospitals in Syria. Although the learned models may be specific to the data, we also draw more general conclusions that we think are generally valid. In the second part of the paper, because the data is incomplete and imbalanced we develop the Chow–Liu and tree-augmented naive Bayesian to deal with that data in better conditions, and compare the quality of these algorithms with others. Introduction An enormous amount of data is being generated every day. Analyzing big datasets is impossible without the help of automated procedures. Machine learning [1] provides these procedures. The most commonly used form of machine learning is supervised classification [2]. Its goal is to learn a mapping from the descriptive features of an object to the set of possible classes, given a set of features-class pairs. Probabilities play a central role in modern machine learning [3]. Probabilistic graphical models (PGMs) [4] have emerged as a general framework for describing and applying probabilistic models. A PGM allows us to efficiently encode a joint distribution over some random variables by making assumptions of conditional independence. A Bayesian network classifier (BNC) [5] is a Bayesian network applied to the classification task. BNCs have many strengths, including good interpretability, the possibility of including prior knowledge about a domain, and competitive predictive performance. They have been successfully applied in practice, e.g., [6][7][8]. Acute myocardial infarction (AMI) is commonly known as heart attack. A heart attack occurs when an artery leading to the heart becomes completely blocked and the heart does not get enough blood or oxygen. Without oxygen, cells in that area of the heart die. AMI is responsible for more than half of deaths in most countries worldwide. Its treatment has a significant socioeconomic impact. One of the main objectives of our research is to design, analyze, and verify a predictive model of hospital mortality based on clinical data about patients. A model that predicts mortality well can be used, for example, for the evaluation of medical care in different hospitals. Evaluation based merely on mortality would not be fair for hospitals where complicated cases are often dealt with. It seems better to measure the quality of health care using the difference between predicted and observed mortality. A related work was published by Krumholz et al. in [9], where the authors analyzed the mortality data in USA hospitals using the logistic regression model. In another work [10], the authors designed and verified a predictive model of hospital mortality in ST elevation myocardial infarction (STEMI). 
In another work [11], the authors analyzed the medical records of patients suffering myocardial infarction from a third world country, Syria, and a developed country, the Czech Republic, and presented an idea of how to deal with incomplete and imbalanced data for tree-augmented naive Bayesian (TAN). Data Our dataset contains data from 787 patients from 2 different countries ( 603 patients from The Czech Republic and 184 from Syria) characterized by 24 variables. The attributes are listed in Table 1. Most records contain missing values, i.e. for most patients only some attribute values are available, and some attributes are not available for Syrian patients, i.e. the data is incomplete. The thirty-day mortality is recorded for all patients; 89% of the patients survived, i.e. the data is imbalanced. In The Czech Republic, the results of blood tests are reported in millimoles per liter of blood. In Syria some of the measurements are reported in milligrams per liter and some in millimoles per liter. We standardized all measurements to the millimoles per liter scale. Machine learning methods Since the explanatory variables may combine their influence and the influence of a variable may be mediated by another variable, it is worth studying the relations of variables altogether. We will do it in two steps: (1) since the mortality prediction is of our primary interest, we will compare how different classifiers are able to predict mortality, (2) to get an overall picture of the relations between all variables, we will learn some Bayesian network models from the collected data, (3) to handle incomplete and imbalanced data, we will provide an idea of how to develop the Chow-Liu [12] and TAN algorithms [5] to be able to process this data. We will work with different versions of data which vary depending on how we treat variables that have more than two states: (1) real valued ordinal variables, (2) discrete valued variables (with five states at most), and (3) binary variables. We will discuss the values' transformation in more detail in the next sections. Ordinal attributes In our data, we have several categorical variables (sometimes also called nominal variables). These are variables that have two or more categories. For example, sex is a categorical variable having two categories (male and female). However, for some machine learning methods we need ordinal attributes which are attributes whose values have an ordering of values that is natural for the quantification of their impact on the class. This is satisfied by all attributes that can take only two values even if they are nominal, e.g. by sex (0 for male, 1 for female), mortality (0 for survived, 1 for died). In our data it seems that the ordinality can be assumed for most real valued attributes, but note that the fact that there might also exist laboratory tests whose values deviate from a normal range in both directions (i.e. both lower and higher values) may increase the mortality. We will refer to the ordinal data as D.ORD. Discrete attributes Discrete variable is a variable that can take values from a finite set. Some classification methods require discrete variables. To get a statistically reliable estimates of model parameters it is advisable to keep the number of values as low as possible while still being able to express the significant relations. We performed discretization of all real-valued attributes. It is not easy to find the optimum number and the values of split points in discretization. 
Fortunately, there exists the Czech National Code Book that classifies numeric laboratory results, with respect to age and sex, into nine groups 1, 2, . . . , 9. The group 5 corresponds to standard values in the standard population. We further reduced the number of states to 5 by joining some groups together. We will refer to data in this form as D.DISCR. Binary attributes Binary data are data whose variables can take on only two possible states, traditionally termed 0 and 1 in accordance with the binary numeral system and Boolean algebra. In our case, all laboratory tests are encoded using two binary attributes. The first attribute takes a value of 0 for the standard values of the test and a value of 1 if the values are decreased. The second attribute takes a value of 0 for the standard values of the test and value of 1 if the values are increased. The age, height, and weight attributes are removed. From the demographic group of attributes only sex and body mass index (BMI) were kept with BMI being encoded using two binary attributes BMI high and BMI low where the BMI greater than the mean takes a value of 1, otherwise it takes a value of 0. We will refer to data in this form as D.BIN. Attribute selection Before learning a model, we preprocess the data. Usually, one of the most useful parts of preprocessing is the attribute selection, where irrelevant attributes are removed. Attribute selection is a process by which we automatically search for the best subset of attributes in our dataset. The notion of "best" is relative to the problem we are trying to solve, but typically means the highest accuracy. Three key benefits of performing attribute selection on our data are: • It reduces overfitting. Less redundant data means lower possibility of making decisions based on a noise. • It improves accuracy. Less misleading data means that modeling accuracy improves. • It reduces training time. Less data means that algorithms train faster. The CfsSubsetEval method of Weka [13] selects the subsets of attributes that are highly correlated with the class while having low intercorrelation. We searched the space of all subsets by a greedy best first search with backtracking. Data D after the application of this attribute selection method will be suffixed as D.AS. Tested classifiers For tests, we used a large subset of classifiers implemented in Weka. Classifiers that performed best in the preliminary tests qualified for the final tests. In the final tests we compared the following classifiers: • Decision tree C4.5 [14]. • Naive Bayes (NB) classifier [16] assumes that the value of a particular explanatory variable (attribute) is independent of the value of any other attribute given the class variable. All BN algorithms implemented in Weka assume that all variables are discrete finite variables. We will use NA in the results of these classification methods. We use the leave-one-out cross-validation as the model evaluation method. It means that N separate times, the classifier is trained on all the data except for one point and a prediction is made for that point. After that, the average error is computed and used to evaluate the model. Prediction quality For each data record classified by a classifier there are possible classification results. Either the classifier got a positive example labeled as positive (in our data the positive example is the patient not survived) or it made a mistake and marked it as negative. 
Conversely, a negative example may have been mislabeled as a positive one, or correctly marked as negative. These four outcomes (true and false positives and negatives) define the standard metrics used below: accuracy, precision, recall, the F-measure, and the ROC curve with its area under the curve (AUC). In other words, the ROC curve shows how many correct positive classifications can be gained as you allow for more and more false positives. As an example, in Figure 1 we report the ROC curve for the naive Bayes classifier with the ordinal attributes. Its area under the curve is 0.782.

Results of experiments

In Table 2, we compare the results of different classifiers on different versions of the data. The C4.5 classifier with D.DISCR has the highest accuracy of 0.942, and its recall and precision are also among the best achieved. However, its area under the ROC curve is very low, only 0.371, which suggests that this classifier cannot be satisfactorily tuned if we want to sacrifice precision for recall or vice versa. The contribution of the attribute selection method (the CfsSubsetEval method of Weka) to the performance of the models was considerable: the accuracy was improved in general, except for C4.5 with D.ORD and LOG.REG with D.ORD and D.BIN. Moreover, the AUC and F-measure were improved in most of the models. The precision, recall, and F-measure values of almost all methods are nevertheless very low because of the imbalanced data, in which we predict the patients who will not survive. In Figure 2, we present the tree structure of the C4.5 classifier learned from the discrete data. It achieved the highest accuracy of all tested classifiers, and its structure is surprisingly simple. If the patient is Czech, then he/she is predicted to survive. If the patient is Syrian, then the LDL cholesterol value should be checked: if it is below 4.78, the patient is predicted to survive; if it is between 4.78 and 6.28, the outcome depends on the Syrian hospital in which he/she is treated (if he/she is treated in the public hospital (SYR1), then he/she dies; if in the private one (SYR2), then he/she survives); and if the LDL cholesterol value is higher than 6.28, then he/she dies, no matter which Syrian hospital he/she is treated in. The simplicity of the C4.5 classifier is in line with the general recommendation that, in order to avoid overfitting the training data, models should be as simple as possible. This is probably the best we can learn from the data, but it most probably oversimplifies the reality; more data would be needed. The highest AUC was achieved by the naive Bayes classifier with the ordinal attributes. The highest value of the F-measure was achieved by BN.K2 with discrete attributes selected by the CfsSubsetEval method of Weka [13]. The learned BN model is actually also a naive Bayes model, see Figure 3. We can conclude that there is no single winner, that is, no classifier that would be the best in terms of all considered criteria. Moreover, the classifiers differ in which variables they consider to be important for AMI mortality prediction.

Dealing with incomplete and imbalanced data

As we can see from Section 2, our dataset contains incomplete and imbalanced data. In [11] we presented an idea of how to extend TAN [5] to handle incomplete and imbalanced data (Algorithms 1 and 2), where the conditional mutual information (CMI) is defined in the standard way as

I(X, Y | Z) = \sum_{x,y,z} f(x,y,z) \log \frac{f(x,y,z)\, f(z)}{f(x,z)\, f(y,z)},

where the sum is only over x, y, z such that f(x,z) > 0 and f(y,z) > 0, and f denotes the relative frequencies estimated from the available records.

Algorithm 1 (TAN for incomplete data, outline; the inner steps of the Procedure CMI, steps 1-10, compute I_p = I(A_i, A_j | C) from the available records D and return I_p):
11: Compute I_p = I(A_i, A_j | C) between each pair of attributes, i ≠ j, using the Procedure CMI.
12: Build a complete undirected graph in which the vertices are the attributes A_1, A_2, ..., A_n. Annotate the weight of the edge connecting A_i to A_j with I_p = I(A_i, A_j | C).
13: Build a maximum weighted spanning tree.
14: Transform the resulting undirected tree into a directed one by choosing a root variable and setting the direction of all edges to be outward from it.
15: Construct a TAN model by adding a vertex labeled by C and adding edges from C to all other nodes in the graph.

In a similar way, we can create a procedure that enables the Chow-Liu algorithm to deal with incomplete data, whereas the normal Chow-Liu algorithm [12] deals only with complete data. The procedure is shown in Algorithm 3, where the mutual information (MI) is defined in the standard way as

I(X, Y) = \sum_{x,y} f(x,y) \log \frac{f(x,y)}{f(x)\, f(y)},

where the sum is only over x, y such that f(x) > 0 and f(y) > 0.

Algorithm 3 (Chow-Liu for incomplete data, outline; the Procedure MI, steps 1-10, computes I_p = I(X, Y) from the available records D and returns I_p):
11: Compute I_p = I(A_i, A_j) between each pair of attributes, i ≠ j, using the Procedure MI.
12: Build a complete undirected graph in which the vertices are the attributes A_1, A_2, ..., A_n. Annotate the weight of the edge connecting A_i to A_j with I_p = I(A_i, A_j).
13: Build a maximum weighted spanning tree.
14: Transform the resulting undirected tree into a directed one by choosing a root variable and setting the direction of all edges to be outward from it.

The idea behind Algorithms 1 and 3 is that using more of the available data makes the estimates of the mutual information and conditional mutual information more reliable.
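To make the frequency-based estimation in the Procedure CMI concrete, the following is a minimal sketch of how I(A_i, A_j | C) could be estimated from incomplete data by using, for each pair of attributes, only the records in which A_i, A_j, and the class C are all observed. It is an illustration of the idea rather than the paper's code; the use of pandas and all names are assumptions.

```python
import math
import pandas as pd

def cmi_from_incomplete(df, x, y, c):
    """Estimate I(x, y | c) from the rows of df where x, y, and c are all
    observed (missing values in other attributes of a record are ignored)."""
    sub = df[[x, y, c]].dropna()
    n = len(sub)
    if n == 0:
        return 0.0
    f_xyz = sub.groupby([x, y, c]).size() / n   # joint relative frequencies
    f_xz = sub.groupby([x, c]).size() / n
    f_yz = sub.groupby([y, c]).size() / n
    f_z = sub.groupby(c).size() / n
    cmi = 0.0
    for (xv, yv, zv), p in f_xyz.items():
        denom = f_xz[(xv, zv)] * f_yz[(yv, zv)]
        if denom > 0:
            cmi += p * math.log(p * f_z[zv] / denom)
    return cmi

# Toy example with a missing value; only the complete (A1, A2, C) rows are used.
data = pd.DataFrame({
    "A1": [0, 1, 1, None, 0],
    "A2": [0, 1, 0, 1, 0],
    "C":  [0, 1, 1, 1, 0],
})
print(cmi_from_incomplete(data, "A1", "A2", "C"))
```

The mutual information I(A_i, A_j) used in Algorithm 3 would be computed analogously, restricting each pairwise estimate to the records in which both attributes are observed.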
Results

We will refer to the versions of TAN and Chow-Liu that deal with incomplete and imbalanced data as TANI and CLI, respectively. We used 10-fold cross-validation to compare how the results change; the results are summarized in Table 3. We compare the results of our methods with those of the TAN implementation in bnclassify (referred to as TB), Chow-Liu [12] (CL), the EM algorithm [19] for Chow-Liu using Hugin (EMCL), the normal TAN [5], the algorithm of [20], which adapts the learning of a tree-augmented naive Bayes classifier to incomplete data based on the EM principle, where any variable can have missing values in the dataset (FL), and the SMOTE algorithm [21] applied to TAN (ST), on two versions of the dataset (binary and discrete attributes). As measures of the prediction quality, we use the log-likelihood (LL) and the AUC. The algorithm TANI with D.BIN achieved the highest AUC (ROC = 0.953) and the highest LL of −2744.4279. The results of Algorithm 1 are better than those of the normal TAN algorithm on both datasets, D.DISCR and D.BIN. ST achieved the second highest LL with D.DISCR (LL = −6043.0785), but its AUC is only 0.802; still, its ROC is better than the ROCs of Algorithm 1 with D.DISCR and of Algorithm 3 with both datasets. We can conclude that TANI is the single winner with D.BIN.

Quality of classifiers tested on artificial data

The data we have is not big enough to obtain very good results. Since TAN [5] is a reliable model that has been tested on many datasets, we decided to use the BN.TAN model, whose results are presented in Table 2, to generate a sequence of datasets of sizes 3000, 5000, 7000, and 10,000, with 10% of values missing completely at random and 26 attributes including the class, under two different types of probability distributions (a basic probability distribution and a binary distribution), in order to test the algorithms (Algorithm 1, TANI, and FL [20]). See Figures 4 and 5. We can see that our Algorithm 1 is better than the others, and that TANI does not seem to perform well on the big binary datasets.

Conclusion

We used medical data on patients with AMI to compare the results of (a) classification models and (b) Bayesian networks modeling the relations found in the data. Although the conclusions might seem to be specific to the data used here, we also report general observations. In principle, the BN learning algorithms are able to discover mediated correlations, since they test not only pairwise independence but also conditional independence given the values of other variables. Bayesian networks are a tool of choice for reasoning under uncertainty with incomplete data. However, Bayesian network structure learning often deals only with complete data. We have proposed here an adaptation of the learning process of the Chow-Liu and TAN algorithms to incomplete and imbalanced datasets. These methods have been successfully tested on our dataset. We have seen that the TANI algorithm is the single winner with D.BIN.
QUALITY ASSURANCE OF BOTTLED DRINKING WATER USING THE HAZARD ANALYSIS CRITICAL CONTROL POINT SYSTEM APPROACH

This paper discusses the design and implementation of a quality assurance system using the Hazard Analysis Critical Control Point (HACCP) approach. People who are not familiar with HACCP often consider it a problematic, complicated system that has to be left to experts. The system focuses on preventive measures, controlling the hazards of the drinking water treatment process to prevent diseases caused by contaminated water and to maintain product quality. This research was conducted at a bottled drinking water company, which needs to commit to producing products that are hygienic and safe for consumption. In this study, laboratory testing of the finished goods was used to determine the conformance quality of the product. The sample test results revealed coliform bacteria in the bottled drinking water product. Finally, this study developed critical control points for daily operations by applying the full set of HACCP principles based on the latest applicable standards.

INTRODUCTION

Water is an essential human need, but the Municipal Waterworks has not been able to fulfil the high demand for drinking water. This has led to the rapid growth of companies producing drinking water in containers in Indonesia, making drinking water accessible to all. Drinking water in containers is massively consumed in our society. It is usually provided in fast-food restaurants, hotels, parties, seminars, and offices. In producing drinking water, companies must be able to assure and maintain its quality to ensure consumer satisfaction. Quality assurance refers to all planned and systematic actions undertaken to make sure that the products produced in the factory fulfil all the requirements of the quality standards. The quality assurance system can be implemented using the Hazard Analysis Critical Control Point (HACCP) system approach before the implementation of the ISO 22000 standard throughout the value chain [1]. The technique focuses on preventive measures to control hazards during the processing of bottled drinking water and thus to prevent contaminated or poisoned water that may cause diseases. HACCP is a systematic quality assurance tool used to identify, assess, and control hazards, focusing on the prevention of the identified hazards [2][3]. The primary key to HACCP is the anticipation of hazards and the identification of control points, prioritizing preventive measures rather than relying too heavily on testing the product at the final stage [2]. According to Mortimore and Wallace [3], people who are not familiar with HACCP often hold the misconception that it is an intricate, complicated system that has to be left to experts and can only be applied by large companies. Good Manufacturing Practice (GMP) and the Sanitation Standard Operating Procedures (SSOP) are the basic requirements for the implementation of HACCP. GMP is an essential prerequisite program for the implementation of HACCP [4,5,6,7,8]. In Indonesia, GMP is implemented and regulated by the Ministerial Regulation of the Ministry of Industry No. 75/M-IND/PER/7/2010 on the guidelines for providing quality processed food [9].
SSOPs are written, objective instructions used by particular industries, specific to each food processing establishment, describing the procedures for performing daily operations in the production, storage, and transportation of the products [5]. They ensure the safety of the production system, which includes scheduling the sanitation procedures, implementing monitoring programs, ensuring that all personnel understand sanitation, providing sanitation training, and promoting sanitation practices in the business unit [10][11]. HACCP is part of the quality management system and provides evidence that the quality assurance requirements are fulfilled [12]. It is important to note that the benefits and the effectiveness in preventing identified hazards depend on two main issues [13], namely the prerequisite programs and the product and process design within the HACCP plan. The Codex Alimentarius formulated the guidelines for implementing the HACCP system as twelve stages of systematic procedures contained in CAC/RCP 1-1969, Rev. 4-2003. The responsible government regulatory agencies are able to efficiently monitor and assess their proper implementation [15][16]. In implementing the HACCP system, a company needs to consider all aspects, including national standards and specific regulations regarding drinking water products, such as SNI 3553:2015 [17], which states that the chlorine residue should not exceed 0.1 mg/l. However, it is also important to note that a previous study by Morris [18] found that even a low chlorine residue is still hazardous to human health. Hence, this paper describes the steps of designing the HACCP system at a bottled drinking water company. Referring to a study conducted by Praveena et al. [19] for conformity verification, this research employs laboratory testing to investigate whether the product quality meets all the requirements.

METHOD

This research was conducted in several steps. The first step was direct observation of the production line, together with interviews and discussions with the production department to obtain information on the existing problems. The second step was data collection, covering both primary and secondary data. The primary data include a flowchart of the potential product hazards and the procedure of the production process. The secondary data include the GMP and SSOP documents, the Indonesian national standards, a general overview of the company, and other data related to the product. In the third step, GMP was assessed using a questionnaire and verified directly in the field. The formulation of the survey follows the standard requirement, namely the Ministerial Regulation of the Ministry of Industry No. 75/M-IND/PER/7/2010. The fourth step was formulating the SSOP questionnaire criteria based on its documentation standard [11]. In the fifth step, samples were tested in a laboratory of the Medical Faculty, Universitas Tarumanagara; this laboratory testing uses simple random sampling to determine the times at which the samples are taken. The sixth step covers the early stages of HACCP system planning, namely HACCP team formation, product description, identification of product usage, formulation of the flowchart, and verification of the flowchart. Next, hazards are analyzed based on the HACCP principles, which involves three stages, namely hazard identification, significant hazard determination, and determination of control measures.
Hazard identification is implemented by observing the production process of the drinking water and through in-depth discussion with the company management. The possibility level and the severity level of a hazard are combined to determine its significance level, as shown in Table 1. The possibility of a hazard, based on the real situation in the field, is categorized as follows: a) low potential of a hazard: unlikely to happen, no historical record; b) medium potential of a hazard: may happen, minimal historical records but it has happened before; c) high potential of a hazard: happens often, based on existing historical records. The severity level of a hazard, based on its impact on human health, is divided into: a) high level: life-threatening hazard; b) moderate level: the hazard may pose risks to human health; c) low level: the hazard causes the food or beverage to become improper for consumption. (In Table 1, L = low, M = medium, and H = high; combinations such as MH and HH are generally considered significant and are carried forward to the process of determining the CCPs.)

A significant hazard is analyzed to figure out whether or not it is regarded as a critical control point (CCP), using a decision support scheme, as can be seen in Figure 1. The hazards identified as CCPs are then analyzed to establish their critical limits, i.e. the tolerance levels that determine whether the product is safe enough for consumption. The next step is determining the monitoring procedure, 4W+1H (what, where, when, who, how), and the corrective actions for each CCP that deviates from the agreed critical limit. The last stage covers the verification and documentation procedures that should be implemented.

RESULTS AND DISCUSSION

Based on the verification of the assessment results of GMP and SSOP, there are some findings in the field that should be fixed immediately by the company to support the HACCP program implementation. Table 2 shows the determination process of the sampling time of the finished goods [21].

Designing the HACCP System

Here are the stages of designing the HACCP system. First stage: forming a HACCP team. During this research, the HACCP team was formed; it involves the factory manager, the supervisor of quality control, and the supervisor of production. Second stage: product description. Table 3 shows the product description containing complete information on the products. Third stage: identification of product usage. This drinking water product can be consumed directly by consumers, without exception and without going through a cooking process. It is sold in minimarkets, supermarkets, and offices, and everyone can consume it. Fourth stage: formulating the flowchart. A flowchart of the production process is formulated to provide an overall overview of the production process carried out by the company, making it easier for people and other institutions to understand the process, as shown in Figure 2. Sixth stage: hazard analysis. Hazards are classified into three criteria, namely biological, chemical, and physical factors, as shown in Table 4, Table 5, and Table 6, respectively. Table 7 shows the processes or items assessed as significant hazards, which should be analyzed further. Seventh stage: determination of critical control points (CCPs) using a decision support scheme. Table 8 shows the steps in deciding whether a significant hazard is a critical control point.

[Table 7. The Results of Significant Hazard [22]: physical (P) hazards were identified for labelling (unfit moulding, messy wrinkles, improper label position; unstandardized printing on the labelling and steamer machines), packing (carton not tightly closed; non-standard carton dimensions), and palleting.]
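To illustrate how the significance matrix of Table 1 and the decision support scheme of Figure 1 could be operationalized, a minimal sketch follows. It is not the company's documented procedure: the exact set of combinations treated as significant and the simplified decision questions are assumptions based on the description above.

```python
# Hedged sketch: the set of significant combinations and the decision questions
# below are assumptions, not the documented scheme of the company.

SIGNIFICANT = {("M", "H"), ("H", "H")}  # (possibility, severity) pairs from Table 1

def significance(possibility: str, severity: str) -> str:
    """Combine the possibility (L/M/H) and severity (L/M/H) of a hazard."""
    level = possibility + severity  # e.g. "MH"
    flag = " (significant)" if (possibility, severity) in SIGNIFICANT else ""
    return level + flag

def is_ccp(control_measure_exists: bool, later_step_controls_hazard: bool) -> bool:
    """Very simplified stand-in for the decision support scheme (Figure 1):
    a significant hazard is treated as a CCP when a control measure exists at
    this step and no later step reduces the hazard to an acceptable level."""
    return control_measure_exists and not later_step_controls_hazard

print(significance("M", "H"))                      # MH (significant)
print(is_ccp(control_measure_exists=True,
             later_step_controls_hazard=False))    # True -> treat as a CCP
```

A table-driven version of this logic is convenient in practice because the significance matrix and the decision questions can then be reviewed and updated by the HACCP team without changing the code.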
Eighth stage: determination of critical limits. As mentioned earlier, the critical limits are designed based on SNI 3553:2015. The critical limits that have been decided are a chlorine residue of 0 mg/L, a finish tank ozone content of 0.4-0.6 mg/L, and an ozone content in the product of 0.1-0.4 mg/L. Ninth and tenth stages: determination of the monitoring procedure and corrective actions. Corrective actions are needed to establish what should be done after a deviation from the critical limit has occurred. The monitoring procedure that can be implemented is: what (chlorine residue), who (the QC department), when (twice a day, before and after breaks), and how (using a chlorine test kit). As a corrective action, if chlorine residue is found in the product, production can be stopped and the activated carbon replaced with new material. Eleventh stage: determination of the verification procedure. The verification stage is implemented by investigating the finished goods to obtain a strong indicator of the implementation of the HACCP system. Twelfth stage: creating documentation. The documentation process starts from the prerequisite program, continues through the implementation of the HACCP program itself, and is maintained onward. The prerequisite program documentation is related to the control points, while the documentation of the HACCP program implementation is closely related to the critical control points.

CONCLUSION

This research shows a deviation in the biological criteria of the drinking water product, because coliform bacteria were found through a laboratory test. The standard guideline used in the company is SNI 01-3553-2006, but the latest standard requirement for mineral water is SNI 3553:2015. Therefore, based on this research finding, we suggest that the company adopt the new guideline, namely SNI 3553:2015. This study developed the critical control points that should be controlled by the company, such as the carbon filter and the finish tank, by implementing the HACCP principles. In addition, quality control is also executed to anticipate all possible hazards that may occur, in the form of control points. The limitation of this study is the subjective judgment by management in determining the significance level of a hazard.
Negative chronotropic and inotropic effects of U-92032, a novel T-type Ca2+ channel blocker, on the isolated, blood-perfused dog atrium. We investigated the effects of U-92032 ((7-((bis-4-fluorophenyl)methyl)-1-piperazinyl)-2-2(2-hydroxyethylamino)-4-(1-methylethyl)-2,4,6-cycloheptatrien-1-one), a novel T-type Ca2+ channel blocker, on the sinus rate and atrial contractile force in the isolated, blood-perfused atrium of the dog. U-92032 (1 to 300 nmol) induced negative chronotropic and inotropic responses in a dose-dependent manner, and the percentage decrease in sinus rate was less than that in atrial contractile force. Atropine did not affect the negative responses to U-92032. These results suggest that U-92032, a T-type Ca2+ channel blocker, simultaneously decreases the sinus rate and atrial force, as do L-type Ca2+ channel blockers, in the isolated dog atrium. Keywords: U-92032, T-type Ca2+ channel, L-type Ca2+ channel

The spontaneous depolarization of sinoatrial (SA) nodal pacemaker cells is mainly caused by the activation of T-type (ICa-T) and L-type (ICa-L) Ca2+ currents, the hyperpolarization-activated inward current (If) and the delayed rectifier K+ current (IK) (1). Although there are many reports on the cardiac effects of ICa-L, If and IK blockers in mammalian cardiac tissues, there is little information on the effects of ICa-T blockers on the heart. T-type Ca2+ channels are found in cardiac tissues: atrial myocytes (2), SA nodal cells (3), Purkinje fiber cells (4) and ventricular myocytes (5). Inorganic Ca2+ channel blockers such as divalent cations show little preferential block of T-type versus L-type Ca2+ channels. Likewise, most organic Ca2+ channel blockers do not offer better selectivity. Although tetramethrin (3) and niguldipine (6) blocked T-type Ca2+ channels of cardiac cells preferentially, their effects on the two types of Ca2+ channels still overlapped. It has recently been reported that U-92032 at a low concentration (1 µM) selectively blocked the T-type Ca2+ channels, but at a high concentration (10 µM) it blocked both T-type and L-type Ca2+ channels in guinea pig atrial cells (7). Therefore, to elucidate the role of ICa-T in the regulation of heart rate and myocardial contractility, we studied the effects of U-92032, a novel ICa-T blocker, on SA nodal pacemaker activity and myocardial contractility in the isolated, blood-perfused dog atrium. An isolated right atrial preparation was perfused with heparinized arterial blood from an anesthetized support dog. The details of the preparation have been described in a previous paper (8). Support dogs weighing 12 to 23 kg were anesthetized with sodium pentobarbital (30 mg/kg, i.v.)
and ventilated artificially through a cuffed tracheal tube with room air by using a respirator (model 607; Harvard Apparatus Co., Inc., Millis, MA, USA). Sodium heparin (500 USP units/kg, i.v.) was administered to each dog at the beginning of the perfusion of the isolated atrial preparation, and 200 USP units/kg were given each hour thereafter. Isolated right atrial preparations were obtained from other mongrel dogs weighing from 9 to 13 kg. Each dog was anesthetized with sodium pentobarbital (30 mg/kg, i.v.). The right atrium was excised and immersed in cold Ringer's solution. The sinus node artery of the isolated right atrium was cannulated, and each preparation was perfused with heparinized blood from the carotid artery of the anesthetized support dog with the aid of a peristaltic pump (model 1210, Harvard Apparatus). A pneumatic resistance was placed in parallel with the perfusion system so that the perfusion pressure could be maintained constant at 100 mmHg. The rate of blood flow to the atrial preparation was 2 to 8 ml/min. The venous effluent from the preparation was led to a collecting funnel and returned to the support dog through the external jugular vein. The preparation was anchored to a stainless steel bar and placed in a cup-shaped glass container kept at 37°C. The upper part of the cardiac preparation was connected to a force-displacement transducer (AP 620G; Nihon Kohden, Tokyo) by a silk thread. The cardiac tissue was usually stretched to a resting tension of 2 g. Isometric tension was recorded on a thermo-writing rectigraph (RTA-1200, Nihon Kohden). A pair of bipolar silver electrodes was brought into contact with the epicardial surface of the isolated preparation in order to record the atrial electrogram. The atrial rate was derived from the electrogram with a cardiotachometer (AT-600G, Nihon Kohden). The femoral arterial blood pressure of the support dog, the heart rate derived from lead II of the ECG, and the rate of blood flow to the atrial preparation were monitored simultaneously. We examined the effects of U-92032 at doses of 1-300 nmol on SA nodal pacemaker activity and atrial contractility in 9 isolated right atria. We also studied the effects of atropine (10 nmol) on the responses to U-92032 (100 nmol). When U-92032 at a dose of 1-300 nmol was injected into the sinus node artery of the isolated, blood-perfused right atrium of the dog, it induced negative chronotropic and inotropic effects dose-dependently (P < 0.001) in 9 isolated, perfused dog atria (Fig. 1). The percentage changes in sinus rate in response to U-92032 were smaller than those in atrial contractile force; e.g., U-92032 at a dose of 300 nmol decreased the sinus rate and atrial contractile force by 8.7±2.6% and 28.0±4.4%, respectively. The threshold dose for the chronotropic and inotropic responses was 30 nmol. Atropine at a dose of 10 nmol did not affect the negative chronotropic and inotropic responses to U-92032 at 100 nmol in 5 isolated, perfused dog atria, whereas it abolished the negative chronotropic and inotropic responses to acetylcholine (1 or 3 nmol). In the present study, we demonstrated that U-92032, an ICa-T blocker, decreased the sinus rate and atrial contractile force, and that the negative cardiac responses to U-92032 were not inhibited by atropine in the isolated, blood-perfused dog atrium. These results suggest that U-92032, an ICa-T blocker, does not selectively decrease the sinus rate without affecting the myocardial contractile force in the dog heart.
It has been suggested that ICa-T has an important role in the generation of the pacemaker potential in SA nodal cells (1,3), although ICa-T has also been found in mammalian atrial myocardial cells (2), Purkinje fibers (4) and ventricular myocytes (5). The characteristics of ICa-T in each cardiac tissue are similar. U-92032 at low concentrations (0.1-1 µM) blocked ICa-T selectively in guinea pig atrial cells, but at high concentrations (more than 1 µM) it blocked both ICa-T and ICa-L (7). Additionally, U-92032 at 6 µM also selectively blocked ICa-T, but not nifedipine-sensitive non-inactivating currents, in a mouse neuronal cell line, N1E-115 cells (9). Thus, U-92032 at a low dose would be expected to work as an ICa-T blocker. Because of the limited solubility of U-92032, we tested the cardiac effects of U-92032 at a dose range of 1-300 nmol in the isolated, blood-perfused dog atrium. Because we injected the drug into the sinus node artery of the isolated dog atrium, it is difficult to determine the drug concentration in the perfused blood. However, our injected doses of U-92032 roughly correspond to 1-100 µM. The threshold doses of U-92032 for the negative chronotropic and inotropic effects were not different, and the percentage decreases in sinus rate induced by U-92032 at low doses were less than those in atrial contractile force (Fig. 1). Therefore, it is conceivable that the inhibition of ICa-T by U-92032 at the doses used in the present study does not decrease the sinus rate selectively in the isolated, perfused dog atrium. However, the possibility of inhibition of ICa-L by U-92032 cannot be completely excluded in the present study. The ICa-L blockers verapamil and nicardipine decreased both the sinus rate and the atrial contractile force in the isolated, perfused dog atrium (10). Thus, to define the selective control of the sinus rate by the inhibition of ICa-T, further studies using other ICa-T blockers, including tetramethrin and niguldipine, are needed. To control the sinus rate, we have previously investigated the cardiac effects of IK and If blockers, in addition to ICa-L blockers, on isolated, blood-perfused dog heart preparations. Zatebradine, an If blocker, unlike IK and ICa-L blockers, directly and selectively decreased the sinus rate without decreasing the atrial contractile force and attenuated the increase in sinus rate induced by adrenergic interventions (11,12). Thus, because of the nonselective depression by U-92032 of the chronotropic and inotropic effects, U-92032 may not be useful as a bradycardic agent in the heart.
Homophilic Interactions of Tetraspanin CD151 Up-regulate Motility and Matrix Metalloproteinase-9 Expression of Human Melanoma Cells through Adhesion-dependent c-Jun Activation Signaling Pathways* The tetraspanin membrane protein CD151 has been suggested to regulate cancer invasion and metastasis by initiating signaling events. The CD151-mediated signaling pathways involved in this regulation remain to be revealed. In this study, we found that stable transfection of CD151 into MelJuSo human melanoma cells lacking CD151 expression significantly increased cell motility, matrix metalloproteinase-9 (MMP-9) expression, and invasiveness. The enhancement of cell motility and MMP-9 expression by CD151 overexpression was abrogated by inhibitors and small interfering RNAs targeted to focal adhesion kinase (FAK), Src, p38 MAPK, and JNK, suggesting an essential role of these signaling components in CD151 signaling pathways. Also, CD151-induced MMP-9 expression was shown to be mediated by c-Jun binding to AP-1 sites in the MMP-9 gene promoter, indicating AP-1 activation by CD151 signaling pathways. Meanwhile, CD151 was found to be associated with α3β1 and α6β1 integrins in MelJuSo cells, and activation of associated integrins was a prerequisite for CD151-stimulated MMP-9 expression and activation of FAK, Src, p38 MAPK, JNK, and c-Jun. Furthermore, CD151 on one cell was shown to bind to neighboring cells expressing CD151, suggesting that CD151 is a homophilic interacting protein. The homophilic interactions of CD151 increased motility and MMP-9 expression of CD151-transfected MelJuSo cells, along with FAK-, Src-, p38 MAPK-, and JNK-mediated activation of c-Jun in an adhesion-dependent manner. Furthermore, C8161 melanoma cells with endogenous CD151 were also shown to respond to homophilic CD151 interactions for the induction of adhesion-dependent activation of FAK, Src, and c-Jun. These results suggest that homophilic interactions of CD151 stimulate integrin-dependent signaling to c-Jun through FAK-Src-MAPKs pathways in human melanoma cells, leading to enhanced cell motility and MMP-9 expression. that CD151 association increases the binding activity of integrin ␣ 3 ␤ 1 to laminin through stabilizing its activated conformation (20). It was also reported that CD151 regulates platelet function by modulating outside-in signaling events of the major platelet integrin ␣ IIb ␤ 3 (21). In addition to integrin association, CD151 associates with phosphatidylinositol 4-kinase and protein kinase C on the cytosolic surface, thereby linking integrins to these signaling molecules (8,22). CD151 has also been shown to regulate expression of a protein-tyrosine phosphatase, PTP, and its recruitment to cell-cell junctions (19) and to inhibit adhesion-dependent activation of Ras (23). It thus appears that the intracellular signaling pathways initiated by integrin binding to the extracellular matrix could be altered by the integrin-associated tetraspanin CD151. Taken together, CD151 is thought to participate in adhesion-dependent transmembrane signaling pathways by modulating integrin activity and modifying integrin-mediated outside-in signaling pathways as well. However, the modified integrin signaling pathways by which CD151 manifests its activity have not been established. In this report, we investigated the functional effects of CD151 expression on cellular activities related to cancer invasion and metastasis and then attempted to identify CD151-mediated signaling pathways for the induction of such cellular functions. 
We showed that CD151 increases motility and MMP 2 -9 expression of human melanoma cells through adhesiondependent c-Jun activation-signaling pathways. Furthermore, we established that these signaling pathways are initiated not only by the matrix binding of integrin molecules but also by homophilic interactions between CD151 proteins on the surface of neighboring cells. Finally, detailed analysis of signaling events indicated that the CD151-␣ 3 ␤ 1 /␣ 6 ␤ 1 integrin complexes increase c-Jun activity through the activation of FAK, Src, p38 MAPK, and JNK. Transfection of CD151 cDNA and Selection of Stable Clones-Full-length CD151 cDNA was subcloned into the EcoRI/KpnI sites of a pcDNA3 vector (Invitrogen), downstream of a cytomegalovirus promoter. The CD151 cDNA expression construct was transfected into MelJuSo human melanoma cells by using Lipofectamine (Invitrogen) according to the manufacturer's instructions. pcDNA3 vector only was also transfected as a control. Neomycin-resistant clones were isolated by growing the cells in DMEM/F-12 containing 10% fetal bovine serum and 0.5 mg/ml G418 (Invitrogen). Stable transfectant clones were characterized by immunoblotting and flow cytometric analyses for their expression levels of CD151 protein. Reverse Transcription-PCR Analysis-Total cellular RNA was purified from the cultured cells using Trizol reagent (Invitrogen) according to the manufacturer's protocol. First strand cDNA synthesis was performed with 1 g of total RNA using a cDNA synthesis kit (Promega, Madison, WI). For PCR amplification, 5Ј-aaggtaccaggatgggtgagttcaacgag-3Ј was used as the sense primer, and 5Ј-atgaattcggtcagtagtgctccagcttg-3Ј was used as the antisense primer. This primer pair amplifies a 760-bp fragment of CD151 cDNA. The reaction mixture was subjected to 25 PCR amplification cycles of 60 s at 94°C, 90 s at 55°C, and 90 s at 72°C. ␤-Actin amplification was used as an internal PCR control (27) with 5Ј-gatatcgccgcgctcgtcgtcgac-3Ј as the sense primer and 5Ј-caggaaggaaggctggaagagtgc-3Ј as the antisense primer. The PCR products were visualized using ethidium bromide in 1% agarose gel. 2 Immunoblotting Analysis-Cells were washed, harvested, and lysed in lysis buffer (50 mM Tris-HCl, pH 7.5, 150 mM NaCl, 2 mM EDTA, 1% Nonidet P-40, 1 mM phenylmethylsulfonyl fluoride, 10 g/ml aprotinin, 20 g/ml leupeptin, and 2 mM benzimidine) on ice for 10 min. For phosphoprotein analysis, cell lysis buffer was supplemented with phosphatase inhibitors (1 mM sodium orthovanadate, 1 mM NaF, and 10 mM ␤-glycerophosphate). After centrifugation at 15,000 ϫ g for 10 min, the supernatants were collected and quantified for protein concentration by Bradford assay. Equal amounts of protein per lane were separated onto 10% SDS-polyacrylamide gel and transferred to an Immobilon-P (Millipore Corp., Bedford, MA) membrane. The membrane was blocked in 5% skim milk for 2 h and then incubated with a specific antibody for 2 h. After washing, the membrane was incubated with a secondary antibody conjugated with horseradish peroxidase. After final washes, the membrane was developed using enhanced chemiluminescence reagents (Amersham Biosciences). Flow Cytometric Analysis-Cells were incubated with 10 g/ml anti-CD151 monoclonal antibody (mAb) for 30 min, washed with cold PBS, and then incubated with saturating concentrations of fluorescein isothiocyanate-conjugated goat antimouse IgG (PharMingen) for 30 min at 4°C. After washing with PBS, the cells were fixed with 2% formaldehyde in PBS. 
Cell surface immunofluorescence was analyzed by flow cytometry performed on a FACScan (BD Biosciences). Immunoprecipitation-Cells were lysed in immunoprecipitation buffer (25 mM Hepes, pH 7.5, 150 mM NaCl, 5 mM MgCl 2 ) supplemented with 1 mM phenylmethylsulfonyl fluoride, 10 g/ml aprotinin, 20 g/ml leupeptin, and 1% Brij 98 or 1% Triton X-100 for 2 h at 4°C. The lysate was centrifuged (16,000 ϫ g, 15 min), and the supernatant was precleared with a combination of protein A-and protein G-agarose (Amersham Biosciences) precoated with normal mouse IgG for 2 h at 4°C. After preclearing, the lysate was incubated with a specific antibody coupled to the protein A/G-agarose beads for 2 h at 4°C. Immune complexes collected on the beads were then washed four times with immunoprecipitation buffer and resolved by SDS-PAGE. Proteins were detected by immunoblotting analysis using specific antibodies. Invasion Assay into Matrigel-24-well Transwell chamber inserts (Corning Costar, Cambridge, MA) with 8-m porosity polycarbonate filters were precoated with 80 g of basement membrane Matrigel (BD Biosciences) onto the upper surface and with 20 g of gelatin onto the lower surface. Culture supernatant of NIH3T3 fibroblasts in DMEM supplemented with 10% fetal bovine serum was placed in the lower well. MelJuSo cells suspended in DMEM/F-12 medium containing 0.1% fetal bovine serum were added to the upper chambers (2 ϫ 10 4 cells/well) and incubated for 24 h at 37°C in 5% CO 2 . Cells were fixed and stained with hematoxylin and eosin. Noninvading cells on the upper surface of the filter were removed by wiping out with a cotton swab, and the filter was excised and mounted on a microscope slide. Invasiveness was quantified by counting cells on the lower surface of the filter. Wound-healing Migration Assay-For the measurement of cell migration during wound healing, cells (5 ϫ 10 5 ) were seeded in individual wells of a 24-well culture plate. When the cells reached a confluent state, cell layers were wounded with a plastic micropipette tip having a large orifice. The medium and debris were aspirated away and replaced by 2 ml of fresh serum-free medium. Cells were photographed every 12 h after wounding by phase-contrast microscopy. For evaluation of "wound closure," five randomly selected points along each wound were marked, and the horizontal distance of migrating cells from the initial wound was measured. Gelatin Zymography-Type IV collagenase activities present in conditioned medium were visualized by electrophoresis on gelatin-containing polyacrylamide gel as previously described (28). Briefly, conditioned medium from cells cultured in serumfree medium was mixed 3:1 with substrate gel sample buffer (40% (v/v) glycerol, 0.25 M Tris-HCl, pH 6.8, and 0.1% bromphenol blue) and loaded without boiling onto 10% SDS-polyacrylamide gel containing type 1 gelatin (1.5 mg/ml). After electrophoresis at 4°C, the gel was soaked in 2.5% Triton X-100 with gentle shaking for 30 min with one change of detergent solution. The gel was rinsed and incubated for 24 h at 37°C in substrate buffer (50 mM Tris-HCl, pH 7.5, 5 mM CaCl 2 , and 0.02% NaN 3 ). Following incubation, the gel was stained with 0.05% Coomassie Brilliant Blue G-250 and destained in 10% acetic acid and 20% methanol. Cell Aggregation Assay-L cells transfected with CD151 or vector alone were washed with PBS containing 2 mM EDTA and rendered into single cell suspension by seven gentle passes through a 22-gauge needle after scraping. 
After washing with Puck's saline (5 mM KCl, 140 mM NaCl, 8 mM NaHCO 3 , pH 7.4), suspensions of single cells (1 ϫ 10 5 cells/ ml) were seeded into individual wells of a 24-well culture plate and incubated in 5% CO 2 at 37°C with agitation at 70 -80 rpm using an orbital shaker. Photographs were taken every 15 min after incubation under a phase-contrast microscope on three predetermined fields, and both the total cell number (A) and the number of cells remaining as single cells (B) were counted. The results were expressed as the percentage of cells that formed aggregates as follows: (A Ϫ B)/A ϫ 100 (%). In some experiments, the transfectants were preincubated with antibody (20 g/ml) and then washed free of unbound antibody before incubation. In experiments to determine whether aggregation was homophilic, distinct populations of cells were prelabeled with 5-and 6-CFSE (carboxyfluorescein diacetate succinimidyl ester) (Molecular Probes, Inc., Eugene, OR) before suspension. For these experiments, phase and fluorescent images of the same field were photographed after a 30-min incubation with orbital shaking. Promoter Assay-A 1305-bp DNA fragment (Ϫ1285 to ϩ20), corresponding to the promoter of the human MMP-9 gene (29,30), was generously provided by Dr. Seung-Taek Lee (Yonsei University, Korea) (31). For mt-AP-1 of the MMP-9 gene promoter, in which distal and proximal AP-1 binding sites (Ϫ533 to Ϫ527 and Ϫ79 to Ϫ73, respectively) were destroyed, 5Ј-TGAGTCA-3Ј was changed to 5Ј-TGAGTtg-3Ј (underlined lowercase letters indicate the mutated bases) by the QuikChange II site-directed mutagenesis kit (Stratagene, La Jolla, CA). For mt-NF-B of the MMP-9 promoter, in which a NF-B binding site (Ϫ600 to Ϫ590) was destroyed, 5Ј-GGAATTCCCC-3Ј was mutated to 5Ј-GatcgatCCC-3Ј. After subcloning the mutant MMP-9 promoters into a promoterless luciferase expression vector, pGL3 (Promega), the corresponding mutations in the constructs were verified by DNA sequencing. The pGL3 vector containing wild-type or mutant MMP-9 promoter was transfected into MelJuSo cells by using Lipofectamine. Luciferase activity in cell lysate was measured using the Promega luciferase assay system according to the instructions of the manufacturer. To normalize luciferase activity, each of the pGL3 vectors was co-transfected with a pRL-SV40⌬Enh, which expresses Renilla luciferase by an enhancerless SV40 promoter (31). Electrophoretic Mobility Shift Assay-Cells were incubated with serum-free medium for 4 h, and nuclear extracts were prepared as previously described (32). Double-stranded oligonucleotide probes corresponding to the putative AP-1 binding site (Ϫ86 to Ϫ66; 5Ј-TGACCCCTGAGTCAGCACTTG-3Ј; the AP-1 recognition sequence is underlined) and the putative NF-B site (Ϫ607 to Ϫ582; 5Ј-GCCCCGTGGAATTCCCCC-AAATCCTG-3Ј; the NF-B recognition sequence is underlined) in the proximal MMP-9 promoter sequences were labeled with [␥-32 P]ATP using T4 polynucleotide kinase and purified by a G-50 Sephadex column. The 32 P-labeled probes (ϳ40,000 cpm) were then incubated with nuclear extracts (10 g of protein) for 20 min at room temperature. Samples were resolved on native 5% polyacrylamide gel, and the gel was dried and subjected to autoradiography. Specificity for binding of AP-1 factors and NF-B to the corresponding sequences of the MMP-9 promoter was confirmed by using cold competitors having typical AP-1 and NF-B binding sequences (Promega), respectively. 
Detergent-free Purification of Membrane Fractions-Mock and CD151 transfectant cells were washed with ice-cold PBS and then scraped into buffer A (20 mM Tris-HCl, pH 7.5, 2 mM EDTA, 1 mM EGTA, 1 mM phenylmethylsulfonyl fluoride, 10 µg/ml aprotinin, 20 µg/ml leupeptin, and 2 mM benzamidine). The cells were homogenized using a Dounce homogenizer (20 strokes). A postnuclear supernatant was obtained by centrifugation (2500 × g, 10 min, 4°C), adjusted to 10% sucrose, and loaded onto a 30% sucrose cushion in an ultracentrifuge tube. After centrifugation for 60 min at 150,000 × g in a T-1270 rotor of a tabletop ultracentrifuge (Beckman Instruments), a light-scattering band confined to the 10-30% sucrose interface was collected and stored at -70°C until use. In Vitro Kinase Assays-Cellular proteins (200 µg) were incubated with anti-FAK, anti-Src, or anti-paxillin Abs and immunoprecipitated using protein A/G-agarose beads. Immune complexes collected on the beads were washed three times with immunoprecipitation buffer and once with kinase buffer (20 mM PIPES, pH 7.2, 10 mM MgCl2, 1 mM dithiothreitol) and added to an in vitro kinase reaction mixture containing 5 µg of acid-denatured enolase and 10 µCi of [γ-32P]ATP (33). The reaction was incubated at 30°C for 30 min and then stopped by boiling with SDS-sample buffer. After electrophoresis on a 10% polyacrylamide gel, the radioactive proteins were visualized by autoradiography.

CD151 Increases Motility, MMP-9 Expression, and Invasiveness of Human Melanoma Cells-First, we examined CD151 expression levels in two human metastatic melanoma cell lines, C8161 and MelJuSo, of which C8161 was shown to have a higher metastatic ability than MelJuSo (24,25). A 760-bp PCR product and a 29-kDa protein band were detected in C8161, but not in MelJuSo cells, by reverse transcription-PCR analysis using CD151 cDNA-specific primers and immunoblotting analysis using anti-CD151 mAb, respectively (Fig. 1, A and B), indicating that CD151 is differentially expressed in a cell line-specific manner among human melanoma cells. The tetraspanins CD9 and CD63, which were previously reported to suppress melanoma metastasis (28, 34-36), exhibited similar protein amounts between these two human melanoma cell lines (Fig. 1B).

[Figure 1 legend (partial): the cell lines were analyzed by reverse transcription-PCR using CD151 cDNA-specific primers and by immunoblotting using mAbs specific to each protein; β-actin mRNA and actin protein from each cell line were also analyzed to control for equal amounts of mRNAs and proteins. C, the stable clones of CD151 cDNA-transfected MelJuSo cells were examined for CD151 expression by immunoblotting analysis using anti-CD151 mAb. D, cell surface expression levels of CD151 protein in the CD151 transfectant clones were analyzed by flow cytometry using anti-CD151 mAb.]
To investigate the functional effect of CD151 expression on cellular activities related to the cancer metastasis process, the in vitro invasive efficacy of each transfectant clone was determined by Boyden chamber assay using Matrigel. CD151 transfectant clones, CD151/M-76 and CD151/M-77, showed a 3-fold higher invasiveness than the mock transfectant (Fig. 2A), suggesting that CD151 expression increases the invasive ability of melanoma cells. We further examined the migrating ability of CD151 transfectant clones into wounded spaces on culture plates. Both CD151 transfectant clones exhibited a 3-fold higher migrating ability than the mock transfectant (Fig. 2B). The basement membrane-degrading ability of the transfectant clones was also examined by measuring gelatinase activity in the culture supernatant. Among the two types of gelatinases, MMP-2 and MMP-9, the enzyme activity and protein level of MMP-9 were much higher in CD151 transfectant clones than in the mock transfectant, whereas the activity and expression of MMP-2 were not affected by CD151 expression (Fig. 2, C and D). To examine whether increased MMP-9 activity by CD151 affects cell motility, siRNA targeted to pro-MMP-9 was transfected into CD151 transfectant cells. As a result, knockdown of MMP-9 suppressed the stimulating effect of CD151 on the motility of MelJuSo cells, indicating that cell motility induced by CD151 involves MMP-9 activity (Fig. 2, E and F). Since cell migration and gelatinase secretion are essential for the invasion process, it seems likely that the CD151-induced invasiveness is in part due to the positive effect of CD151 expression on motility and MMP-9 expression of melanoma cells. [Figure 2 legend: CD151 expression enhances invasiveness, motility, and MMP-9 production of MelJuSo melanoma cells. A, each transfectant clone (2 × 10⁴ cells) was seeded into a Transwell chamber insert equipped with a Matrigel-coated filter. After 12 h of incubation, cells on the lower surface of the filter were stained with Gill's hematoxylin and counted. Results are means ± S.E. of triplicate cultures. B, confluent cell cultures were wounded with plastic micropipette tips. Cells were photographed at 48 h after wounding by phase-contrast microscopy, and the measurement of cell migration during wound healing was performed as described under "Experimental Procedures." Results are means ± S.D. of triplicate cultures. The asterisks indicate that the differences are statistically significant (* and **, p < 0.01 versus mock transfectant, Student's t test). C, conditioned media obtained from cells cultured in serum-free medium for 3 days were electrophoresed on a 10% SDS-polyacrylamide gel containing type I gelatin. After the removal of SDS, the gels were incubated in gelatinase substrate buffer and then visualized by Coomassie staining. D, MMP-2 and MMP-9 protein levels in the conditioned media of the cell cultures were assessed by immunoblotting analyses using anti-MMP-2 and anti-MMP-9 mAbs, respectively. E, a CD151 transfectant clone of MelJuSo cells, CD151/M-77, was transfected with siRNA targeted to pro-MMP-9, and pro-MMP protein levels in cell lysate were analyzed by immunoblotting using MMP-9 mAb at 48 h after transfection. F, following siRNA transfection, cell migration was measured at 48 h after wounding in a similar fashion as in B. A dagger indicates that the differences are statistically significant (†, p < 0.03 versus control siRNA-transfected cells, Student's t test).] Functional Involvement of FAK, Src, p38 MAPK, and JNK in CD151-stimulated Motility and MMP-9 Expression-To identify the signaling molecules involved in CD151-induced cell motility and MMP-9 expression, CD151 transfectants were analyzed in the presence of several inhibitors of signal transduction mediators. Among the inhibitors tested, the inhibitors including PP1 (a Src kinase family inhibitor), SB203580 (a p38 MAPK inhibitor), and SP600125 (a JNK inhibitor) suppressed the motility and MMP-9 expression of CD151 transfectant clones close to the levels of the mock transfectant (data not shown). To verify the participation of FAK, Src, p38 MAPK, and JNK in CD151 signaling pathway(s) for the induction of cell motility and MMP-9 expression, siRNAs targeted to FAK, Src, p38 MAPK, and JNK were transfected into CD151 transfectant cells. Protein levels of FAK, Src, p38 MAPK, and JNK were effectively knocked down by each specific siRNA (Fig. 3, A, D, G, and J). All four siRNA types employed inhibited the migrating ability of CD151 transfectant cells in a dose-dependent manner (Fig. 3, B, E, H, and K). Also, all of the FAK, Src, p38 MAPK, and JNK siRNA-transfected cells exhibited significantly decreased activities and expression levels of MMP-9 compared with control siRNA-transfected cells retaining endogenous levels of these signaling molecules (Fig. 3, C, F, I, and L). It thus appears that FAK, Src, p38 MAPK, and JNK are functionally involved in CD151 signaling pathway(s) leading to increased motility and MMP-9 expression of MelJuSo melanoma cells. Induction of MMP-9 Expression by CD151 Is Mediated by Activation of AP-1 Factors-Since MMP-9 appeared to be a target gene up-regulated by CD151 signaling pathway(s) in MelJuSo melanoma cells, we investigated the transcriptional regulation mode of the MMP-9 gene by using several mutants of its 5′-proximal promoter region. When a reporter vector containing a wild-type promoter of the MMP-9 gene was transiently transfected into MelJuSo cells, the CD151 transfectant cells showed about a 20-fold higher luciferase activity than the mock transfectant cells (Fig. 4A). In contrast to the wild-type promoter, the promoters having mutations at the AP-1 binding sites (mt-5′-AP-1 and mt-3′-AP-1) did not respond to CD151 for their activities for reporter gene expression. However, mutation of the NF-κB binding site did not abolish the stimulating effect of CD151 on MMP-9 promoter activity. To determine whether CD151 expression increases DNA binding activity of AP-1 transcriptional factors, we compared the binding of nuclear proteins to a putative AP-1 binding site (−79 to −73) of the MMP-9 promoter between mock and CD151 transfectant cells. As shown in Fig. 4B, DNA binding activity of AP-1 factors in CD151 transfectant cells was more significant than that in mock transfectant cells. Moreover, incubation with anti-c-Jun antibody resulted in a partial supershift of the AP-1/DNA complex with the gel shift assay, indicating that c-Jun participates in the formation of the AP-1/DNA complex. Thus, these data indicate that CD151 increases MMP-9 gene transcription by activating AP-1 transcription factors, including c-Jun.
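The reporter comparison behind Fig. 4A reduces to a simple normalisation: firefly luciferase counts are divided by the co-transfected Renilla signal, and each construct is then expressed relative to the wild-type promoter in mock cells. A minimal sketch is shown below; the readings are invented placeholders chosen only to mirror the qualitative pattern described above, not the published values.

# Hypothetical (firefly, Renilla) luminometer readings in arbitrary units
readings = {
    ("mock",  "wt"):      (1200, 950),
    ("CD151", "wt"):      (26000, 1050),
    ("mock",  "mt-AP-1"): (1100, 900),
    ("CD151", "mt-AP-1"): (1300, 1000),
}

def normalized(firefly, renilla):
    """Firefly signal normalised to the co-transfected Renilla control."""
    return firefly / renilla

baseline = normalized(*readings[("mock", "wt")])   # wild-type promoter in mock cells
for (cells, promoter), (firefly, renilla) in readings.items():
    fold = normalized(firefly, renilla) / baseline
    print(f"{cells:5s} {promoter:8s} relative activity = {fold:5.1f}x")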
CD151 Signaling Pathway(s) Depends on Activation of the Associated ␣ 3 ␤ 1 and ␣ 6 ␤ 1 Integrins-CD151 has been reported to associate with various types of integrins and, in particular, forms stable complexes with ␣ 3 ␤ 1 and ␣ 6 ␤ 1 integrins in many types of cells (4,6,8,17,18). To determine whether ␣ 3 ␤ 1 and ␣ 6 ␤ 1 integrins are also associated with CD151 in MelJuSo melanoma cells, the CD151transfected MelJuSo cells were lysed with the nonionic detergent Brij 98, a mild lysis condition preserving tetraspanin-integrin interactions, and the cell lysates were immunoprecipitated with anti-CD151 antibody. As a result, ␣ 3 , ␣ 6 , and ␤ 1 integrin subunits were detected in the CD151 immunoprecipitate of CD151 transfectant cells but not in mock transfectant cells (Fig. 5A), although the protein level of each integrin subunit was not different between CD151 and mock transfectant cells (Fig. 5B). This result indicates that CD151 can form complexes with ␣ 3 ␤ 1 and ␣ 6 ␤ 1 integrins in MelJuSo cells. Since ␣ 3 ␤ 1 and ␣ 6 ␤ 1 integrins are known to be receptors of laminin/fibronectin and laminin, respectively, we examined whether CD151-stimulated MMP-9 expression is dependent on cell adhesion to extracellular matrix components, such as laminin and fibronectin. As illustrated in Fig. 5C, the stimulating effect of CD151 on MMP-9 expression became more prominent when the cells were attached to laminin, fibronectin, and laminin-rich Matrigel. However, CD151 did not exert its inducing activity for MMP-9 expression when the cells were plated on poly-(L)-lysine, which does not activate integrins. In CD151-deficient mock transfectant cells, a slight increase in MMP-9 expression was observed by cell adhesion to laminin and fibronectin, suggesting that integrin activation alone is not sufficient to induce MMP-9 expression. Thus, CD151 appears to cooperate with associated integrins to induce MMP-9 expression in melanoma cells. To examine how associated integrins activated by their binding to extracellular matrix modulate CD151-dependent signaling pathway(s), the activation status of the CD151 signaling mediators, which were demonstrated in Fig. 3, were compared between mock and CD151 transfectant cells a short time after plating the cells on laminin. CD151 expression significantly elevated phosphorylation-dependent activation of signaling components, such as FAK, Src, p38 MAPK, and JNK dependently of cell adhesion to laminin (Fig. 5D). CD151-mediated phosphorylation of MAPKAPK-2 and c-Jun, downstream effectors of p38 MAPK and JNK, respectively, was also found to be dependent on cell adhesion to laminin. However, adhesion events without integrin activation, such as cell binding to poly-(L)-lysine, did not increase the phosphorylation levels of these CD151 signaling components. On the other hand, phosphorylation of ERK1/2 appeared to be affected by neither integrin activation nor CD151 expression, implying that ERK1/2 does not participate in integrin-dependent CD151 signaling pathways in MelJuSo cells. Taken together, these data strongly suggest that CD151 cooperates with associated integrins to provoke outside-in signaling pathways leading to the activation of FAK, Src, p38 MAPK, JNK, MAPKAPK-2, and c-Jun. 
CD151 Is a Homophilic Interacting Protein-Since some membrane proteins involved in cell adhesion and migration, such as E-cadherin and CD99, were found to be self-ligand molecules and their homophilic interactions regulate intracellular signaling pathways (37,38), we tested the possibility that CD151 is a homophilic interacting membrane protein. After transfecting a CD151 expression vector into murine L-cell fibroblast cells, which do not exhibit homotypic cell-to-cell adhesion, we compared the ability of stable CD151 transfectant L cells to adhere to each other or to empty vectortransfected control L cells by spontaneous cell aggregation assay using cells in suspension. We found that control L cells did not aggregate, but CD151-transfected L cells aggregated in a time-dependent manner (Fig. 6B). The aggregation of CD151-transfected cells was reduced to half after incubation with anti-CD151 antibody, suggesting that the L cell aggregation is mediated by CD151. To confirm whether this aggregation was homophilic, we mixed CD151-transfected L cells with an equal number of fluorescently labeled control L cells. As a result, no fluorescent cells were present in the aggregates, indicating that CD151-expressing L cells did not bind to control L cells lacking CD151 (Fig. 6C). However, when fluorescently labeled CD151 transfectant cells were mixed with unlabeled control L cells, every cell in the aggregates was labeled (Fig. 6D). These data illustrate that CD151-expressing L cells bind only to the same type of L cells having CD151 but not to L cells lacking CD151. It thus appears that CD151 is a homophilic interacting cell surface protein. Homophilic CD151 Interactions Enhance Cell Motility and MMP-9 Expression-To assess the effect of homophilic CD151 interactions on the motility and MMP-9 expression of MelJuSo cells, we prepared membrane fractions of MelJuSo cells transfected with either a CD151 expression vector or empty vector. As expected, CD151 was present in the membrane fraction of the CD151 transfectant but not in that of the mock transfectant (Fig. 7A). The CD151 transfectant cells treated with membrane fraction containing CD151 exhibited increased cell motility compared with untreated cells (Fig. 7B). The CD151-containing membrane fraction also increased MMP-9 expression in (Fig. 7C). However, pretreatment of CD151 transfectant cells with anti-CD151 Ab blocked the inducing effect of the CD151 membrane fraction on MMP-9 expression (Fig. 7D). Meanwhile, the CD151-deficient membrane fraction obtained from mock transfectant cells did not affect the motility and MMP-9 expression of CD151 transfectant cells. In addition, mock-transfectant cells lacking CD151 did not respond to the CD151-containing membrane fraction for the induction of cell motility and MMP-9 expression. Thus, these results indicate that homophilic interactions of CD151 increase cell motility and MMP-9 expression in MelJuSo cells. Homophilic CD151 Interactions Stimulate the c-Jun Activation Signaling Pathways in an Integrin-dependent Manner- We examined whether homophilic interactions of CD151 trigger the outside-in signaling pathway(s) leading to c-Jun activation. Treatment of CD151 transfectant cells, but not mock transfectant cells, with the CD151-containing membrane fraction increased the phosphorylation levels of CD151 signaling mediators, such as FAK, Src, and c-Jun, in a time-dependent manner (Fig. 8A). 
However, phosphorylation levels of these signaling molecules were not increased in CD151 transfectant cells incubated with the CD151-deficient membrane fraction. These data suggest that homophilic CD151 interactions between neighboring cells activate the signaling pathways for c-Jun activation in one another. We next investigated the possible influence of integrin activation on signaling pathways provoked by homophilic CD151 interactions. When CD151 transfectant cells were seeded onto plates coated with poly-(L)-lysine, homophilic CD151 interactions resulted in a slight increases in the phosphorylation levels of Src and c-Jun, along with no increase in FAK phosphorylation (Fig. 8B). However, cell adhesion to laminin not only increased the phosphorylation level of FAK but also significantly augmented the positive effect of homophilic CD151 interactions on the phosphorylation of Src and c-Jun. Kinase activities of FAK and Src associated with paxillin in focal adhesion complexes were also found to be increased by homophilic CD151 interactions dependent on cell adhesion to laminin (Fig. 8C). These data indicate a stimulating role of integrins in CD151-mediated signaling pathways. Meanwhile, as illustrated in mock transfectant cells attached to laminin, simple activation of laminin-binding integrins without any homophilic CD151 interaction was not sufficient to induce phosphorylation of these signaling molecules. To assess the involvement of CD151-associated integrins, ␣ 3 ␤ 1 and ␣ 6 ␤ 1 , in up-regulating CD151 signaling to c-Jun, we incubated CD151 transfectant cells with anti-␤ 1 integrin antibody before seeding the cells on laminin-coated plates. The anti-␤ 1integrin antibody effectively suppressed the stimulating effect of homophilic CD151 interactions on c-Jun phosphorylation (Fig. 8D), indicating direct participation of ␤ 1 -type integrins in modulating CD151 signaling for c-Jun activation. The dependence of CD151 signaling on ␤ 1 -type integrins was also observed in MMP-9 expression (Fig. 8D). Taken together, these results strongly suggest that activation of the CD151-associated ␣ 3 ␤ 1 and ␣ 6 ␤ 1 integrins amplifies the c-Jun activation signaling pathways initiated by homophilic CD151 interactions. We next investigated the participation of MAPKs in CD151 signaling to c-Jun by using inhibitors specific for ERK, p38 MAPK, and JNK. c-Jun phosphorylation in CD151 transfectant cells was significantly blocked by the p38 MAPK inhibitor, SB203580, and the JNK inhibitor, SP600125, as well as by the Src kinase inhibitor, PP1, but not by the ERK inhibitor, PD98059 (Fig. 8E). Since previous results showed CD151-induced adhesion-dependent activation of Src, p38 MAPK, and JNK, but not ERK (Fig. 5D), it is very likely that Src-mediated activation of p38 MAPK and JNK may play an important role in transducing CD151 signals to c-Jun. We finally examined whether integrin-dependent CD151 signaling events also occur in another melanoma cell line, C8161, which possesses endogenous CD151 (Fig. 1A). Similar to CD151-transfected MelJuSo cells, C8161 cells responded to the CD151-containing membrane fraction for the phosphorylation of FAK, Src, and c-Jun in an adhesion-dependent manner (Fig. 8F ). However, the phosphorylation levels of these signaling molecules in C8161 cells were not increased in the absence of homophilic CD151 interaction and integrin activation. 
These results indicate that homophilic CD151 interactions between two contacting human melanoma cells with endogenous CD151 activate the intracellular signaling pathways in one another with the cooperation of associated integrins. DISCUSSION Tetraspanin CD151 has been implicated in the regulation of cell motility, cancer invasion, and metastasis (14,15,39). Anti-CD151 antibody has been shown to inhibit wound-healing migration of endothelial cells, chemotactic motility of neutrophils, and phagokinetic motility of cancer cells (14,39). CD151 overexpression enhanced invasive and metastatic abilities of several cancer cell lines, whereas treatment of cells with anti-CD151 antibody suppressed these abilities (14,15). In this report, we also demonstrated that transfection of CD151 cDNA into a CD151-deficient melanoma cell line up-regulates MMP-9 expression, resulting in the promotion of cancer cell motility and invasiveness (Fig. 2). Among the tetraspanin proteins, CD9 and CD63 have also been associated with the invasion metastasis of melanoma, but these associations have opposing effects to that of CD151 (28, 34 -36, 40 -42). Transfection with CD9 resulted in suppression of cell motility and metastasis of murine melanoma cells (34,36). CD63 expression has been shown to be inversely correlated with the malignant progression of human melanoma (40,41), and several transfection studies have demonstrated the suppressing role of CD63 in the invasion and metastasis of melanoma cells (28,35,42). Thus, tetraspanins CD9, CD63, and CD151 appear to play a role in the processes of melanoma invasion and metastasis, in which CD151 acts as a positive effector opposite to the role of CD9 and CD63. CD151 (46). Furthermore, outside-in signaling through ␣ 6 ␤ 1 integrin was markedly influenced by its lateral association with CD151 (45). The short C-terminal cytoplasmic region of CD151 was found to be particularly important for determining the outside-in signaling functions of ␣ 6 ␤ 1 integrin (45). Thus, most studies of CD151 have focused on its role in modulating the activity and function of associated integrin molecules. Therefore, participation of CD151 in signal transduction has been confined to its regulatory activity toward integrin-mediated transmembrane signaling events. In contrast to previously identified roles for CD151 as a modulator for integrin-mediated signaling, we here demonstrated that CD151 can transduce its own signals leading to increases in MMP-9 expression, cell motility, and invasiveness. Cross-linking of tetraspanins CD81 and CD82 at the cell surface with antibodies was reported to transduce activation signals, such as tyrosine phosphorylation, calcium fluxes, and inositol turnover (47)(48)(49)(50). Therefore, we postulated that the CD151-involved sig-naling pathways may be initiated by ligand binding to CD151 as well as by activation of associated integrins. As a result of searching for a ligand for CD151, we found that CD151 is a self-ligand molecule, implying that homophilic interactions of CD151 proteins take place between two neighboring cells (Fig. 6). We also found that the positive effect of CD151 expression on cell motility and MMP-9 expression is further elevated when CD151-expressing cells were treated with a CD151-containing membrane fraction but not with a CD151-deficient membrane fraction (Fig. 7). Furthermore, treatment with a CD151-containing membrane fraction activated signaling molecules, such as FAK, Src, and c-Jun, in CD151-expressing cells (Fig. 
8A), suggesting that homophilic CD151 interactions between two contacting cells provoke intracellular signaling events in both cells. However, the CD151 signaling appears to be dependent on the activation of laminin-binding integrins. [Figure 6 legend: Homophilic interactions drive CD151-dependent cell adhesion. A, L cells were transfected with CD151 or vector alone (control), and the stable transfectant clones were examined for CD151 expression by immunoblotting analysis using anti-CD151 antibody. B, two independent CD151 transfectant clones and control transfectants of L cells were suspended in Puck's saline and treated with anti-CD151 mAb or normal mouse IgG, and the aggregation assay was carried out as described under "Experimental Procedures." Data are means ± S.D. of three independent experiments. C, the CD151 transfectants suspended in Puck's saline were mixed with an equal number of CFSE-labeled control transfectant cells. After 30 min of orbital shaking, phase-contrast and fluorescent images of the same field were photographed. Magnified images of the aggregated cells are shown in insets in phase-contrast images. D, the CD151 transfectants labeled with CFSE were mixed with unlabeled control cells, and the images were taken as described in the legend for C.] Adhesion-dependent activation of several signaling molecules, including FAK, Src, p38 MAPK, JNK, MAPKAPK-2, and c-Jun, became evident when CD151-expressing cells were cultured on laminin-coated plates, whereas little activation of these signaling molecules was observed in the cells plated on poly-(L)-lysine, which does not activate any integrins (Fig. 5D). In particular, integrin activation by cell binding to laminin clearly augmented the stimulating effect of homophilic CD151 interactions on the activation of FAK, Src, and c-Jun, although homophilic CD151 interactions without integrin activation could increase phosphorylation levels of Src and c-Jun to some extent (Fig. 8, B and C). The requirement of both homophilic CD151 interactions and integrin activation for full activation of FAK, Src, and c-Jun was observed not only in CD151-transfected MelJuSo melanoma cells but also in C8161 melanoma cells possessing endogenous CD151 (Fig. 8F). Pretreatment of CD151-expressing cells with anti-β₁ antibody abolished the positive effect of homophilic CD151 interactions on c-Jun activation even when the cells were bound to laminin, demonstrating a critical role of CD151-associated α₃β₁ and α₆β₁ integrins in CD151 signaling (Fig. 8D). The dependence of CD151 signaling on integrin activation was also observed in MMP-9 gene expression (Figs. 5C and 8D). Taken together, it appears that CD151 and the associated integrins stimulate signaling-triggering activities reciprocally, suggesting cross-talk between CD151 and integrin signaling events. Furthermore, CD151-α₃β₁/α₆β₁ integrin complex-mediated signaling is not only initiated by integrin-activating cell-to-laminin adhesion but also provoked by homophilic CD151 interactions generating cell-to-cell adhesion. Many genes that participate in tumor cell invasion and migration have been identified, including adhesion molecules, small GTPases, cytoskeletal components, and matrix metalloproteinases (51). However, there is little consensus on what controls the expression of these genes and how a program of gene expression is coordinated to manifest an invasive phenotype.
In the present study, we demonstrated that CD151 functions as a positive regulator in the adhesion-dependent activation of c-Jun, a component of the AP-1 transcription complex. An increase in the phosphorylation level of c-Jun by cell adhesion to laminin was observed in CD151 transfectant cells but not in mock transfectant cells (Fig. 5D). Homophilic interactions of CD151 further increased c-Jun phosphorylation in an adhesion-dependent manner (Fig. 8, A, B, and F), demonstrating the marked effect of CD151 signaling on the activation of c-Jun upon integrin activation. Increased expression of AP-1 component proteins and AP-1 activity has been shown to enhance invasion and motility in various model systems (52). Overexpression of c-Jun induces the invasiveness of chick embryo fibroblasts and MCF-7 breast cancer cells (53, 54). In contrast, expression of a c-Jun mutant in which Ser-63 and Ser-73 are mutated to alanine residues, so that the protein cannot be phosphorylated by JNK, inhibits the migration of fibroblasts (55). A dominant negative mutant of c-fos, one of the Jun subfamily partners in AP-1 dimers, inhibits the motility of fibrosarcoma cells, along with growth arrest at the G₁ phase of the cell cycle (56). Another Fos subfamily member, Fra-1, also modulates the invasiveness and motility of MCF-7 cells (57). We here showed that the stimulating effect of CD151 on cell motility was abolished when JNK was knocked down by siRNA (Fig. 3K). The functional involvement of AP-1 activity in invasion is more evident in the regulation of MMP-9 gene expression. Two AP-1 binding sites exist in the 5′-proximal promoter region of the MMP-9 gene (29), and both sites were found to be essential for CD151-induced MMP-9 gene transcription (Fig. 4A). Results from a gel mobility shift assay indicated that CD151 overexpression in MelJuSo cells increased the binding of nuclear proteins, such as c-Jun, to oligonucleotides containing AP-1 consensus sequences (Fig. 4B). [Figure 7 legend: Homophilic CD151 interactions enhance motility and MMP-9 expression of MelJuSo cells. A, membrane fractions of mock and CD151 transfectant cells were prepared and subjected to immunoblotting analysis with anti-CD151 mAb. Total membrane protein levels were also examined by staining a parallel blot with Coomassie Blue. B, the cell motility of mock and CD151 transfectants treated with the membrane fractions of each transfectant (50 μg of protein) was determined by wound migration assay. Results are means ± S.D. of triplicate cultures. An asterisk indicates that the differences are statistically significant (*, p < 0.01, Student's t test). C, gelatin zymography and immunoblotting analysis using anti-MMP-9 mAb were carried out for the conditioned media obtained from the cells cultured in the absence or the presence of the membrane fractions of mock or CD151 transfectant cells for 3 days. D, mock and CD151 transfectant cells were pretreated with normal mouse IgG or anti-CD151 mAb (0.1 mg/ml) for 3 h and incubated with the membrane fractions of CD151 transfectant cells for 3 days. The conditioned media were subjected to immunoblotting analysis using anti-MMP-9 mAb.] Thus, the CD151-integrin complex-mediated signaling pathway(s) appears to activate AP-1 transcription factors, including c-Jun, which, in turn, elicits changes in the expression of genes involved in the invasion process of melanoma cells.
AP-1 transcription factors are subject to regulation by MAPK signaling pathways with respect to biochemical activity, gene expression, and protein stability (58 -60). Among three major MAPK pathways, the ERK1/2 pathway is activated by a variety of mitogens, including growth factors and tumor promoters, whereas the JNK and p38 MAPK pathways are mainly stimulated by environmental stress and inflammatory cytokines (61). Several lines of evidence indicate that functional interplay through pathway cross-talk exists between these MAPK signaling pathways (62)(63)(64)(65). We found here that the signaling events initiated by CD151-integrin complexes in MelJuSo human melanoma cells result in the activation of p38 MAPK and JNK, but not ERK1/2 activation, along with c-Jun phosphorylation (Fig. 5D). Inhibitors for p38 MAPK and JNK, SB203580 and SP600125, respectively, blocked the positive effect of homophilic CD151 interactions on c-Jun phosphorylation, whereas PD98059, an ERK1/2 inhibitor, did not (Fig. 8E). In addition, CD151-induced cell motility and MMP-9 expression were abrogated by transfection of siRNAs that knock down p38 MAPK and JNK (Fig. 3, G-L). These results indicate that both p38 MAPK and JNK play a role of upstream effectors for c-Jun activation in CD151-integrin complex-mediated signaling cascades in MelJuSo cells. Among several integrin-mediated signaling pathways, the FAK-Src-MAPKs pathway has been suggested to regulate cell movement by influencing adhesion turnover at the leading edge in migrating cells (66 -68). The results in this study indicate that CD151-␣ 3 ␤ 1 /␣ 6 ␤ 1 integrin complexes utilize the FAK-Src-MAPKs pathway to increase melanoma cell motility. Since CD151 increases the extracellular matrix binding activity of associated integrins (20,46) as well as their signaling activity, it is very likely that CD151 contributes to cell movement by strengthening integrin-mediated cell adhesion to the substratum and by activating the FAK-Src-MAPKs pathway. Thus, the role of CD151 in cell movement includes a signaling aspect as well as a structural aspect. Our present data show that the FAK-Src-MAPKs pathway leading to c-Jun activation also plays an essential role in CD151induced MMP-9 gene expression. The functional role of MAPKs signaling to AP-1 factors in the regulation of MMP-9 expression has been demonstrated in various cell types (64,69,70). Taken together, it appears that CD151-integrin complex activates MAPKs, such as p38 MAPK and JNK, through the FAK-Src signaling pathway in MelJuSo cells. In summary, we have demonstrated for the first time that homophilic CD151 interactions activate integrin-dependent signaling events that lead to increases in c-Jun-mediated MMP-9 gene expression, cell motility, and invasiveness in MelJuSo human melanoma cells. The signaling pathways initiated by CD151-␣ 3 ␤ 1 /␣ 6 ␤ 1 integrin complexes increase c-Jun activity through the activation of FAK, Src, p38 MAPK, and JNK. Positive cross-talk between p38 MAPK and JNK pathways also contributes to c-Jun activation by CD151-integrin complexes. These findings may be useful in designing therapeutic interventions that block CD151-induced integrin-dependent AP-1 activation through FAK/Src-mediated activation of p38 MAPK and JNK, resulting in the reduction of MMP-9 expression and cell motility and the consequent blocking of invasion and metastatic spread of malignant melanoma.
59306550
s2orc/train
v2
2019-01-24T15:22:41.680Z
2019-01-17T00:00:00.000Z
Gap-induced inhibition of the post-auricular muscle response in humans and guinea pigs A common method for measuring changes in temporal processing sensitivity in both humans and animals makes use of GaP-induced Inhibition of the Acoustic Startle (GPIAS). It is also the basis of a common method for detecting tinnitus in rodents. However, the link to tinnitus has not been properly established because GPIAS has not yet been used to objectively demonstrate tinnitus in humans. In guinea pigs, the Preyer (ear flick) myogenic reflex is an established method for measuring the acoustic startle for the GPIAS test, while in humans, it is the eye-blink reflex. Yet, humans have a vestigial remnant of the Preyer reflex, which can be detected by measuring skin surface potentials associated with the Post-Auricular Muscle Response (PAMR). A similar electrical potential can be measured in guinea pigs and we aimed to show that the PAMR could be used to demonstrate GPIAS in both species. In guinea pigs, we compare the GPIAS measured using the pinna movement of the Preyer reflex and the electrical potential of the PAMR to demonstrate that the two are at least equivalent. In humans, we establish for the first time that the PAMR provides a reliable way of measuring GPIAS that is a pure acoustic alternative to the multimodal eye-blink reflex. Further exploratory tests showed that while eye gaze position influenced the size of the PAMR response, it did not change the degree of GPIAS. Our findings confirm that the PAMR is a sensitive method for measuring GPIAS and suggest that it may allow direct comparison of temporal processing between humans and animals and may provide a basis for an objective test of tinnitus. Introduction The main justification for undertaking animal neurophysiology is because it can give us an insight into how the human brain works in health and disease. Ideally any research should be crossvalidated by using equivalent methods in animals and humans. This is particularly true in translational studies of clinical problems such as tinnitus (Eggermont, 2016). There are several objective tests used in animal models of tinnitus but the most popular involve modification of the acoustic startle response (Hayes et al., 2014;von der Behrens, 2014). The acoustic startle response involves many muscles including those in the limbs and the head. In small, active rodents it can be measured with a transducer in the cage floor as the animal "jumps" (Turner et al., 2006), while in larger, less active animals it can be measured by motion tracking cameras monitoring the ear flick or pinna reflex (Berger et al., 2013). However, in humans, these methods are not suitable and the eye-blink reflex is usually used instead (Fournier and Hebert, 2016). The lack of an equivalent test for animals and humans has two drawbacks. First, it has not been possible to confirm to what extent the animal behavioural methods assess the human perception of tinnitus. Second, one cannot be certain that the putative neural mechanisms for tinnitus that are derived from animal research are tinnitusspecific. They may equally be associated with other phenomena, such as hearing loss, insomnia or stress. The most commonly used method for identifying tinnitus in animals is Gap-induced Inhibition of the Acoustic Startle (GPIAS); a form of pre-pulse inhibition (PPI). It relies on a short gap in a continuous background noise or tone to provide a cue that inhibits the usual startle response following a loud sound (Turner et al., 2006). 
The ratio between the magnitude of the response to the startling sound presented alone (no-gap trial) and trials in which a gap preceded the startling sound (gap trials) is calculated as the GPIAS ratio (Turner et al., 2006). If the gap is too short or if the tinnitus percept masks the gap then there will be no difference between the gap and no-gap trials. The whole body reflex has been used to demonstrate tinnitus-related changes in GPIAS, in the mouse (Longenecker and Galazyuk, 2011; Moreno-Paublete et al., 2017), rat (Turner et al., 2006) and gerbil (Nowotny et al., 2011). However, one of the limitations of this method is that the whole body response habituates quite rapidly, especially in humans (Groves et al., 1974) where the eye-blink reflex is used instead (Fournier and Hebert, 2013; Shadwick and Sun, 2014). Although the eye-blink reflex has been used to demonstrate GPIAS, it has not yet been used to demonstrate the presence of tinnitus in humans and it is only rarely used in animals. Fournier and Hebert (2013) found that a group of tinnitus subjects did show a deficit in gap detection ability for the eye-blink reflex, but it was not specific for a background band-passed noise matched to the tinnitus pitch, as predicted in the original hypothesis (Turner et al., 2006). Psychoacoustic measures attempting to demonstrate a key assumption of the GPIAS method (that tinnitus "fills in" the gap in the background noise) have also failed (Campolo et al., 2013; Boyen et al., 2015). Thus it is still necessary to validate the GPIAS model in humans with tinnitus. Going forward it is worth noting that the whole body and eye-blink responses are not purely acoustic reflexes. They respond to startling stimuli presented in the visual or somatosensory domains (Yeomans et al., 2002). This will further complicate the interpretation of any results based on them. In guinea pigs, we have overcome the problem of habituation by measuring the Preyer or pinna reflex using infrared motion tracking (Berger et al., 2013, 2018). The Preyer reflex was first described in guinea pigs in the late 19th century (Preyer, 1882). It is a pure acoustic reflex, as it is not produced in response to startling stimuli in the visual or somatosensory domains (Fox et al., 1989; Hackley, 2015). In the rat, it involves a di-synaptic pathway (see Fig. 1) and this may also be true in the human (Hackley et al., 2017). Following a startling sound, large numbers of auditory nerve fibres simultaneously activate the cochlear root nucleus (CRN), which projects to the medial facial nucleus (MFN), which in turn innervates the muscles around the pinna or ear, such as the posterior auricular muscle (PAM) (Horta-Junior et al., 2008). The CRN also projects to the caudal pontine reticular nucleus (PnC), which is involved in the whole body startle (Lee et al., 1996; Lingenhohl and Friauf, 1994), and this in turn projects to the MFN as well as the dorsolateral facial nucleus (DLFN), which innervates the orbicularis oculi muscle that is responsible for the eye-blink (Morcuende et al., 2002). The pathways mediating the acoustic startle reflex and their modulation by PPI are complicated and their details are not yet certain (Moreno-Paublete et al., 2017), but a simplified diagram summarising them is shown in Fig. 1. There is a short latency, purely acoustic pathway which mediates PPI that is shown by the thick red arrows (Gomez-Nieto et al., 2010).
The broader acoustic startle response has multimodal inputs that can modify the general motor output, and PPI involves many parallel pathways starting at the cochlear nucleus and feeding into the caudal pontine reticular nucleus as indicated by the blue arrows (Yeomans et al., 2006; Fendt et al., 2001; Koch et al., 1993; Li et al., 1998). [Figure 1 legend: Simplified diagram of the potential pathways involved in the mammalian acoustic startle reflex and its modification by pre-pulse inhibition (PPI). There is a di-synaptic pathway from the cochlea to the cochlear root nucleus (CRN) and then to the medial facial nucleus (MFN) that innervates the posterior auricular muscle (PAM). There is also a tri-synaptic pathway from the CRN to the caudal pontine reticular formation (PnC) and then the facial nucleus. These are very short-latency pathways (thick black arrows). There is also a short-latency acoustic pathway which mediates PPI that is shown by the thick red arrows from the cochlea to the ventral cochlear nucleus (VCN), ventral nucleus of the trapezoid body (VNTB) and then to the CRN. Other pathways start at the cochlear nucleus and involve structures such as the periolivary nuclei (PON), dorsal nucleus of the lateral lemniscus (DNLL) and the central nucleus of the inferior colliculus (CIC). The CIC projects to the external nucleus of the inferior colliculus (ECIC) which has connections to both the superior colliculus (SC) and directly to the pedunculopontine tegmental nucleus (PPTg). The SC may project directly to the dorsolateral facial nucleus (DLFN) but the main input to the DLFN is from the PnC. Thus the PPTg may provide a longer latency route (shown by the thin blue arrows) for mediating multimodal PPI.] The broader PPI measured by the eye-blink reflex is altered by certain psychiatric conditions such as schizophrenia and obsessive compulsive disorder (Kohl et al., 2013) and so studying modification of the pure acoustic reflex may be more appropriate for acoustic conditions such as temporal processing or tinnitus. The Preyer reflex shows robust PPI in rats (Cassella and Davis, 1986), as well as GPIAS in guinea pigs that can be used to identify tinnitus (Berger et al., 2013, 2018; Wu et al., 2016). In humans, the Post-Auricular Muscle Response (PAMR) is a vestigial remnant of the Preyer reflex (Hackley, 2015). It can be measured non-invasively from a scalp electrode placed behind the ear, over the insertion of the muscle to the pinna (Fig. 2). One of the unusual characteristics of this muscle response is that it is amplified when the eye gaze is focused towards an extreme lateral position. Apparently this is due to the output from the superior colliculus that activates the oculomotor and abducens nuclei also innervating the MFN and increasing tonic activity in the PAM. Indeed, the PAMR reflex was originally described as being part of an oculo-auricular response (Wilson, 1908). There are three muscles inserted into the base of the auricle (Gray, 1989; Smith and Takashima, 1980): the posterior, superior and anterior auricular muscles, as illustrated schematically in Fig. 2. The superior muscle may be partially covered by the temporal muscle, while the anterior muscle is generally smaller than the other two (Talmi et al., 1997). Thus, although all three muscles are innervated by the facial nerve, it is traditionally the posterior muscle that has been used for measuring the vestigial Preyer reflex (Dus and Wilson, 1975; O'Beirne and Patuzzi, 1999).
In guinea pigs, the myogenic potential measured from immediately behind the pinna is also referred to as the PAMR for convenience. This article evaluates the PAMR and GPIAS in guinea pigs and in young healthy human participants as an important precursor to translational research in tinnitus patients. Establishing the PAMR as a method for measuring GPIAS would also show its potential for studying auditory temporal processing more generally (Fournier and Hebert, 2016). The first section reports a study that directly compared the traditional Preyer reflex (pinna movement) to the PAMR, measured using a chronically implanted electrode in the same guinea pigs, in order to confirm that both can be used to demonstrate GPIAS. The second section deals with human volunteers and had four objectives: 1) directly compare the startle reflex measurements obtained from eye-blink recording to the PAMR in the same participants; 2) demonstrate proof-of-concept that the PAMR can be modified by preceding gaps in noise; 3) determine whether increasing the size of the PAMR potential by changing eye gaze position would make it easier to measure GPIAS; and 4) confirm whether there was an optimal gap position for producing GPIAS. 2. Comparing the GPIAS measured using the Preyer reflex and PAMR in guinea pigs Materials and methods Animals All procedures were in accordance with the European Communities Council Directive (86/609/EEC) and were approved by the University of Nottingham Animal Welfare and Ethical Review Body. Experiments were conducted on a total of nine tricolour guinea pigs (two male, seven female) weighing between 440 and 750 g at the time of electrode implantation. Guinea pigs were group-housed on a 12:12 h light:dark cycle, and food and water were freely available. Recordings The flexion of the pinna, indicative of the Prefer reflex, was measured behaviourally using a motion-tracking system of three infrared cameras (Vicon Motion Systems, Oxford, UK). A reflective marker (4 mm diameter) was attached to each pinna using cyanoacrylate adhesive. The motion-tracking system used these markers to triangulate the position of the ears, and subsequently to track pinna movement during the presentation of startling stimuli. All data were analysed offline using Matlab (R2014b, MathWorks, MA, USA). Further details are given in Berger et al. (2013). The PAMR was recorded using a chronically implanted electrode array. This comprised four Teflon-insulated silver wires, which were heated to produce a ball on the end to prevent them damaging the dura over the cortex. The wires were soldered to a circuit board attached to a Tucker Davis Technologies (TDT, Alachua, FL, USA) zero-insertion-force-clip connector. For the surgery, animals were anaesthetised with a mixture of ketamine (40 mg/kg, i.p. http://www.levetpharma.com/our-registrations/anaestamine-100-mgml-solution-for-injection/) and xylazine (8 mg/kg, i.p. www.drugs.com/vet/rompun-20-mg-ml-injectable-can.html) before being transferred to an isoflurane/O 2 mixture from a face mask to maintain areflexia. Temperature was maintained at 38 ± 0.5 C using a rectal probe and homeothermic blanket (https:// www.harvardapparatus.co.uk/webapp/wcs/stores/servlet/ haisku3_10001_11555_39108_-1_HAUK_ProductDetail_N_37610_ 37611_37613), the head was shaved and wiped with an iodine solution. Following a midline incision, the connective tissue from the top of the cranium was reflected and four burr holes drilled. 
Two of these were used to insert small, stainless steel anchoring screws and two were made over the frontal cortex so that ground and reference electrodes could be placed on the dura. The other two wires were pushed into a tunnel under the skin to lie on the muscle immediately behind the pinna (see Fig. 3B). The underside of the board and the electrode burr holes were covered in Kwik-Cast silicone rubber (https://www.wpi-europe.com/products/laboratorysupplies/adhesives/kwik-cast.aspx) and sealed in place with dental acrylic. The wound was sutured and the edges made to adhere to the acrylic using cyanoacrylate adhesive (https://www.3m.com/3M/en_US/company-us/all-3m-products/~/3M-Vetbond-Tissue-Adhesive/?N=5002385+3294397973&rt=rud). All procedures were made using full aseptic precautions. Anaesthetic cream was applied to the wound, antibiotic (enrofloxacin) administered (https://www.baytril.com/en/farm-animals/product/) and the animal monitored until full recovery. PAMR recordings were conducted at least 24 h later in a cage (310 × 150 × 210 mm) inside a sound-attenuated chamber, with a zero-insertion-force-clip headstage attached to the implanted electrodes. [Figure 2 legend: The active electrode (pink) was placed behind the right ear at the insertion of the muscle to the pinna, the PAMR reference (purple) and ground (blue) were placed at the tip of the pinna and centre of the forehead respectively. The eye-blink electrode (brown) was placed under the middle of the right eye and the eye-blink reference (green) was placed at the corner of the eye (approx. 1.5 cm apart). The diagram also illustrates stimulus production and electrode signal recording. TDT = Tucker Davis Technologies.] Animals were awake and freely moving throughout recording. Auditory stimuli were presented free-field via a single ¾-inch tweeter (http://www.mx-spk.com/image/XT19TD00-04spec) positioned ~30 cm above the centre of the cage. Two ¼-inch free-field microphones (https://www.gras.dk/products/measurement-microphone-cartridge/externallypolarized-cartridges-200-v/product/645-40bp) attached to a preamplifier (https://www.gras.dk/products/preamplifiers-for-microphone-cartridge/traditional-power-supply-lemo/product/675-26ac-1), placed at either end of the cage, were used to calibrate signals. Recorded EMG signals were filtered online between 60 and 300 Hz. Data was collected with a revised version of Brainware provided by its author (J. Schnupp, University of Oxford, UK). Stimuli Stimulus conditions for the Preyer reflex and PAMR were the same startling stimuli embedded in the same continuous background noise. The startle stimulus was a broadband (white) noise burst of 20 ms duration that included linear rise/fall times of 1 ms. There were five different continuous background noise conditions: four 2-kHz wide narrowband noise conditions centred at 5, 9, 13, and 17 kHz, and white noise, as described previously (Berger et al., 2018). Gaps of 50 ms duration, starting 100 ms before the startling stimulus, were randomly inserted on half of the trials, resulting in 10 'gap'/'no-gap' conditions for each background noise condition. Each animal separately underwent six Preyer reflex and six PAMR testing sessions on different days. Sound presentation levels were determined individually for each animal prior to implantation, with startling stimuli of either 100 or 105 dB SPL and background carrier stimuli of 55, 60, or 70 dB SPL in a sound level-dependency test (Berger et al., 2013).
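To make the trial timing concrete, the sketch below assembles example gap and no-gap trials: a continuous background carrier, an optional 50-ms silent gap whose onset is 100 ms before the startle, and a 20-ms startle burst. It is only an illustration of the timing described above; the sampling rate, levels, ramps and background spectra are placeholder simplifications, not the calibrated settings used with the actual hardware.

import numpy as np

FS = 48_000                         # Hz; placeholder sampling rate
RNG = np.random.default_rng(0)

def make_trial(with_gap, trial_dur=1.0, startle_onset=0.8,
               startle_dur=0.020, gap_dur=0.050, gap_lead=0.100):
    """Continuous background noise, an optional 50-ms gap whose onset is
    `gap_lead` s before the startle onset, and a 20-ms startle burst."""
    n = int(trial_dur * FS)
    trial = 0.1 * RNG.standard_normal(n)               # background carrier (arbitrary level)
    if with_gap:
        g0 = int((startle_onset - gap_lead) * FS)
        trial[g0:g0 + int(gap_dur * FS)] = 0.0          # 50-ms silent gap
    s0 = int(startle_onset * FS)
    s1 = s0 + int(startle_dur * FS)
    trial[s0:s1] += RNG.standard_normal(s1 - s0)        # louder broadband startle burst
    return trial

gap_trial, no_gap_trial = make_trial(True), make_trial(False)
print(gap_trial.shape, no_gap_trial.shape)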
The purpose of this level-dependency test was to avoid the startle sound being too loud for the response to be inhibited by a gap, or being too soft so that the response rapidly habituates, as well as to optimise the test for each animal. At this point, one guinea pig (#9) was excluded because it failed to show any consistent evidence of GPIAS with the Preyer reflex. Data analysis Raw data for the Preyer reflex comprised x, y and z coordinates for each of the reflective markers captured by the Vicon motion-tracking software. Custom-written Matlab software was programmed to plot each individual startle response and calculate the peak-to-trough of the pinna displacement (see Berger et al., 2013). For the PAMR collected in implanted animals, custom-written Matlab scripts (R2014b, MathWorks, MA, USA) were used for offline analysis. PAMR amplitudes were determined using peak-to-trough amplitudes of electromyographic potentials in the 50 ms following the startling stimulus, averaged across repeated trials. For each animal and each background noise condition, the mean pinna displacement (mm) and mean PAMR amplitude (mV) for gap and no-gap trials were calculated. Both datasets were non-normally distributed and so a non-parametric Wilcoxon matched pairs signed rank test was performed to test the statistical significance (p < 0.05) between gap and no-gap trials. For illustrative purposes, the amount of GPIAS was also expressed as a percentage decrease in the pinna displacement/PAMR amplitude in gap trials compared to no-gap trials, which is equivalent to the GPIAS ratio used in other studies. Fig. 3 shows a representative set of raw traces of the Preyer reflex and PAMR for the no-gap startle trials. Table 1 reports the summary findings for all animals. Comparing the GPIAS measured using the Preyer reflex and PAMR For the Preyer reflex, eight guinea pigs demonstrated statistically significant GPIAS in at least four of the five background noise conditions. The mean percent decrease in the pinna displacement in gap trials compared to no-gap trials was 27.3% (SD = 8.5). Comparatively speaking, for the PAMR the same eight guinea pigs demonstrated statistically significant GPIAS (p < 0.05) in at least three of the five background noise conditions. However, the mean percent decrease in the PAMR amplitude was somewhat reduced and more variable (mean = 19.8%, SD = 15.8). This difference in % GPIAS withstood statistical testing with a main effect of Preyer reflex versus PAMR (F(4, 75) = 4.56, p = 0.036), but no main effect of background noise condition (F(4, 75) = 0.97, p = 0.429). Thus, in our hands both the Preyer reflex and PAMR can be used to demonstrate GPIAS in the guinea pig. However, the Preyer response is larger and seems more robust. Materials and methods Humans A total of 32 participants were recruited from around the campus by poster advertisements and word-of-mouth. All participants gave informed written consent and the studies were approved by the University of Nottingham, School of Medicine Ethics Committee (Ref: F11122014) on 5th January 2015. They were paid a small honorarium for the inconvenience of attending one or two sessions. Participants were aged 18–30 years with clinically normal hearing in both ears, normal (uncorrected) eyesight, and were fluent in English.
After an otoscopic examination, hearing was assessed in each ear separately from 0.125 to 12 kHz using the British Society of Audiology (http://www.thebsa.org.uk/wpcontent/uploads/2014/04/BSA_RP_PTA_FINAL_24Sept11_MinorAmend06Feb12.pdf) procedure with a Diagnostic Audiometer (GSI 16) in a soundproof booth. Normal hearing was defined by audiometric thresholds ≤ 20 dB Hearing Level in the frequency range 0.125–4 kHz. Eight participants (four female, four male) were recruited in Study 1, which directly compared the conventional eye-blink reflex to the PAMR in the same participants. One consented participant (male) was excluded from Study 1 due to thresholds ≥ 30 dB HL at 4 kHz. A further 24 different participants (18 female, six male) were recruited in Study 2, which examined the effect of various design parameters on the PAMR and corresponding GPIAS. Three of these were excluded because of elevated hearing thresholds and a further seven were excluded because they did not show a reliable PAMR response following 30 stimulus repetitions, as defined in the Data Analysis section below. Eight completed Study 2a (seven female, one male), which investigated the effect of eye gaze position on GPIAS, and six completed Study 2b (four female, two male), which investigated the effect of gap position on GPIAS. Stimulation Electrophysiological measurements took place in a sound-attenuating booth that also acted as a Faraday cage (IAC Acoustics, Winchester, UK). Participants were asked to sit quietly and refrain from moving their head. Eye gaze position was controlled by asking participants to fixate on a black cross that was placed on the facing wall. Short breaks were permitted between recording sessions in order to check on comfort and level of arousal. A recording session lasted approximately one hour and this included the attachment of electrodes and explanation of the procedure. Stimuli were created using Matlab software (version r2014b, Mathworks, Natick, MA, USA). The startle stimulus was a 20-ms broadband noise burst presented at 105 dB SPL with a near instantaneous rise-fall time (0.1 ms). No-gap trials consisted of startle pulses presented in a 1-kHz continuous background pure tone presented at 70 dB SPL. Gap trials were similar to no-gap trials, except that a silent gap (50 ms) was inserted in the continuous background tone before the startle pulse. In pilot studies, we had found that participants preferred a pure tone background rather than the white noise we have used in the guinea pigs. To reduce the risk of habituation of the PAMR response and to reduce anticipation of the startle stimulus, the inter-trial interval (ITI) was randomly varied between 18 and 22 s. The gap duration was fixed at 50 ms and in the first study it started 100 ms before the onset of the startle, as these parameters were used in previous eye-blink studies (Fournier and Hebert, 2013) and in our guinea pig work. All stimuli were delivered to the right ear alone; in Study 1 using circumaural Sennheiser HD-280 Pro headphones, and in Study 2 using ER-1 inserts (https://www.etymotic.com/auditory-research/insert-earphones-for-research/er1.html). Transducers were connected to a Tucker Davis Technologies RP2.1 (Alachua, FL, USA) interface which was utilised as a digital signal processor and headphone amplifier (HB7). Study 1 was a repeated-measures design with eye gaze position (0° "forward", 30° "partially to the right", 45° "fully right") as the independent variable.
The test session comprised of three testing blocks (one per gaze position starting at 0 ) with each block containing 30 no-gap trials. Study 2a was a repeated-measures design with eye gaze position (0 "forward", 45 "fully right") and gap/no-gap as the independent variables. The test session comprised of two testing blocks (one per eye gaze position) with each block containing a random sequence of 60 gap and 60 no-gap trials. Study 2b, was a repeatedmeasures design with gap condition (gap, no gap) and gap position (20, 50, 100 and 500 ms) as independent variables. The values of gap position reflected the interval between the end of the gap and the start of the startle stimulus. Eye gaze position was fixed throughout in the forward position. There were four testing blocks (one per gap position), with each block containing a random sequence of 20 gap and 20 no-gap trials and the blocks presented in a randomised order. Gap duration was fixed at 50 ms across both studies. Recording procedures Eye-blink reflex and PAMR were recorded at a sampling rate of 2500 Hz and filters set at 0.1e250 Hz using a BrainAmp DC system (BrainVision, Gilching, Germany) with 10 mm cupped AgCl electrodes fitted with impedances below 3 kU. PAMR electrode placement was guided by methods in Patuzzi and O'Beirne (1999). For the PAMR, the active electrode was placed behind the right (ipsilateral) ear, over the insertion of the muscle to the pinna (Fig. 2), with the reference electrode on the tip of the pinna (to avoid any intrinsic muscle responses) and the ground electrode on the centre of the forehead (Benning et al., 2004). For the eye-blink reflex, the active electrode was placed under the middle of the right (ipsilateral) eye, with the reference electrode at the corner of the eye at a distance of about 1.5 cm (Blumenthal et al., 2005). Data analysis All data was analysed using custom-made Matlab software (version r2014b) with EEGLAB toolbox (SCCN, University of California, San Diego, USA). The data were rectified and filtered offline using a bandpass filter of 1e300 Hz to exclude neurogenic potentials (Thornton, 1975). For Table 1 Comparison of GPIAS measured using the Preyer reflex and the PAMR. Percentage GPIAS of the Preyer reflex and PAMR response for all guinea pigs for each background condition. The numbers in bold black represent statistically significant GPIAS values (p < 0.05). The numbers in grey indicate no significant GPIAS observed for that given background frequency. ID ¼ individual participants, BBN ¼ broadband noise. ID Preyer (GPIAS %) PAMR (GPIAS %) BBN 4-6 (kHz) 8-10 ( detecting a reliable response, a criterion threshold was defined as 2.5 times the standard deviation of the mean of the baseline, and the baseline was defined as a 2-s segment of the signal prior to the acoustic startle. As the peaks in the individual traces differed in latency (Fig. 4), a window of analysis was specified. For the eye-blink reflex, this was 45e75 ms and for the PAMR it was 10e30 ms (Fournier and Hebert, 2013;Patuzzi and O'Beirne, 1999). Additionally, due to the differences in the mean amplitude between participants, each data set was normalised whenever data from different subjects were to be compared directly. Normalised individual PAMR responses were obtained by taking each data point and dividing by the largest data point value in all of the session data. 
For each participant who exhibited a reliable PAMR, the percentage GPIAS of the PAMR was calculated from the ratio of the peak-to-baseline amplitudes for gap and no-gap trials, using the formula: GPIAS (%) = 100 − (mean PAMR amplitude on gap trials / mean PAMR amplitude on no-gap trials) × 100. As in the guinea pig data, mean PAMR amplitudes (mV) were non-normally distributed, so non-parametric Wilcoxon matched pairs signed rank tests were performed.

Comparing the eye-blink response and PAMR responses

In the initial recordings comparing the two reflexes (first part of Study 1), the simplest set of conditions was used: the eyes were in the forward position and the startle was presented without any preceding gap. A representative set of raw traces of the eye-blink reflex and the PAMR is shown for an individual participant in Fig. 4. In both cases, the mean response waveform bore little resemblance to that of the individual trials. For individual trials, the eye-blink response was usually characterised by multiple peaks, whereas the PAMR typically had a single peak. Across trials, individual eye-blink responses varied in their maximum peak latency (49–75 ms) to a greater degree than the PAMR (14–26 ms). The amplitude of the maximum peaks for the eye-blink and PAMR varied over their respective recording sessions (Fig. 4C and D). When the amplitude for each trial was plotted across the session, there was a weak trend towards declining amplitudes over time, with the slope of the regression line for the eye-blink response more than twice as steep as that for the PAMR. However, the amplitudes were so variable that overall there was no statistically significant linear reduction for either type of recording. Out of seven participants, only two showed mean eye-blink responses above threshold, whereas four showed mean PAMR responses above threshold. When averaged across the group, the magnitude of the amplitude was comparable for both types of recording, but the average PAMR response was more clearly defined than the average eye-blink response, with a single primary peak and a narrower range of latencies. This is illustrated in Fig. 5.

Processing optimisation to detect the PAMR response to allow comparison of the gap and no-gap conditions

Next, a different form of analysis was used to more appropriately reflect the shape of the underlying potential in each trial and to give a more accurate indication of whether or not a PAMR response could be detected from an average of the first 30 trials. This was based on a method for aligning peaks that has previously been used for analysing visual evoked potentials (McGillem and Aunon, 1977). The fact that the PAMR response was typically a single peak with a relatively narrow range of latencies meant that it was possible to produce a group-averaged waveform that contained little smearing caused by latency shifts from trial to trial. To achieve this, the highest value of the predominant peak in each trial was set as the zero timepoint and the adjacent segment of trace (±10 ms) was aligned, for all 30 trials, so that an adjusted waveform was obtained for each participant. This was then compared to the average aligned waveform of the greatest peak from a previous 2 s of baseline trace, starting 3 s before the startle pulse. The individual averaged responses using the unaligned data for all seven participants are shown in Fig. 6A, where only four participants (labelled #1–4) gave a response that was above threshold (an illustrative implementation of the alignment step is sketched below).
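A minimal sketch of the peak-alignment averaging described above, assuming each trial has already been rectified and filtered and carries the startle at a known time:

import numpy as np

FS = 2500                    # sampling rate (Hz)
HALF = int(0.010 * FS)       # +/- 10 ms of trace kept around the aligned peak

def aligned_average(trials, window, pre=3.0):
    """Average trials after centring each on its predominant peak.

    trials: array of shape (n_trials, n_samples) with the startle at `pre` seconds.
    window: analysis window in s after startle onset, e.g. (0.010, 0.030) for the PAMR.
    """
    segments = []
    for tr in trials:
        i0 = int((pre + window[0]) * FS)
        i1 = int((pre + window[1]) * FS)
        p = i0 + np.argmax(tr[i0:i1])            # latency of the largest peak in the window
        segments.append(tr[p - HALF:p + HALF])   # re-centre the trace on that peak
    return np.mean(segments, axis=0)             # adjusted (aligned) waveform

The same alignment applied to the largest peak within a 2-s baseline segment gives the reference against which the ±2.5 × SD limits are drawn.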
The corresponding responses generated by the alignment procedure are shown in Fig. 6B. The adjusted waveforms were sharper and greater in amplitude; as a result, the data for participant #7 now reached the threshold for defining a significant response. In Fig. 6B, the mean of the highest peak outside the acquisition window, within a 2-s segment of baseline, is plotted as a grey line, and the red dotted lines show values of ±2.5 times the standard deviation of these peaks across trials. This method of alignment appeared to provide a more sensitive way of estimating the PAMR than a conventional stimulus-locked average, and the amplitude of the adjusted waveform appears to be a more appropriate way of estimating response magnitude when comparisons between different conditions are needed. In the second part of Study 1 we wanted to confirm the optimal eye gaze position for maximising the amplitude of the PAMR. Our research question focussed on whether eye gaze position affected the PAMR response, so we did not evaluate the effect of eye position on the eye-blink data in this or subsequent experiments. The five participants with a detectable PAMR response when the eye gaze was directed forward were tested with the additional conditions of eye gaze partially right and eye gaze fully right. An example of the aligned averaged PAMR waveform with the eye gaze at all three positions is shown in Fig. 7A (participant #2). The aligned averaged PAMR waveform progressively increased in amplitude as the eye gaze became more lateralised. This pattern was true for all five participants, and the grand average from the five participants is shown in Fig. 7B. A non-parametric Friedman test was used to test differences between the three eye gaze positions. The result showed that the amplitude was dependent on the eye gaze position: χ²(2, N = 5) = 8.4, p = 0.0085. Post-hoc analysis with Dunn's test demonstrated a significantly greater amplitude in the fully right condition compared to the forward condition (p = 0.013). There was no significant difference between the forward and partially right conditions, or between the partially right and fully right conditions (p > 0.999 and p = 0.1733, respectively).

Gap-induced inhibition of the PAMR response

Having optimised the method for estimating the PAMR response, the next step was to test whether it was possible to reduce PAMR amplitude by preceding the startle with a gap in a continuous sound (GPIAS). In this part of the study, the peak responses were aligned across trials, and we also studied the effect of eye gaze position (group 2a, n = 8) and gap position (group 2b, n = 6) on the efficacy of the gap in reducing the PAMR response to the subsequent startle pulse. An example of GPIAS in an individual participant with eye gaze directed forward is shown in Fig. 8. In this participant, the adjusted PAMR response significantly decreased in amplitude when the gap condition was compared to the no-gap condition, demonstrating a reduction of 27% (Wilcoxon rank-sum test; p < 0.001). The data were then analysed for all eight participants in group 2a to determine whether a more reliable GPIAS could be obtained with the eyes gazing right-ward compared to forward; the results are summarised in Fig. 9. The two eye gaze positions demonstrated similar GPIAS reductions: right-ward 17% and forward 20% (the GPIAS calculation and the corresponding non-parametric comparisons are sketched below).
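A minimal sketch of the GPIAS calculation and the non-parametric tests used in these comparisons. The amplitudes are assumed to be the aligned peak-to-baseline measures, one value per trial for the within-participant comparison and one value per participant for the group-level tests; SciPy is used in place of the original statistics software.

import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

def gpias_percent(gap_amps, nogap_amps):
    """GPIAS (%) = 100 - (mean gap amplitude / mean no-gap amplitude) * 100."""
    return 100.0 - (np.mean(gap_amps) / np.mean(nogap_amps)) * 100.0

def gap_vs_nogap(gap_amps, nogap_amps):
    """Paired, non-parametric comparison of gap and no-gap amplitudes."""
    return wilcoxon(gap_amps, nogap_amps)

def eye_gaze_effect(forward, partial_right, fully_right):
    """Friedman test across the three eye-gaze positions (one value per participant)."""
    return friedmanchisquare(forward, partial_right, fully_right)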
The Wilcoxon matched pairs signed rank test demonstrated a significant reduction across the no-gap and gap conditions (Z = −26, p = 0.031) for the forward eye gaze position, but not for the right-ward position (Z = −18, p = 0.156). A paired t-test was then used to determine whether there was a significant difference in the amount of GPIAS obtained in the two eye gaze positions, but no difference was found (t(6) = 1.162, p = 0.289). In conclusion, although the right-ward position increased the amplitude of the PAMR response, it did not increase the degree of GPIAS that could be demonstrated. As the forward eye gaze position showed a more reliable GPIAS and was more tolerable for participants than the right-ward condition, the forward position alone was used in subsequent testing.

Fig. 7. Effect of eye direction on PAMR amplitude. (A) Average aligned waveforms for participant #2 for each eye-gaze position ("forward" – blue, "partially right" – red, "fully right" – green). (B) Mean amplitude of the peaks from the five participants with a detectable PAMR response for the three eye-gaze positions; Dunn's test showed a significant difference between the forward and fully right eye-gaze conditions (*p < 0.05).

Fig. 8. Average aligned waveforms for no-gap (red) and gap (blue) conditions (mean of 60 trials; ± standard error (SE) shown in pink for the no-gap trials; the SE for the gap trials was too small to plot). Inset: histogram of the mean peak-to-baseline PAMR amplitude in response to the no-gap and gap conditions, with a 27% reduction in PAMR amplitude following the gap (***p < 0.001).

In group 2b, six participants were tested with stimuli in which the 50-ms gap was placed at different times before the startle pulse, to examine the effect of the delay between the end of the gap and the start of the pulse. Four delays were used and the results are summarised in Fig. 10. The 20- and 50-ms gap conditions showed mean GPIAS values of 5% and 17%, respectively, while the 100- and 500-ms conditions exhibited gap-induced facilitation values of 14% and 10%, respectively. A repeated-measures one-way ANOVA with a Greenhouse-Geisser correction determined that the GPIAS values obtained for each gap position did not differ significantly from one another (F(1.214, 0.753) = 0.561, p = 0.729). Although the variance in the data was rather high, the observed pattern suggested that the 50-ms gap position might be optimal for demonstrating GPIAS.

Use of the PAMR as a measure of the acoustic startle response

When recordings are made from the human scalp, as many as 15 separate potentials can be identified following a brief acoustic pulse (Picton et al., 1974). The cranial muscular responses show temporal overlap and it can be difficult to disentangle them to identify a single source (Streletz et al., 1977). The post-auricular muscle usually has a single belly that is small and well defined (Talmi et al., 1997) and contains a relatively small number of muscle units which are spontaneously active (De Grandis and Santoni, 1980). The acoustic startle synchronises the muscle unit activity to give a short-latency potential that can be measured from the skin surface behind the ear. By placing a reference on the ear lobe, it is possible to obtain a relatively pure signal without much interference from other cranial muscles.
The post-auricular muscle does not usually produce any measurable movement of the auricle, but its activation does seem to be analogous to the ear flick reflex shown by many mammals (Hackley, 2015). This view is supported by our results showing that the electromyographic response measured in the post-auricular area of the guinea pig is a short-latency potential that can be used to demonstrate GPIAS in the same way as the ear flick (Preyer) reflex. The PAMR has a low threshold for activation and should be present in participants with moderate hearing loss (Thornton, 1975;Yoshie and Okudaira, 1969). Its main advantage is that it shows almost no sign of habituation, even after thousands of repeats (Hackley et al., 1987) and it is becoming more widely used in psychology for measuring behaviour such as appetitive responding and PPI (Hackley et al., 2017;Sandt et al., 2009). Comparison of the eye-blink response and PAMR for measuring changes in the acoustic startle In the first part of the study we wanted to directly compare the eye-blink response and PAMR traces as alternative ways of measuring the acoustic startle reflex in a way that might be relevant in the clinic. Both the eye-blink and the PAMR responses are modulated by the emotional state of the participant, with aversive states, such as fear, potentiating the eye-blink and suppressing the PAMR while pleasant or appetitive states potentiate the PAMR and suppress the-eye blink (Benning et al., 2004;Vrana et al., 1988). A few participants found it unpleasant to have to maintain their eye gaze in a fixed side-ward position and two started to feel nauseous towards the end of a trial. Thus, we wanted to keep the test periods to a minimum and use recording sessions that could be completed in less than an hour where there was less chance of an aversive state building up than with a longer session. We never used more than 30 repeats in one continuous test block and this is less than would normally be used for recording the PAMR (O'Beirne and . Despite this data reduction, we were still able to detect a PAMR in 68% (19/28) of our participants and we found it easier using the PAMR than the eye-blink response to record a response that is suitable for averaging across trials. The raw PAMR trace generally had a single prominent peak which produced a smooth clear potential when these peaks were aligned and averaged across trials. We recommend that this adjusted waveform is better suited for directly comparing the response amplitude across gap and no-gap conditions. By contrast, the raw eye-blink trace was composed of multiple myogenic potential peaks. This meant that aligning one peak from each trace for averaging could misidentify valid responses and include them in the background activity, thus increasing the standard deviation of the background. Even after appropriate filtering to smooth the trace, the resultant averaged waveform peak would be broader and potentially have a lower signal-to-noise ratio than the PAMR peak recorded under the same conditions. This may make the eye-blink response sub-optimally sensitive to small changes in the peak amplitude produced by gap-induced inhibition. Thus, if the GPIAS led to a smaller number of muscle units being activated this might lead to a sharpening of the averaged response rather than a significant reduction in amplitude. By aligning the traces according to the largest peak in each trial, there might not be any reduction in amplitude until some of the trials had no motor units responding at all. 
In our hands, 68% of participants showed a measurable PAMR and this is a bit lower than the 80% or more of participants that have been shown to have a PAMR in previous studies Sandt et al., 2009) which generally used larger numbers of repeats (100 trials or more). Previous studies also showed that the background electromyographic activity in the post-auricular muscle could be potentiated by increasing the activity in other cranial muscle groups. Thus flexing the neck or smiling can increase the tonic activity and increase the amplitude of the PAMR (Dus and Wilson, 1975). Similarly activation of the oculomotor units involved in moving and holding eye gaze towards the side of the acoustic stimulus has been shown to increase the amplitude of the PAMR . We confirmed that finding, but were unable to show that the larger PAMR response was associated with a larger percentage GPIAS. This may just mean that the inhibition is a proportional effect. In other words, it does not matter what is the absolute amplitude, the magnitude of the change is a constant proportion. Validation of the GPIAS method as an objective test for tinnitus in animals In the original description of the GPIAS method for detecting tinnitus in rats (Turner et al., 2006), it was suggested that tinnitus acts to fill the gap in the background noise when its pitch and approximate bandwidth has been matched with the tinnitus percept. However, psychoacoustic attempts to confirm this mechanism using human subjects have been unsuccessful (Boyen et al., 2015;Campolo et al., 2013). When the tinnitus pitch was matched to the background noise in humans, there was no evidence of a greater effect on GPIAS of narrowband noise matched to the tinnitus pitch compared to noise centred at a well-separated pitch (Fournier and Hebert, 2013). Furthermore, direct measures of conscious gap detection in tinnitus patients failed to show any deficits that would significantly affect the 50-ms gap typically used in demonstrating GPIAS (Fournier and Hebert, 2016), although, as we have previously indicated, there are likely fundamental differences between gap-induced reductions of a reflex response and absolute gap detection thresholds (Berger et al., 2017). Despite the lack of support from current human studies, the GPIAS test does give results that are consistent with the presence of tinnitus in many animal studies (Galazyuk and Hebert, 2015;Turner and Larsen, 2016). This implies that tinnitus may be affecting the unconscious neural processing of GPIAS in the brainstem rather than through altering conscious gap detection. The effect seems to be specific for the gap, as the effect of a brief noise pre-pulse, in reducing the response to a startle pulse, is not changed in animals where tinnitus has been induced (Dehmel et al., 2012). In both cases an alteration in the gain control of the output from the cochlear nucleus might be enough to change the strength of GPIAS. However, to validate the GPIAS method for use in animals it will be necessary to demonstrate a reduced level of GPIAS in tinnitus patients compared with an age and hearing loss matched control population. Until this is done the link between GPIAS and tinnitus will remain uncertain. Development of an objective test for tinnitus in humans One of the challenges for tinnitus research is that, even though there is great variety in the methods used for identifying tinnitus in animals, all are fundamentally different from the mainly questionnaire-based methods of the clinic. 
Human studies of tinnitus have involved measuring spontaneous oscillations in the cortical EEG activity (Adjamian, 2014) and more recently cortical evoked potentials (Han et al., 2017), but these have been of limited usefulness because it has not been practical to measure the activity in a single subject before and after the onset of tinnitus. The GPIAS method has been used in humans, in an attempt to detect tinnitus, by measuring the eye-blink reflex as a component of the general startle response (Fournier and Hebert, 2013;Shadwick and Sun, 2014). Although there were deficits in gap detection ability, they were not specific for a background noise matched to the tinnitus frequency. Furthermore, it is difficult to measure the electromyographic response associated with the eye-blink in awake animals where it would be possible to induce tinnitus experimentally (Servatius, 2000). It is thought that the PAMR is a di-synaptic pathway with neurons of the cochlear root nucleus projecting directly to the facial nucleus without involving the ventral pontine reticular nucleus, which is the hub of the acoustic startle reflex (Hackley, 2015;Lee et al., 1996). The cochlear root nucleus is subject to PPI (Gomez-Nieto et al., 2010) and it is possible that modulation of the PAMR occurs at the level of the cochlear root nucleus rather than the pontine reticular nucleus where most PPI is thought to occur (Lingenhohl and Friauf, 1994). It would be useful to check whether the enlarged PAMR produced by activation of the neck and facial muscles also failed to produce any increase in the strength of GPIAS. Another factor that affects the size of the PAMR is the ear of stimulation, with contralateral acoustic stimulation sometimes producing a PAMR that is two or three times the size of the response produced by ipsilateral stimulation (Dus and Wilson, 1975) and binaural stimulation producing an even bigger response (Doubell et al., 2018). The effect of unilateral compared to bilateral stimulation should also be quantified with respect to GPIAS. A potential limitation in the present study is that we only showed a clear eye-blink response in about 28% (2/7) of our participants and this is much lower than previous studies (Fournier and Hebert, 2013;Shadwick and Sun, 2014). This was presumably due to the small number of repeats but might also have been because we did not optimise the recording conditions for the eyeblink response (Blumenthal et al., 2005). Having the eye gaze to the right may have adversely interfered with the eye-blink response, and keeping the lights on in the recording booth may have increased the background activity in the orbicularis oculi muscle thus potentially partially masking the response. In addition the use of a monaural stimulus may have reduced the amplitude of the eye-blink response as binaural stimuli are usually used. Conclusion In conclusion, the similarity between the PAMR and the ear-flick response shown in rodents means that it may be possible to use the human PAMR to validate the GPIAS technique that has been used to detect tinnitus in guinea pigs (Berger et al., 2013;Coomber et al., 2014). The present results show that the PAMR is subject to GPIAS using similar parameters of gap and background noise to those used in rodents and with the human eye-blink (Fournier and Hebert, 2013;Shadwick and Sun, 2014;Turner et al., 2006). 
We are currently measuring the PAMR response in participants with tinnitus to determine if there are significant differences from an age-matched population when GPIAS is measured.
Control charts for monitoring drip irrigation with different hydraulic heads This study monitored a drip irrigation system with different hydraulic heads, using control charts. The study included 25 tests, and was conducted at the Experimental Nucleus of Agricultural Engineering of the State University of Western Paraná, located in the municipality of Cascavel, Paraná. The drip irrigation system was operated by gravity, and had four hydraulic heads (10, 11, 12 and 15 kPa). The uniformity of the system was determined based on uniformity distribution. Uniformity monitoring was performed using Shewhart and exponentially weighted moving-average (EWMA) control charts. An increase in the hydraulic head increased uniformity. The use of 12 and 15 kPa hydraulic heads yielded good performance, whereas 10 and 11 kPa yielded regular performance. The use of control charts proved to be efficient; the Shewhart control chart was more robust, whereas the EWMA control chart, which indicated trends and deviations not shown by Shewhart control charts, was more sensitive. INTRODUCTION Drip irrigation requires high investment in construction and equipment for water collection, conduction, control and distribution, in addition to energy and labor costs (Da Silva et al., 2003). Hence, the use of drip irrigation is limited for small rural producers who do not have the required financial resources. One technique that reduces the initial cost and the variable cost of drip irrigation is gravity irrigation. In this technique, reservoirs are raised to a minimum height of 1m for the supply of water in small areas, thus eliminating the use of hydraulic pumps (Souza et al., 2009). The possibility of performing drip irrigation without electricity and the low cost of the dripper make this tool more attractive, as it can contribute to the development of small rural producers. One of the main parameters used in the evaluation of drip irrigation systems is the uniformity of water application over the irrigated area (De Souza et al., 2006). Uniformity characterizes an irrigation system based on the difference in water volume applied by emitters and directly affects irrigation management, efficiency, cost, as well as crop quality and productivity (Azevedo and Saad, 2012). Further, irrigation control prevents physiological and phytosanitary problems, thus reducing unnecessary losses of water, energy and nutrients (Trevisan et al., 2016). Control charts are most frequently used to monitor the performance of processes over time (Vieira, 2014). A control chart is a graphic representation of sample measurements of a given process and indicates the need to investigate and adjust a process according to the size of deviations presented. Shewhart's and exponentially weighted moving-average (EWMA) control charts are among the best-known and the most frequently used ones (Frigo et al., 2016). The success of Shewhart's control chart is owing to its simplicity, in which the ease of the decision rule is based only on examining the last observed point. However, this is also a major disadvantage, as any information provided by the previous sequence of points is disregarded, which renders the Shewhart control chart relatively insensitive to minor changes in the process (Walter et al., 2013). Minor variations in a process cannot be perceived by the Shewhart control chart; in this case, it is advisable to use the EWMA control chart. This control chart is more sensitive in detecting minor deviations from the average of a process. 
Therefore, such a method offers high speed and credibility in identifying minor mismatches in the process. Vilas Boas (2016) reported that the use of control charts in irrigation provides several benefits: compliance with irrigation quality standards, monitoring of systematic errors in the irrigation process, provision of information regarding the status of the irrigation process, calculation of measurement uncertainty in irrigation, provision of objective evidence for demonstrating the quality of measurements, and provision of a source of historical data on the measurement process in irrigation. Andrade et al. (2017a) concluded that micro-irrigation uniformity can be analyzed through control charts. Irrigation systems are commonly monitored using control charts; however, they are rarely evaluated across different hydraulic heads. Therefore, this study was conducted to evaluate the uniformity of a drip irrigation system with different hydraulic heads using Shewhart and EWMA control charts.

MATERIAL AND METHODS

The study was performed at the Experimental Nucleus of Agricultural Engineering of the State University of Western Paraná, located in the municipality of Cascavel, Paraná, Brazil, with geographical coordinates 24º58' S and 53º27' W. The system consisted of flat drip tubes (SIPLAST™, Model P1) with a diameter of 16 mm, an inlet filter with an area of 7.5 mm² and a total of eight holes. The spacing between drippers was 0.20 m, and the emitter flow followed the power-law equation flow = 0.19 · pressure^0.52. Considering a main line and four lateral lines, the system included 75 drippers per line, totaling 300 drippers. To reduce clogging, a 120-mesh screen filter was installed close to the reservoir. For data collection, the methodology proposed by Keller and Karmeli (1975) was used, which involved determining the flow of four emitters per lateral line (the first dripper, the drippers located at 1/3 and 2/3 of the lateral line length, and the last dripper) on four lateral lines. The system was pressurized by gravity. Figure 1 shows the experimental set-up and the data collection technique. After assembling the system, four different hydraulic heads were evaluated: 10, 11, 12 and 15 kPa. A total of 25 tests were performed for each hydraulic head, following the number of samples recommended by Montgomery (2016) for quality control tests. Descriptive statistics were also computed to characterize the central tendency of the data. To assess the uniformity of the irrigation system, the distribution uniformity (UD) proposed by Merriam and Keller (1978) was used, as expressed in Equation 1:

UD = 100 · (Q25 / Q̄)    (1)

where UD is the distribution uniformity (%), Q25 is the average of the lowest one-quarter of the dripper flow rates (L h⁻¹), and Q̄ is the arithmetic mean of the flows (L h⁻¹). The classification criteria used for the UD values are listed in Table 1. To monitor UD, a Shewhart control chart was prepared to investigate the parameter during the tests. Preparing the control chart requires determining the upper control limit (UCL) and the lower control limit (LCL) using Equations 2 and 3, respectively:

UCL = x̄ + 3 · (R̄ / d2)    (2)
LCL = x̄ − 3 · (R̄ / d2)    (3)

where UCL is the upper control limit, LCL is the lower control limit, x̄ is the average of the data, R̄ is the average of the moving ranges (amplitudes) of the data, and d2 is a constant equal to 1.128 for n = 2, considering individual measurements (Montgomery, 2016).
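A minimal sketch of these calculations, assuming one list of dripper flow rates per test and one UD value per test:

import numpy as np

D2 = 1.128  # Shewhart constant for moving ranges of two consecutive observations

def distribution_uniformity(flows):
    """UD (%) = 100 * (mean of the lowest quarter of flows / overall mean), Eq. 1."""
    flows = np.sort(np.asarray(flows, dtype=float))
    q25 = flows[: max(1, flows.size // 4)].mean()
    return 100.0 * q25 / flows.mean()

def shewhart_limits(ud_series):
    """Individuals-chart limits (Eqs. 2 and 3): mean +/- 3 * (mean moving range / d2)."""
    x = np.asarray(ud_series, dtype=float)
    mr = np.abs(np.diff(x)).mean()     # average moving range between consecutive tests
    center = x.mean()
    return center - 3.0 * mr / D2, center, center + 3.0 * mr / D2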
In addition to the Shewhart control chart, the EWMA control chart was used; it detects minor variations in behavior and provides an updated estimate of the process average, which may change the desired quality characteristics. This control chart accumulates successive information and weights the samples, giving more weight to the most recent information. The EWMA control chart consists of plotting Zi versus the sample number i (or time), where Zi is calculated using Equation 4, according to Roberts (1959):

Zi = λ · xi + (1 − λ) · Zi−1    (4)

The variance of the variable Z is expressed by Equation 5:

σ²Zi = σ² · [λ / (2 − λ)] · [1 − (1 − λ)^(2i)]    (5)

where σ is the standard deviation of the data in relation to the mean, λ is the weight assigned to each sample, and i is the order of the sample used. Roberts (1959) reported that the UCL and the LCL of the EWMA control chart can be calculated by Equations 6 and 7, respectively:

UCL = x̄ + L · σ · √{[λ / (2 − λ)] · [1 − (1 − λ)^(2i)]}    (6)
LCL = x̄ − L · σ · √{[λ / (2 − λ)] · [1 − (1 − λ)^(2i)]}    (7)

where x̄ is the average of the data, λ is the weight assigned to each sample (varying from 0 to 1), L is the number of standard deviations used to set how far from the mean a shift is to be detected, and i is the order of the sample used. In this study, the weight constant of the sample was λ = 0.25 and the width factor of the limits was L = 2.
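A minimal sketch of the EWMA chart with the parameters used here (λ = 0.25, L = 2). Estimating σ from the average moving range divided by d2 is one common choice and is an assumption, as the text does not state how σ was obtained.

import numpy as np

LAM, L, D2 = 0.25, 2.0, 1.128

def ewma_chart(ud_series):
    """Return (z, ucl, lcl) arrays for an EWMA chart of the UD series."""
    x = np.asarray(ud_series, dtype=float)
    center = x.mean()
    sigma = np.abs(np.diff(x)).mean() / D2               # sigma estimate (assumption)
    z = np.empty_like(x)
    z_prev = center                                       # start the EWMA at the mean
    for j, xj in enumerate(x):
        z_prev = LAM * xj + (1.0 - LAM) * z_prev          # Equation 4
        z[j] = z_prev
    i = np.arange(1, x.size + 1)
    half = L * sigma * np.sqrt((LAM / (2.0 - LAM)) * (1.0 - (1.0 - LAM) ** (2 * i)))
    return z, center + half, center - half                # Equations 6 and 7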
RESULTS AND DISCUSSION

An exploratory analysis was performed to provide a general characterization of the irrigation process. Table 2 summarizes the descriptive statistics for UD under the different hydraulic heads. The greatest uniformity was achieved with the 15 kPa hydraulic head (89.91%), and the 12 kPa hydraulic head also performed well (87.12%). The 11 kPa hydraulic head showed regular uniformity (79.11%), whereas the 10 kPa hydraulic head showed the lowest uniformity (77.00%). Gris et al. (2012) used hydraulic heads of 1.5 and 2.0 m in a drip irrigation system with cassava wastewater and obtained UD values exceeding 90%. Souza et al. (2009) evaluated gravity-fed drip irrigation systems using microtubes and obtained an average UD value of 87%. The hydraulic head is thus proportional to uniformity: the higher the hydraulic head, the greater the uniformity. A similar finding was reported by Klein et al. (2013), who used 15, 18 and 20 kPa hydraulic heads in irrigation and fertigation systems and consistently obtained the greatest uniformity at 20 kPa. According to Hermes et al. (2018), the hydraulic head affects the flow rate during irrigation, and distribution uniformity generally increases with the hydraulic head (Ella et al., 2009).

Shewhart control charts for the different hydraulic heads are shown in Figure 2. The hydraulic heads classified as regular in terms of uniformity (10 and 11 kPa) presented points outside the control limits, indicating a process out of statistical control, whereas the 12 and 15 kPa hydraulic heads, whose uniformity was classified as good, were under statistical control, with no trend and no point outside the control limits. In the study by Andrade et al. (2017b), although most processes were simple, they were classified as controlled by the Shewhart control chart and considered significant for the evaluation of irrigation. The 10 kPa hydraulic head presented an isolated point outside the control lines, which can be caused by factors such as low pressure (Saraiva et al., 2014), clogging of drippers, energy fluctuations, pressure variations, and climatic factors (Justi and Saizaki, 2016). During the first tests at 10 and 11 kPa, the process was under control (1st to 13th tests). Over time, there was a decrease in process quality, attributed to non-controllable factors such as the clogging of emitters, temperature and pressure (Chinchilla et al., 2018).

Analysis of the EWMA control chart for the UD variable (Figure 3) shows that the drip irrigation process was out of statistical control: in addition to presenting points outside the UCL and LCL (10 and 11 kPa), descending sequences appeared from the 13th test (10 and 11 kPa) until the end, characterizing a decrease in the uniformity of the system. Sequences of six points in descending order characterize a process that is out of statistical control (Montgomery, 2016). The EWMA control chart is the most suitable for micro-irrigation assessment, as it detects minor variations in the process (Siqueira et al., 2018). The 12 and 15 kPa heads did not present points outside the control limits. With the increase in the hydraulic head, system pressure and water velocity, the clogging of drippers decreased and the system uniformity increased (Silva et al., 2017). Consequently, eight consecutive points were obtained on the same side of the central line, characterizing the process as out of statistical control (Montgomery, 2016).

CONCLUSIONS

The increase in hydraulic head increased the uniformity of the drip irrigation system. The 12 and 15 kPa hydraulic heads demonstrated good performance, with uniformities exceeding 80%, whereas the 10 and 11 kPa hydraulic heads demonstrated regular performance. The use of control charts was effective in monitoring the uniformity of drip irrigation under different hydraulic heads. The Shewhart control chart was more robust, whereas the EWMA control chart, which indicated trends and deviations not shown by the Shewhart control chart, was more sensitive. In sum, it is recommended that gravity-fed drip irrigation systems be used with hydraulic heads greater than 12 kPa.
Influence of induction heating of injection molds on reliability of electrical connectors

Keywords: injection mold, induction heating, maintenance and reliability of moldings, electrical connectors.

The continuous increase in demand for electricity means that the electrotechnical industry is relentlessly under pressure of technological development. It is necessary to reduce costs while increasing the reliability of manufactured products. The widespread miniaturization of products mounted in land vehicles, vessels and airplanes, along with the limitation of their weight, requires the use of innovative production methods. This publication presents an exploitation problem related to the reliable assembly and disassembly of rail-mounted electrical connectors. In order to improve the reliability of injection-molded electrical connector housings, the authors propose selective induction heating as a method of heating the injection mold. To reveal the origin of the problem, simulation studies of filling the mold cavity were carried out; they show an incorrect localization of the weld line of the polymer streams. The results of the simulation and induction heating experiments, which were necessary for the proper design and manufacture of the injection mold, are then presented. In the final stage, experimental tests of the assembly and disassembly of the manufactured housings were performed under conditions corresponding to actual use. The obtained results show that selective induction heating technology significantly improved the reliability of the rail-mounted electrical connector housings. Highlights: the publication presents the problem of the exploitation of rail-mounted electrical connectors; experimental studies and the obtained results are shown; the selective induction heating process resulted in a decrease in damage to the plastic parts.

Introduction

Electrical and electronic parts are nowadays integral components of mechanical engineering, vehicles and devices. Given that those parts are often responsible for the user's safety, they are required to have high reliability [28]. Because of the continuous pursuit of miniaturization, the number of problems related to maintaining the reliability of these elements is still increasing [25,30]. The authors B. Sun, A. Wymysłowski et al. examined the exploitative problems of electronic components that are caused directly by miniaturization. The miniaturization problems do not only concern electronics. Limitation of weight and overall dimensions acts as a barrier for the constructors and contractors of housings, which are usually made of polymer material using injection molding technology [24]. Due to their insulating properties, ease of molding and low price, plastics have become the basic materials used in the production of housings, precisely in injection molding technology.
Of all the injection molding parameters, the key one is the temperature of the forming surfaces when the cavity is filled with the flowing material [1,2,20,27]. In [17], the effect of induction heating on the manufacturing process of thin-walled plastic products was shown. Likewise, Chang et al. proved that the use of a high mold cavity temperature allows elements with good aesthetics to be produced [5-8]. During the conventional injection molding process, constant-temperature injection molds are used. The difference in temperature between the flowing plastic stream and the forming surface causes the polymer melt to cool down, and its viscosity increases with the distance travelled. On the other hand, in [26] it was noted that the heat loss of the injected polymer melt decreases with an increase in injection velocity, which demonstrates the importance of that parameter in the context of the conventional process. The creation of frozen layers reduces the cross-section of the mold cavity, which prevents filling of the forming areas that are furthest from the injection point. Problems with incomplete filling of the mold cavity appear in particular when processing plastic materials with increased viscosity or containing various types of filling agents (strengthening fibers, magnetic powders, talc, flame retardants, etc.) [32]. Very often this phenomenon is accompanied by microstructure mapping errors and defects related to incorrect shaping of the weld lines of the flowing polymer melts [3,13]. Defects caused by too low a mold temperature and increased injection pressure can be removed in additional processes.
It is appropriate to take into account, that from an economic and ecology manufacturing point of view, conducting complex production included in one operation of injection molding is more beneficial. In the case of dynamic mold temperature change technology, mold cavity doesn't have one, constant work temperature. The temperature in the mold is changed intentionally in the way synchronized with the work of the injection machine, in accordance with the profile selected by the technologist. At the moment of injection, forming surfaces are heated to the temperature near to the injected polymer melt's temperature. After injection the intense mold cooling process begins. Thanks to that, it is possible to produce elements with high gloss, without deformations or visible flow lines [31] and thin-walled elements, which are often subjected to incomplete filling of the mold cavity. The rheology of the polymer materials, due to their non-newtonian character, is directly related to the processing temperature [33]. In opposite to newtonian liquids, in isobaric conditions the viscosity of the flowing melt is not a constant value, but it changes with the shear rate [34]. Techniques of the cyclic regulation of mold's cavity temperature gives the manufacturer a possibility of aware influence on waveform and distribution of temperatures in the cavity. According to Huang et al. study, the process of premature polymer melt cooling is stopped by increased temperature of the forming walls, which enable complete filling of the cavity and provide high quality of microstructure mapping [14,19]. It helps to obtain more accurate map of the surface with significantly lower mold filling resistance during the injection process [29]. The quality of surface reproduction is nowadays a growing challenge [11]. However, during the holding phase there is better pressure propagation in the whole volume of a molded piece. This means that there are smaller pressure gradients between the injection point and the furthest areas from it in the plastic flow path. In accordance to the analysis carried out by Liparoti et al., it results in a decrease of frozen stresses in molded part and lower values of shrinkage and its various orientations [15]. This benefit becomes noticeable especially when the cyclic regulation of mold's cavity temperature involves both halves of the mold. Of the many rapid heating methods, the induction heating offers many advantages [9]. The main advantage of high-frequency induction heating is, first of all, possibility of surface heating (without large volumes), which is called the skin effect. There are many publications describing the influence of induction heating on the quality of molded parts and the production process [22,23]. Mechanisms for the formation of weld lines of flowing plastic streams are also widely tested [3,4]. However, the literature is devoid of studies on the impact of induction heating on the exploitation and reliability of plastic products. As part of the work, the authors undertake to define the problem of exploitation of the rail-mounted electrical connector housings. The study contains simulation tests of housings manufacturing process, induction heating of chosen forming areas and experimental tests involve production and exploitation of a group of 1000 test samples (mold parts). Problem definition In the figure 1 the process of electrical connector assembly and disassembly on the rail was shown. 
The connector housing was fitted with a closed flexible clip, which is widely used in the electrotechnical industry. Pressing the housing to the rail causes temporary elastic deflection of the foot ( fig. 1a). Disassembly is done by pulling the clip away with a screwdriver or other tool ( fig. 1b). As can be seen ( fig. 1c), the highest stresses are accumulated in the foot area (stress simulation tests were presented as a guide). The problem is that the place of joining of flowing plastic streams becomes the area of stress concentration. In case of too cool fronts of polymer melt meet, it is impossible to create sufficiently strong polymer bindings. It results in cracking of the clip while disassembling, what, in effect, disqualifies the product from a further use ( fig. 1d). The mold part shown in the figure 2 is a prototype of electrical connector housing. A dense ribbing allows to obtain high stiffness of the part and enables the assembly of conductive elements. The product was designed to reduce its weight as much as possible. The upper part of the molded piece is fitted with a mounting unit. The flexible installation foot allows the connector to be clipped in and out on the mounting rail. Simulation tests for the filling of the mold cavity In order to correctly read the problems related to the assembly and disassembly of the electrical connectors, it was necessary to simulate the flow of the plastic melt inside the mold cavity. To make that, the Cross-WLF model was used. Melted polymers are non-newtonian liquids and their properities are strictly dependent on the temperature [33]. Contrary to newtonian liquids, polymer melts viscosity η is not constant and it depends on shear stress τ and shear rate  γ (1): The viscosity of polimers η decreases with growth of the shear rate  γ [10,34]. To determine the viscosity, the Cross model was used To determine the relation between the viscosity η and the temperature T, the Williams-Landel-Ferry equation was used (3): Simulation studies were conducted using Autodesk Moldflow Insight 2013 program. The parameters for simulation tests of the mold cavity filling are shown in Table 1. The plastic flowing streams at first fill the body of the molded part, and in the final phase is the installation foot formed. The foot is a closed loop, which means that the two streams merge in its circumference ( fig. 3). Because this process takes place in the last phase of injection, their fronts are that much cool, that there is no possibility to create strong polymer bindings. As this area is exposed to a stress concentration during assembly and disassembly of the product, it is often cracked. In theory, the solution of this problem would be to locate the gate directly on the installation foot, but for technological reasons this is economically unjustified. This type of electrical connectors is produced in a such big amounts, that the cold channel injection system has been displaced by the direct (hot channel) injection system. For economic reasons, a cold channel system was used for the experiment. On the basis of the carried out simulation studies, it was found that a local increase in temperature in the forming area of the installation foot caused the weld line to move outside the stress concentration area. There has been a slight reduction in pressure, which is due to the small proportion of the area with increased temperature to the entire area of the mold cavity. 
The results show, that exploitation of the electrical connectors should be improved after heating the installation foot forming area to 170°C. Induction mold heating tests The design and building processes of injection mold dedicated to produce electrical connector housings ( fig. 2) required simulation and experimental tests of the induction heating process. The simulation analysis was carried out using the Finite Element Method (FEM) implemented in the QuickField 6.3.1 package. All tests were performed in 2D space (xy) in AC Magnetics modules (electromagnetic analysis) to determine current density, magnetic field strength and magnetic flux density on the surface of the metal insert, followed by Transient Heat Transfer (non-stationary thermal analysis) to determine surface temperature changes as a function of time. The 2D model is a cross-section through a heated insert and induction coil in a plane perpendicular to the direction of current flow through the coil. Due to the multitude of non-linear relationships between individual parameters, it is difficult to predict the exact temperature value during induction heating [21]. Moreover, not only material properties have an influence on the induction heating process. Stresses caused by machining and heat treatment can influence on the induction heating process significantly. The basis for solving problems with electromagnetic phenomena are Maxwell equations (4-7) [12,16,18]: where E is the intensity of the electric field, B is the density of the magnetic flux, H is the intensity of the magnetic field, J is the density of the current, D is the density of the electric flux, J s is the source current vector, J e is the induction vector, ρ is the density of the electric charge. Induction coil in shape of closed loop was positioned in the way, which determine the forming insert as a heated core ( fig. 4a). It is the most effective method of induction heating, because the magnetic flux penetrates in direction perpendicular or close to perpendicular to the surface of heated insert. As assumed, very high temperature increases were achieved in a short time (fig. 4b). The average velocity of heating after 2,5 s was 210°C/s. In 1 s the character of Fig. 3. Simulation tests of filling the mold cavity for electrical connector housing, a) conventional process with constant mold temperature, b) temperature of induction heated area is 130°C, c) temperature of induction heated area is 150°C, d) temperature of induction heated area is 170°C, 1 -injection point, 2 -the place of joining of flowing plastic streams, 3 -the mold area heated by induction heating heating curve has changed. The obtained waveform is clearly close to the linear relationship. To verify the results obtained by the simulations, before developing the construction and building the prototype of the injection mold, the experimental tests were carried out. For this purpose, a special test stand was prepared, which has included an EFD Minac 6/10 induction generator, two interchangeable induction coils, FLIR T620 thermal imaging camera, a computer, PSG temperature regulation system and a casing with interchangeable heated inserts ( fig. 5). Parameters used in the analyses are collected in Table 3. The coil connected to the power supply and cooling system of the generator was positioned in relation to the heated surfaces according to the scheme shown in Figure 4. 
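Equations (4)–(7), cited in the FEM description above, are Maxwell's equations; their bodies are likewise not reproduced in the text. In the standard differential form consistent with the symbols listed there (treating the total current density as the sum of a source term Js and an induced term Je, which is an assumption based on that notation, as is the numbering of the individual equations), they can be written as:

∇ × H = Js + Je + ∂D/∂t    (4)
∇ × E = −∂B/∂t    (5)
∇ · B = 0    (6)
∇ · D = ρ    (7)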
The thermal imagining camera, connected to results recorder which was installed on the computer, was placed on a tripod 700 mm from the heated surfaces. The injection mold body made from 1.2343 steel and interchangeable inserts made from the same material were covered with special chalk to obtain the same coefficient of emission on all measuring surfaces. The cooling system, which consisted of two rows of drilled channels with 6 mm diameter, was connected to temperature regulation system. Before the induction heating process, the body and the insert were heated to 50 °C. This temperature corresponds to the processing conditions for polyamide. The heating process, temperature and water flow through the coil rate were controlled from the generator's control panel. The temperature and the coolant flow through the body rate were determined from the PGS cooling system. The research process was carried out in the injection molding hall under conditions corresponding to actual production conditions (Fig. 5). During the experimental studies, the measurement time was extended from 2.5 to 4.5 s to take into account the heat loss occurring during the withdrawal of the induction coil and mold closure, assuming that this time will be 2 s. The photographs of the measuring station are shown in Figures 5b and 5c. The images recorded by the camera were sent to a PC on which the FLIR ResearchIR MAX software was installed. This program enables displaying of the results in temperature to time graph form. Similar to QuickField program, there is a possibility to select the points, for which values of the temperature increases are supposed to be read. The exemplary thermogram and the heating process graph for two reference points: P 1placed straight on the heated wall and P 2 -placed 1 mm into the material from the heated surface are shown in the Figure 6. Fig. 5. Experimental tests of induction heating process carried out on the insert which forms the molded part's clip: a) a diagram b), c) the test stand Contrary to experimental studies, simulation studies allowed to determine the temperature distribution in the cross-section of the heated insert. Therefore, the obtained graph (Fig. 4) shows maximum values of temperatures in the volume of the insert, while during experimental tests maximum surface temperatures were recorded (Fig. 6). In simulation tests, the cooling process of the insert, which takes place after the heating process was completed, was not included. However, this process was recorded during experimental studies. After 2,5 s the temperature close to 600°C was obtained during simulation tests. To avoid damaging of the forming insert, the heating time was limited to 0,5 s during the first measurement. The 1.2343 steel is tempered at the temperature of 400 -550°C, because of that in this purpose the maximum work temperature was set at 300°C. The second criterion was to obtain the temperature close to 150°C after 2 s from the end of the heating process. The assumptions were met for the heating time of 1 s, with maximum temperature of 278°C and 178,5°C after 2 s from the moment of switching off the inductor. Small temperature differences in the simulation and experimental studies most probably result from the fact, that the theoretical model does not take into account the material parameters related to the previous heat and mechanical treatment. 
The non-uniformity of the crystallographic structures resulting from dislocations, residual stresses and physico-chemical composition defects hinder the movement of domain walls under the influence of changes in magnetic field. These phenomena cannot be determined in a simulative way, and use of the experimental methods to carry them out require the use of specialist laboratory equipment. However, the experimental studies reflect the relationships obtained by the simulation and a temperature above 270°C was reached after a time of 0.7 s. Experimental studies On the basis of simulation studies, a design and construction of the injection mold for the production of rail-mounted electrical connector housings was developed and built. The mold allows to manufacture products in conventional technology -with a constant temperature of the mold cavity or with use of the selective induction heating. The mold cavity was designed to enable heating only of the selected forming area of the flexible clip to the desired temperature. The experimental samples were made on a Demag 35 injection molding machine with a screw diameter of 22 mm. The technological parameters are presented in Table 4. For research purposes, 1000 molding pieces were made (in 4 groups of 250 pieces, Fig. 9). The first group consists of housings manufactured in conventional technology using the constant temperature of the mold equal to 50°C (Fig. 9c). The second group is 250 parts made with use of the selective induction heating to temperature of 130°C in the forming area of the clip (Fig. 9d). In two subsequent groups, the clip forming area was heated to the temperatures of 150 and 170°C respectively (Fig. 9e, 9f). The products obtained, despite the use of different temperatures, did not show significant visual differences (Fig. 9c -9f). All products were conditioned at 25°C and 90 % humidity for 72 hours. The next step was to conduct the experimental tests on the assembly and disassembly of parts from the rails in conditions reflecting the actual exploitation of electrical connectors. For this purpose, a 35x7x500 mm rail fixed at both ends and a 6x1.5 mm flathead screwdriver were used. Each part was placed on a rail and disassembled. This cycle was repeated until the clip broke or 10 repetitions were achieved. During disassembly, each clip was deformed to the same extent, which is ensured by the design of the product and the way the screwdriver is supported at two points in the final stage of and 170°C. It involves improving the conditions under which the two flowing plastic streams join together (Fig. 4). Increasing the temperature stops the material from cooling down and increasing its viscosity. It is confirmed by marginal decrease of the injection pressure for each temperature rise in clip forming area. The drop in damage to 4 per 250 specimens (1.6 %) in the latter case is also due to the moving of the weld lines outside the range of the greatest clip deformation occurring during disassembly. Figure 10e shows a total damage for all tested groups. The highest value of clip defects was recorded in the fourth disassembly attempt. On the basis of the last graph (Fig. 10f), a statement can be risk that tests at higher temperatures would result in an asymptotic distribution of clip damage. The usage of induction heating has significant changed the distribution of damaged parts in relation to the number of repetitions of disassembly cycles. It is directly related to a reallocation of localization of creating the weld line in the clip. 
Summary and conclusions This publication presents the problem of the exploitation of rail-mounted electrical connectors in terms of their reliability during assembly and disassembly. The topic was considered in response to manufacturing problems of one of the biggest producers of electrical connectors in Europe. The structural element of the connector that was analysed was its prototype housing fitted with a flexible clip. To show the genesis of the problem, the first part of the study presents the results of simulation studies of filling the mold cavity which forms the housing. Attention was focused on the forming area of the clip, which is the key element determining the reliability of the housing. The obtained results showed that increasing the temperature of this area allows the place where the weld lines of the polymer melt streams are created to be moved outside the arm of the clip, which is the most heavily loaded place during disassembly. After setting the temperature to 170°C, the meeting place of the two polymer melt streams was moved away from the critical product area. An additional, desired effect was to inhibit the increase in viscosity of the polymer melt at the stream front. As expected, this should contribute to improving the quality of the weld lines and reducing damage during product disassembly. Before the mold was made, its design was developed on the basis of simulation and experimental studies of the induction heating. On the basis of a series of measurements, the characteristics of the heating process were obtained, and the results of the experimental studies confirmed the results obtained by simulation. After 0.7 s the temperature on the clip forming surface was 272°C. After 2 s from the beginning of the heating process, with the inductor disconnected at 0.7 s, the temperature was 175°C. Using the prototype injection mold, 1000 molded parts were made in 4 groups with various clip forming surface temperatures. The experimental studies and the obtained results show that the selective induction heating process resulted in a decrease in damage to the housings during disassembly. The dynamics of the induction heating show that higher mold temperatures can bring about a decrease in the molded parts' failure rate with only a slight increase in the cycle time. A higher mold temperature will not influence the cooling time of the mold because only the surface layers of the mold cavities are heated, which results directly from the skin effect of the induction heating process [18].
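As a rough plausibility check of the statement that selective heating adds only slightly to the cycle time, the sketch below estimates the heating time needed to reach each clip-area set point from the single reported data point (about 272°C after 0.7 s). A linear heating rate and a 50°C starting temperature are assumed purely for illustration; the real induction-heating curve is nonlinear and these times are not measured values.

```python
# Rough estimate of induction heating time per set point, assuming a linear
# heating rate derived from the reported 272 °C after 0.7 s and an assumed
# 50 °C starting temperature. Both assumptions are illustrative only.
T_START = 50.0          # assumed initial clip-area surface temperature [°C]
T_REPORTED = 272.0      # reported surface temperature [°C]
T_TIME = 0.7            # time at which T_REPORTED was reached [s]

rate = (T_REPORTED - T_START) / T_TIME  # ~317 °C/s under the linear assumption

for target in (130.0, 150.0, 170.0):
    t = (target - T_START) / rate
    print(f"~{t:.2f} s to reach {target:.0f} °C (linear approximation)")
```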
235484870
s2orc/train
v2
2021-06-19T16:17:21.924Z
2021-06-01T00:00:00.000Z
Cyclic Adenosine Monophosphate Eliminates Sex Differences in Estradiol-Induced Elastin Production from Engineered Dermal Substitutes Lack of adult cells’ ability to produce sufficient amounts of elastin and assemble functional elastic fibers is an issue for creating skin substitutes that closely match native skin properties. The effects of female sex hormones, primarily estrogen, have been studied due to the known effects on elastin post-menopause, thus have primarily included older mostly female populations. In this study, we examined the effects of female sex hormones on the synthesis of elastin by female and male human dermal fibroblasts in engineered dermal substitutes. Differences between the sexes were observed with 17β-estradiol treatment alone stimulating elastin synthesis in female substitutes but not male. TGF-β levels were significantly higher in male dermal substitutes than female dermal substitutes and the levels did not change with 17β-estradiol treatment. The male dermal substitutes had a 1.5-fold increase in cAMP concentration in the presence of 17β-estradiol compared to no hormone controls, while cAMP concentrations remained constant in the female substitutes. When cAMP was added in addition to 17β-estradiol and progesterone in the culture medium, the sex differences were eliminated, and elastin synthesis was upregulated by 2-fold in both male and female dermal substitutes. These conditions alone did not result in functionally significant amounts of elastin or complete elastic fibers. The findings presented provide insights into differences between male and female cells in response to female sex steroid hormones and the involvement of the cAMP pathway in elastin synthesis. Further explorations into the signaling pathways may identify better targets to promote elastic fiber synthesis in skin substitutes. Introduction Mature elastic fibers are a key component of the dermis that is lacking in most skin substitutes and is reduced in aging skin. Elastin is essential to elastic fibers; it is an insoluble protein that makes up approximately 2-4% of the human dermal extracellular matrix [1,2]. It confers both mechanical and cell signaling properties upon tissues in which it is incorporated. As elastin has a half-life of approximately 70 years [3], cells past the early neonatal period generally do not produce elastin [4]. In the cases of injury when elastin is produced, the small amounts synthesized are insufficient to replace what was lost and the fibers are generally disorganized [5]. Loss of elasticity as part of the healthy aging process, [6,7] and disorders such as cutis laxa that occur due to defects in mature elastin [8][9][10][11], demonstrate the need for elastin in engineered skin substitutes to fully recapitulate normal healthy skin. While different methods have been attempted to stimulate elastin synthesis in the skin during wound healing or to incorporate elastin into scaffolds for engineered skin substitutes, none to date have been able to form a fully functional network of mature elastic fibers. This structural omission can lead to mobility and other quality-of-life issues following treatment; therefore, a better understanding of elastin synthesis is necessary to facilitate more favorable outcomes post-injury. 2 of 10 Estrogen is a regulator of elastin synthesis in the skin and other tissues as demonstrated by changes that occur post-menopause when estrogen levels drop [12][13][14][15]. 
The literature has already established that 17β-estradiol (E 2 ), the most potent of the estrogens, can stimulate elastin synthesis in the skin by increasing transforming growth factor-β (TGFβ) [16][17][18]. Similarly, increases in elastin due to culture with E 2 have been found in aortic smooth muscle [19] and adipose-derived mesenchymal stem cells [20]. Application of E 2 also results in increased elastin content in the carotid artery [21], vagina [22], vocal fold [23,24], and skin [25]. Most of these studies utilized animal models or human cells from aged participants. The conditions were often driven by trying to restore normal physiologic levels. While TGF-β has been identified as one mechanism that can lead to increased elastin synthesis, other mechanisms are likely. We hypothesized that cyclic adenosine monophosphate (cAMP) may also play a role. Studies have shown that cAMP does influence elastin synthesis; however, the results are limited and inconsistent [26,27] as elastin synthesis due to the addition of cAMP can vary based on concentration [28]. The current study was undertaken to investigate methods for increasing elastin synthesis in dermal substitutes, which may improve functional outcomes after transplantation to burn wounds or other wounds requiring skin grafts. Our goal was to have more generalizable data by studying the effects on both sexes in a large range of ages. The long-term goal is to identify methods to stimulate elastin synthesis in any adult dermal fibroblasts for skin graft and wound healing applications. Patient Demographics Equal numbers of male and female patient samples were used in this study. There were no significant differences between the sexes in terms of age, race, ethnicity, or biopsy location. The patient demographics for the cells that were used are summarized in Table 1. The data below represents the demographics of the samples used for this study; where replicates were used, the samples were averaged and used as a single value for analysis. Elastin Contents in Engineered Dermal Substitutes due to Steroid Hormone Culture After culture, the elastin content of the tissues was measured and the results were normalized to DNA content. The fold changes in elastin content for treatment versus vehicle control were compared (Figure 1A). The elastin content was significantly increased after culture with E 2 in the female engineered dermal substitutes (EDS) but not in the male EDS, though the amounts of elastin in the EDS were still relatively low, 14.13 ± 2.88 mg/µg DNA in male EDS and 24.11 ± 4.91 mg/µg DNA in female EDS cultured with E 2 . To determine which receptors were involved in the increased elastin content after culture with E 2 , agonists specific to ER-α, ER-β, and GPER-1 were used in the culture medium of the EDS. Agonists for ER-α and ER-β led to significant increases in elastin in the female EDS only (Figure 1B). The amount of elastin after culture with the agonists was similar to that found after treatment with E 2 . This also confirms that the increase in elastin synthesis observed with the addition of E 2 was indeed a function mediated via activation of the estrogen receptors.
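Because the elastin results are reported as DNA-normalized values and as fold changes against the paired vehicle control, the minimal Python sketch below illustrates that bookkeeping for one hypothetical donor; the numbers and helper names are placeholders, not data from the study.

```python
# Minimal sketch of the normalization described above: elastin is divided by the
# DNA content of the same EDS, and the fold change is taken against the paired
# vehicle control. All values below are hypothetical placeholders.
def normalize(elastin, dna):
    """Return DNA-normalized elastin content (elastin per unit DNA)."""
    return elastin / dna

def fold_change(treated_norm, vehicle_norm):
    """Return the fold change of a treated sample over its vehicle control."""
    return treated_norm / vehicle_norm

# One hypothetical donor: (elastin, DNA) for vehicle and E2-treated EDS.
vehicle = normalize(elastin=10.0, dna=1.0)
treated = normalize(elastin=24.0, dna=1.2)
print(f"fold change vs vehicle: {fold_change(treated, vehicle):.2f}")  # prints 2.00
```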
Receptor Densities in the Human Dermal Fibroblasts The initial receptor densities were determined on isolated cells alone prior to being incorporated into the EDS to determine if differences in receptor number may be involved. No significant difference in receptor densities was observed between male and female human dermal fibroblasts (hDF) for any of the receptors (Figure 2). Final receptor densities were also quantified to determine if there were any changes to the receptor densities after culture with E 2 or P 4 . Again, there were no significant differences in the quantity of ER-α after culture with E 2 nor changes in the PR after culture with P 4 in either the male or female EDS. In both cultures, the ER-β receptor densities significantly decreased in both male and female EDS after culture with E 2 alone. This reduction was compared to vehicle controls which contained 6.20 ± 3.07 ng ER-β/µg DNA for male EDS and 6.09 ± 2.96 ng ER-β/µg DNA for female EDS. Cellularity of the EDS E 2 is known to be mitogenic and as such could be a potential confounding factor when studying elastin content [29,30]. To ensure that the differences in the elastin content were due to elevated expression of elastin rather than increased cell number, the DNA content was determined for all EDS.
The elastin content was normalized to DNA content before determining fold change. To look at overall cellularity, the DNA contents were normalized to protein content to account for any differences in tissue size. There were no significant changes in DNA content for any conditions when evaluated as content alone or as fold change compared to vehicle control. The female EDS and male EDS cultured with the vehicle control contained 3.51 ± 1.67 and 3.86 ± 1.31 µg DNA/mg protein content, respectively. When cultured with E 2 , the contents were 3.56 ± 1.18 and 3.72 ± 0.74 µg DNA/mg protein content, which represents a change of 1.01 ± 0.11-fold and 1.03 ± 0.25-fold compared to the vehicle for female and male EDS, respectively. TGF-β1 and cAMP Concentrations in the Culture Medium after Culture with E 2 The TGF-β1 concentration in the spent medium was investigated as a possible mechanism for the differences in elastin synthesis (Figure 3A). There were no significant changes in the TGF-β1 concentration due to E 2 treatment when compared to control for either the female or male EDS. There was a significant difference between the sexes, with the male EDS producing significantly more TGF-β1 than the females, which does not correspond with the changes in elastin synthesis. The cAMP levels in the EDS homogenates were determined (Figure 3B). These data are presented as the fold change of the cAMP concentration normalized to DNA content. The male EDS homogenates had significantly increased cAMP levels in response to E 2 compared to vehicle and the female EDS. The average concentration of the vehicle control for male EDS was 2.66 ± 0.54 fmol/µg DNA and the average for the female EDS was 2.41 ± 0.26 fmol/µg DNA. The presence of cAMP in the male tissues treated with E 2 was significantly higher than that of the females and the vehicle control (n = 5; in Figure 3, * indicates p < 0.05 compared to vehicle control and # indicates p < 0.05 compared to the opposite sex).
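As a quick worked check of how the fold changes above relate to the reported group means, the sketch below takes the ratio of the mean DNA contents (E 2 versus vehicle) for each sex. The published 1.01- and 1.03-fold values are averages of per-donor fold changes, so they are not expected to match these ratios of means exactly; only the means quoted in the text are used here, and everything else is illustrative.

```python
# Ratio of group-mean DNA contents (µg DNA / mg protein) for E2-treated vs
# vehicle EDS, using the means reported in the text. The published fold changes
# (1.01 and 1.03) are averages of per-sample ratios, so small differences from
# these ratios of means are expected.
means = {
    "female": {"vehicle": 3.51, "E2": 3.56},
    "male":   {"vehicle": 3.86, "E2": 3.72},
}

for sex, m in means.items():
    ratio = m["E2"] / m["vehicle"]
    print(f"{sex}: ratio of mean DNA contents = {ratio:.2f}")
# female: ~1.01, male: ~0.96 -- both consistent with "no significant change".
```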
Elastin Content When Exposed to Steroid Hormones and cAMP The addition of cAMP alone did not lead to an increase in elastin synthesis; however, when combined with E 2 and P 4 , elastin synthesis significantly increased in both male and female EDS (Figure 4). Discussion This study has confirmed that E 2 has a stimulatory effect on elastin production in EDS containing female hDF, and further shows this is via activation of the nuclear ERs, primarily ER-β.
The elastogenic effect of E 2 treatment was absent in male EDS and cAMP levels in the EDS in response to E 2 were significantly higher in the male EDS than in the female or vehicle. The sex differences in elastin synthesis could be eliminated when the EDS were cultured with cAMP, E 2, and P 4 . The elastogenic effects of E 2 on hDF have been previously demonstrated in vivo and in vitro [16,18,31]. Several of these studies have also found correlations between TGF-β1 levels and the increased elastin synthesis in response to E 2 [16,32]. Unlike these studies, TGF-β1 levels were not altered in our EDS cultures in response to E 2 , suggesting other pathways are involved. The effects of sex hormones on gene expression are dependent on the presence of sex steroid hormone receptors within the tissue. It is now well recognized that the actions of estrogen are mediated via interaction with the two nuclear estrogen receptors (ERα and ERβ) which function as transcription factors within the nucleus, and/or the membrane estrogen receptor (GPER-1), which activates specific second messenger signaling pathways. Due to the different molecular mechanisms by which estrogen action is modulated upon activation of these receptors, estrogen signaling can be classified as genomic (ER-α/β) or non-genomic (GPER-1). Similar to the ERs, the progesterone receptor functions as a classic nuclear receptor and mediates gene expression via genomic signaling when bound to progesterone. The male and female EDS were shown to express the same initial density of hormone receptors; however, their level of activity in response to the same treatment conditions with respective hormones/agonists was dramatically different between male and female EDS. Our findings indicate that treatment with E 2 increases elastin content selectively in females. A similar effect could be achieved with ER selective agonists, confirming that the increase in elastin synthesis was a result of increased activation of the ERs in response to E 2 in females. When the GPER-1 selective agonist was used, no change in elastin for either female or male EDS was observed. Because the primary effect of GPER activation is stimulation of adenylyl cyclase, which subsequently catalyzes the production of cAMP, we next sought to determine if cAMP levels were altered in EDS in the presence of E 2 . In contrast to the ERs, we found that cAMP levels were significantly increased in response to E 2 in the male EDS, but unaltered in the female compared to vehicle. This indicates that although E 2 treatment did not lead to increased elastin production, E 2 did induce signaling via nongenomic mechanisms in males. Collectively, these findings demonstrate sex-specific differences in signaling mechanisms in which E 2 signals via the nuclear ERs selectively in females, and we hypothesize that it signals via GPER-1 selectively in males. The theory that estrogen can function as a sex-specific biased agonist presents a novel and sophisticated mechanism by which hormones may regulate gene expression, and specifically in the current study, underly the difference in elastin expression observed between the female and male EDS. Furthermore, the differential outcome in elastin synthesis between male and female EDS following culture with E 2 could be attributed to the relative density levels of the GPER and the ERs. While the density of hormone receptors did not differ between sexes, there were significant differences in the density levels among the receptor types. 
If E 2 selectively signals through ERs in females and GPER-1 in males, then the higher receptor density, and thus signal response, would support why E 2 alone could increase elastin in females only. While culture with E 2 or P 4 alone did not alter elastin content in male EDS, there was a non-significant, but observable increase in elastin when cultured in presence of both E 2 and P 4 in the male cells. One explanation for this could be an increase in GPER density, thus signaling, in response to P 4 . We did not measure receptor levels following coculture with E 2 and P 4 , but this notion is supported by results from previous studies that demonstrate progestin up-regulates GPER-1 expression [33]. This P 4 -induced increase in GPER-1 expression paralleled an increase in receptor-bound estrogen, which also correlated to increased cAMP production [34]. When exogenous cAMP was also added to culture treatment with E 2 and P 4 , a further increase in elastin was seen in males, resulting in a significant increase in elastin content compared to vehicle in both male and female EDS. Co-expression of ER-α and/or ER-β with GPER-1 suggests the possibility of interactions between these receptors and likely involves cross-talk between their signaling pathways. The observation that the addition of P 4 with E 2 treatment increased elastin while also eliminating the E 2 -induced increase elastin via ER activation in females raises the possibility of coordinated hormonal control of GPER-1 and the ERs by P 4 in hDF. We believe that this increase in elastin synthesis is likely due to upregulation of GPER-1 in males, and that this response will also be present in other cells expressing these receptors. By estrogen acting as a GPER-1-biased ligand in males, a trend of increasing elastin content could be proposed from the collective findings presented here. While there was an increase in elastin content in our EDS, no elastic fibers were found and the overall production of elastin was still low. Because this was a relatively short-term study, a longer culture may lead to different results, as elastic fibers are not seen in wound repair until weeks to months after injury [35][36][37]. We believe that these results are still significant, as tropoelastin synthesis, the pro-form of elastin, is a limiting factor in elastic fiber production in engineered skin [38]. Additionally, we have discovered a proposed mechanism for sex-specific signaling and a method by which these sex differences could be eliminated. By further investigating the pathways involved, new methods to stimulate the production of elastin by hDF in skin substitutes may be found. Mechanical forces have also been shown to induce elastin synthesis and result in elastic fiber formation in blood vessel substitutes. The combination of mechanical forces and hormones may result in elastic fiber formation and a better understanding of the specific mechanisms involved. A major limitation of this study is the sample size along with the presence of confounding factors. The confounding factors of sex, age, and race in combination with the sample size increased the sample variability. Age is known to have an impact on elastin synthesis, in response to E 2 treatment older hDF synthesize more elastin than younger hDF, especially in females [16,37]. Race is also known to influence elastin synthesis; Black subjects were found to produce more TGF-β and, thus, elastin than their White counterparts [39]. 
This study aimed to find more generalizable data; however, these factors need to be considered. A larger sample size may be needed or, if it can be justified, a smaller range of ages in subjects should be considered. This study demonstrated that there are culture conditions by which we can induce elastin synthesis in both male and female hDF. This is a first step towards determining methods for stimulating elastin synthesis and assembly by hDF used in fabrication of engineered tissue substitutes. By identifying pathways, additional targets can also be identified for wound healing applications, where drugs can be developed and applied topically or through drug eluting scaffolds to promote proper elastin synthesis and assembly. Tissue Sources De-identified, discarded human skin was obtained from elective plastic and reconstructive surgery procedures at the University of Cincinnati Medical Center and Shriners Hospitals for Children-Cincinnati. Isolation and Culture of Primary Dermal Fibroblasts Isolation of hDF was performed following the method described in McFarland et al. [40]. Briefly, the full-thickness skin was incubated overnight at 4 • C in Dispase II (Sigma-Aldrich, St. Louis, MO, USA), after which the epidermis was mechanically separated from the dermis. The dermis was minced and digested with type II collagenase (Worthington, Lakewood, NJ, USA), with periodic agitation. Cells were cultured with medium consisting of Dulbecco's modified eagle medium (DMEM, Gibco, Grand Island, NY, USA) supplemented with 5% v/v fetal bovine serum (FBS, Gibco), 10 ng/mL epidermal growth factor (Peprotech, Rocky Hill, NJ, USA), 0.5 µg/mL hydrocortisone (Sigma), 5 µg/mL insulin (Sigma), 0.1 M L-ascorbic acid 2-phosphate sesquimagnesium salt hydrate (AA2P, Sigma), and 1% v/v antibiotic-antimycotic (Gibco). The hDF were expanded and utilized between passages 1 and 3. The methods described for isolation have been used extensively and results in >98% purity of the dermal fibroblasts; identification for this study was primarily based on morphology. Cells from each batch are archived. Fabrication and Culture of Engineered Dermal Substitutes EDS were fabricated by mixing hDF with fibrinogen (Sigma), 25 U/mL thrombin (Sigma), and tissue culture medium for a final concentration of 2 × 10 6 cells/mL and 2 mg/mL fibrin. After fibrillogenesis, additional tissue culture medium consisting of DMEM supplemented with 10% FBS, 1% antibiotic-antimycotic, 2 mg/mL ε-amino-ηcaproic acid (ACA, EMD Millipore, Burlington, MA, USA), and 50 µg/mL AA2P was added. ACA inhibits plasminogen activation and thereby inhibits fibrin degradation. Over the first 2 weeks, the ACA content of the medium was gradually reduced to a final concentration of 1.25 mg/mL at which time the EDS were rinsed in phosphate-buffered saline (Sigma) and placed in hormone-containing medium for an additional 2 weeks. The hormone medium consisted of phenol red-free DMEM (Gibco) supplemented with 10% charcoal stripped-FBS (Innovative Research, Novi, MI, USA), 1% antibiotic-antimycotic, 1.25 mg/mL ACA, 50 µg/mL AA2P, and the desired hormones, agonists, or appropriate vehicle control. The steroid hormones E 2 (Sigma) and P 4 (Sigma) were solubilized in 10% ethanol (Sigma) and used alone or in combination at concentrations of 10 nM and 100 nM, respectively. Dibutyryl cAMP (Sigma) was used at a concentration of 500 µM. 
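For readers reproducing the hormone-containing medium described above, the short sketch below computes the stock volume needed to reach the stated final concentrations (10 nM E 2 , 100 nM P 4 , 500 µM dibutyryl cAMP). The stock concentrations and the batch volume used here are arbitrary placeholders, not values taken from the study.

```python
# Volume of stock solution needed for a target final concentration:
# V_stock = C_final * V_final / C_stock. Stock concentrations and the 500 mL
# batch size below are hypothetical placeholders for illustration only.
def stock_volume_ul(c_final_molar, v_final_ml, c_stock_molar):
    """Return the stock volume (µL) needed to reach c_final in v_final of medium."""
    v_stock_ml = c_final_molar * v_final_ml / c_stock_molar
    return v_stock_ml * 1000.0

batch_ml = 500.0  # hypothetical batch of hormone medium
targets = {
    "17beta-estradiol": (10e-9, 10e-3),    # final 10 nM, assumed 10 mM stock
    "progesterone":     (100e-9, 10e-3),   # final 100 nM, assumed 10 mM stock
    "dibutyryl cAMP":   (500e-6, 100e-3),  # final 500 µM, assumed 100 mM stock
}

for name, (c_final, c_stock) in targets.items():
    vol = stock_volume_ul(c_final, batch_ml, c_stock)
    print(f"{name}: {vol:.1f} µL of stock per {batch_ml:.0f} mL of medium")
```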
To determine which estrogen receptors were involved, 10 µM propyl pyrazole triol (PPT, Tocris Bioscience, Minneapolis, MN) and 10 µM diarylpropionitrile (DPN, Tocris), or 1 µM G1 (Cayman Chemicals, Ann Arbor, MI, USA) were used. These are agonists for ER-α, ER-β, and GPER-1, respectively. The ER agonists were solubilized in dimethyl sulfoxide (Sigma), which was used as the vehicle control for the agonist studies. The EDS were cultured for an additional 2 weeks with hormones or agonists. Protein and DNA Quantification ELISAs were used to quantify the concentrations of ER-α, ER-β, GPER-1, and the progesterone receptors (all from Aviva Systems Biology, San Diego, CA, USA) and cAMP (Enzo Life Sciences, Inc., Farmingdale, NY, USA) in the tissue homogenates. The spent medium was assayed using an ELISA for TGF-β1 (R&D Systems, Minneapolis, MN, USA).
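The ELISA readouts mentioned above ultimately come from interpolating sample absorbances on a standard curve; the sketch below shows one common way to do this (log-log linear interpolation) with made-up standards and sample values. Kit manufacturers typically recommend a four-parameter logistic fit, so this is only an illustration of the principle, not the analysis used in the study.

```python
import numpy as np

# Minimal sketch of reading concentrations off an ELISA standard curve by
# log-log linear interpolation. Standards and sample absorbances below are
# made-up placeholders; real kits typically recommend a 4-parameter fit.
std_conc = np.array([31.25, 62.5, 125, 250, 500, 1000, 2000])  # pg/mL
std_od = np.array([0.08, 0.15, 0.28, 0.52, 0.95, 1.70, 2.90])  # blank-corrected OD

sample_od = np.array([0.40, 1.10])
sample_conc = np.exp(np.interp(np.log(sample_od), np.log(std_od), np.log(std_conc)))
print([f"{c:.0f} pg/mL" for c in sample_conc])
```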
1245310
s2orc/train
v2
2014-10-01T00:00:00.000Z
2000-05-15T00:00:00.000Z
Role of tetanus neurotoxin insensitive vesicle-associated membrane protein (TI-VAMP) in vesicular transport mediating neurite outgrowth. How vesicular transport participates in neurite outgrowth is still poorly understood. Neurite outgrowth is not sensitive to tetanus neurotoxin thus does not involve synaptobrevin-mediated vesicular transport to the plasma membrane of neurons. Tetanus neurotoxin-insensitive vesicle-associated membrane protein (TI-VAMP) is a vesicle-SNARE (soluble N-ethylmaleimide-sensitive fusion protein [NSF] attachment protein [SNAP] receptor), involved in transport to the apical plasma membrane in epithelial cells, a tetanus neurotoxin-resistant pathway. Here we show that TI-VAMP is essential for vesicular transport-mediating neurite outgrowth in staurosporine-differentiated PC12 cells. The NH2-terminal domain, which precedes the SNARE motif of TI-VAMP, inhibits the association of TI-VAMP with synaptosome-associated protein of 25 kD (SNAP25). Expression of this domain inhibits neurite outgrowth as potently as Botulinum neurotoxin E, which cleaves SNAP25. In contrast, expression of the NH2-terminal deletion mutant of TI-VAMP increases SNARE complex formation and strongly stimulates neurite outgrowth. These results provide the first functional evidence for the role of TI-VAMP in neurite outgrowth and point to its NH2-terminal domain as a key regulator in this process. Introduction Elongation of axon and dendrites, so-called neurite outgrowth, is a crucial event in neuronal differentiation and maturation during development of the nervous system (Prochiantz, 1995). Neurite outgrowth relies primarily on the transport and addition of new components to the plasma membrane but little is known about the vesicle targeting and fusion machinery involved in this process (Futerman and Banker, 1996;Bradke and Dotti, 1997). Membrane traffic can be envisioned as a succession of vesicle budding, maturation, vectorial transport, tethering, docking, and lipid bilayer fusion events. Vesicular transport to and fusion at the plasma membrane, i.e., exocytosis, is responsible for the release of soluble compounds, such as neurotransmitters in the extracellular medium, and for surface expression of plasma membrane proteins and lipids. Overwhelming evidence accumulated over the last years shows that soluble N -ethylmaleimide-sensitive fusion protein (NSF) 1 attachment protein (SNAP) receptors (SNAREs) are key proteins of membrane traffic, most likely involved in lipid bilayer fusion (Weber et al., 1998;Nickel et al., 1999;Parlati et al., 1999;Bock and Scheller, 1999). Clostridial neurotoxins (NTs) carry a proteolytic activity, which selectively cleaves defined SNAREs. Hence they have been extensively used to demonstrate the involvement of NT-sensitive SNAREs in vesicular transport (for review, see Johannes and Galli, 1998). Surprisingly, several exocytotic pathways are resistant to NTs, particularly to tetanus neurotoxin (TeNT), which cleaves several members of the synaptobrevin (also called vesicle-associated membrane protein, VAMP) family of SNAREs. TeNT resistance of the transport to the apical plasma membrane, in epithelial cells, was originally interpreted as the occurrence of SNARE-independent exocyto-sis (Ikonen et al., 1995;Simons and Ikonen, 1997). A breakthrough resulted from the cloning of synaptobrevin-like gene 1 (D'Esposito et al., 1996) and the finding that its product is insensitive to TeNT and Botulinum NTs (BoNTs) B, D, F, and G . 
This protein, called TeNT-insensitive VAMP (TI-VAMP) or VAMP7, forms apical SNARE complexes and mediates fusion of vesicles with the apical plasma membrane Lafont et al., 1999). It is also present in the degradation pathway of EGF in fibroblasts (Advani et al., 1999). TI-VAMP is a likely candidate for vesicle SNARE (v-SNARE) of NT-resistant exocytotic pathways. Interestingly, neurite outgrowth is resistant to TeNT, thus does not involve synaptobrevin and synaptic vesicles (SVs; Osen-Sand et al., 1996;AhnertHilger et al., 1996). Genetic evidences confirm this. First, in fly and nematode, elimination of the neuronal synaptobrevin leads to severe impairment of neurotransmitter release but has no effect on neurite outgrowth (Deitcher et al., 1998;Nonet et al., 1998). Second, neurite outgrowth is normal in a PC12 clone lacking synaptobrevins 1 and 2 (Leoni et al., 1999). In a previous study, we have shown that TI-VAMP-containing vesicular compartment excludes synaptobrevin 2 and other markers of wellcharacterized exocytic and endocytic compartments and it concentrates in the leading edge of axonal and dendritic processes in hippocampal neurons in primary culture (Coco et al., 1999). In this paper, we show that TI-VAMP fulfills the criteria to be the v-SNARE implicated in neurite outgrowth. Overlay Assay The corresponding GST fusion proteins and GST alone were produced and purified as described . 6 ϫ his-tagged SNAP25A (6 ϫ hisSNAP25, bacterial strain was a generous gift from G. Schiavo, ICRF, London, UK) was purified as described (Weber et al., 1998). 6 ϫ hisSNAP25 was run on SDS-PAGE and Western blotted onto Immobilon-P membrane (Millipore). The amount of 6xhisSNAP25 corresponds to 1.25 g/mm of membrane. 4-mm strips of the membrane were cut and incubated in 150 mM NaCl, 5% nonfat dry milk, 50 mM phosphate, pH 7.5, for buffer for 1 h at room temperature. The strips were then incubated with 10 nM of the GST fusion proteins overnight at 4 Њ C in buffer B (3% BSA, 0.1% Tween 20, 20 mM Tris, pH 7.5) containing 1 mM DTT. The strips were rinsed three times in buffer B at room temperature, incubated with anti-GST antibodies in buffer B for 1 h, rinsed in buffer B three times and incubated with alkaline phosphatase-coupled sheep anti-mouse antibodies. The detection was carried out simultaneously for all the strips, for the same time, using a kit from Promega. Cell Transfection PC12 or HeLa cells were trypsinized, washed, and resuspended at a density of 7.5-10 ϫ 10 6 cells/ml in Optimix (Equibio). Electroporation was performed with 10 g DNA in a final volume of 0.8 ml cell suspension using a Gene Pulser II device (Bio-Rad) with one shock at 950 F and 250 V. When GFP was cotransfected with TeNT or BoNTE for monitoring the transfected cells, the plasmid carrying the GFP gene was added at double concentration in order to ensure that all the cells that uptake it also uptake the plasmid carrying the toxin. Immediately after electroporation, cells were washed with 5 ml of complete medium before plating them for immunoprecipitation or immunofluorescence microscopy analysis. 5 h later, the outgrowth medium was removed and replaced with fresh medium containing 100 nM staurosporine (Sigma-Aldrich). PC12 and HeLa cells were processed 24 or 48 h after transfection, respectively. For enhanced expression of the exogenous proteins, 5 mM sodium butyrate was added in all the cases during the last 6 h before processing the cells. 
Antibody Uptake Assay PC12 cells processed as indicated above were incubated in the presence of 5 g/ml anti-GFP antibody in culture medium for 15 min on ice, 15 min on ice then 15 min at 37 Њ C, or 15 min on ice then 60 min at 37 Њ C, 24 h after transfection with GFP-TIVAMP or TIVAMP-GFP. The cells were then washed twice with culture medium and twice with PBS, fixed with PFA, and processed for immunofluorescence. Immunocytochemistry Cells were fixed with 4% PFA and processed for immunofluorescence as previously described (Coco et al., 1999). Optical conventional microscopy was performed on a Leica microscope equipped with a MicroMax CCD camera (Princeton Instruments). Confocal laser scanning microscopy was performed using a TCS confocal microscope (Leica). Images were assembled without modification using Adobe Photoshop. Neurite Outgrowth Assay Cells were fixed 24 h after transfection. Between 20 and 100 randomly chosen fields for each condition were taken with a MicroMax CCD camera (Princeton Instruments), resulting in the analysis of at least 50 GFPpositive cells. A neurite was defined as a thin process longer than 5 m. Using the Metamorph software (Princeton Instruments) two parameters were scored in each case: the number of neurites per cell (from 0 to 4 or more neurites), and the length of each neurite, from the cell body limit until the tip of the process. The obtained data were analyzed for their statistical significance with SigmaStat (SPSS, Inc.). All the recordings and the Metamorph analysis were done in blind. Videomicroscopy Living PC12 cells transfected and treated with staurosporine as described above were placed in complete medium in an appropriate chamber equilibrated at 37 Њ C and 5% CO 2 . Cells were monitored with a MicroMax CCD camera (Princeton Instruments) for as much as 9 h, taking images both through phase contrast and FITC fluorescence every 2 min or every 15 s. Images were assembled using Metamorph (Princeton Instruments). Immunoprecipitation Immunoprecipitation from rat brain was performed using a Triton X-100soluble membrane fraction prepared as follows: two adult rat brains were homogenized with a glass/teflon homogenizer (9 strokes at 900 rpm) in 25 ml of 0.32 M sucrose containing a protease inhibitor cocktail. All the steps were carried out at 4 Њ C. After 10 min centrifugation at 800 g the supernatant was centrifuged at 184,000 g for 1 h, obtaining a cytosolic and a membrane fraction in the supernatant and the pellet, respectively. The pellet was resuspended in TSE (50 mM Tris, pH 8.0, 0.5 mM EDTA, and 150 mM NaCl) containing 1% Triton X-100 for 30 min and finally the insoluble material was removed by centrifugation at 184,000 g for 1 h. Immunoprecipitation with anti-SNAP25 antibodies and mouse control IgGs was performed from 2 mg of proteins from the soluble extract. Immunoprecipitations from transfected PC12 or HeLa cells were performed using a total Triton X-100-soluble fraction prepared as follows: after two washes with cold TSE, cells were lysed for 1 h under continuous shaking with TSE containing 1% Triton X-100 and protease inhibitors. The supernatant resulting from centrifugation at 20,000 g for 30 min was used for immunoprecipitation. After overnight incubation of the brain and cell extracts with the antibodies, 50 l of magnetic beads (Dynabeads; Dynal) were added for 2-4 h. 
The magnetic beads were washed four times with TSE containing 1% Triton X-100, eluted with gel sample buffer and the eluates were boiled for 5 min and run on SDS-PAGE gels (Schagger and von Jagow, 1987). Online Supplemental Material To better visualize GFP-TIVAMP dynamics in staurosporine-differentiated PC12 cells, we advise the reader to consult the supplementary video available online at http://www.jcb.org/cgi/content/full/149/4/889/DC1. This video corresponds to the same GFP-TIVAMP-expressing cell as presented in Fig. 2, shown here during a longer period of time (6 h), with images taken every 2 min (8 images/s). The movie shows the dynamics of GFP-TIVAMP (bottom) in the course of neurite outgrowth (as seen by transmission light [TL], top). Note that most movements of GFP-TIVAMP-containing vesicles are anterograde. TI-VAMP Dynamics in Staurosporine-treated PC12 Cells Differentiation of neurons and nerve growth factor (NGF)-induced neurite outgrowth of PC12 cells take several days (Luckenbill-Edds et al., 1979). On the contrary, staurosporine, a protein kinase inhibitor, induces maximal neurite outgrowth in 24 h of treatment in PC12 cells (Yao et al., 1997). Our neurite outgrowth assay is based on treating PC12 cells with 100 nM staurosporine for 24 h. These experimental conditions do not induce apoptosis in PC12 cells (Yao et al., 1997;Li et al., 1999). Fig. 1 shows that synaptobrevin 2, TI-VAMP, SNAP25, and synaptotagmin I had a normal subcellular localization in staurosporinetreated PC12 cells (Fig. 1). Synaptobrevin 2 concentrated in the perinuclear region and in neuritic tips. TI-VAMPpositive vesicles were scattered throughout the cytoplasm and concentrated at the leading edge of extending neurites. Synaptotagmin I appeared almost exclusively in neurites and varicosities and SNAP25 was present throughout the plasma membrane. This pattern of immunostaining was similar to that observed in NGF-treated PC12 cells Coco et al., 1999), demonstrating the validity of this cellular model to study neurite outgrowth. We produced TI-VAMP carrying a GFP tag fused to the NH 2 -terminal end (GFP-TIVAMP, see Fig. 4 B). Upon transfection of this construct in PC12 cells, GFP staining was indistinguishable from that of endogenous TI-VAMP by confocal microscopy (data not shown), thus discarding the possibility that fusion of the GFP tag could alter TI-VAMP trafficking. We then observed TI-VAMP dynamics by time-lapsed videomicroscopy in staurosporine-treated PC12 cells, which had been previously transfected with GFP-TIVAMP (Fig. 2). Fast growing neurites were recorded every 2 min over periods of 3-9 h, 5 h after the onset of staurosporine treatment. Fig. 2 A displays transmission and fluorescent light images recorded every 24 min during 2 h 2 min (see also accompanying movie). High magnification view of a neurite growing towards the bottom right of the image is shown in the inset. At each time point, GFP-TIVAMP-containing vesicles distributed along this growing process, up to the leading edge of the growth cone ( Fig. 2 A). Most movements of GFP-TIVAMP-containing membranes were anterograde (Fig. 2 B). We then constructed another form of fluorescent TI-VAMP by introducing a GFP tag at the COOH terminus (TIVAMP-GFP, see Fig. 4 B). In this case, the GFP tag is exposed to the extracellular medium after exocytosis of TI-VAMP-containing vesicles. TIVAMP-GFP-transfected PC12 cells were labeled with monoclonal antibodies directed against GFP while they were placed on ice, before fixation. 
The labeling was often concentrated at the tip of the growing neurite (Fig. 3). When the cells were allowed to internalize the antibody at 37°C, we observed a fast, time-dependent uptake. After 15 min at 37°C, the anti-GFP immunoreactivity was seen in peripheral structures, very close to the plasma membrane, with a low degree of overlap with the green signal emitted by the bulk of TIVAMP-GFP. After 60 min, most of the immunoreactivity colocalized with TIVAMP-GFP, indicating that the anti-GFP antibody had reached the entire TIVAMP-GFP compartment. We did not detect any plasma membrane labeling nor GFP antibody internalization in GFP-TIVAMP-transfected or untransfected cells (Fig. 3), thus demonstrating the lack of capture of the antibody by fluid phase uptake. Altogether, these studies demonstrate that the dynamics of TI-VAMP-containing vesicles very closely accompany the growth of neurites and that the protein recycles at the neuritic plasma membrane. Figure 3. TI-VAMP recycles at the neuritic plasma membrane. PC12 cells transfected with TIVAMP-GFP or GFP-TIVAMP and treated with staurosporine for 20 h were placed on ice, incubated with monoclonal antibody anti-GFP (5 µg/ml) for 15 min, and directly fixed (15′/4°C) or further incubated at 37°C for 15 min (+15′/37°C) or 60 min (+60′/37°C) before fixation. Note the dense labeling of the neuritic plasma membrane in the 15′/4°C and +15′/37°C conditions. Full loading of the GFP-TIVAMP compartment is reached in the +60′/37°C condition. Bar, 5 µm. The NH 2 -terminal Domain of TI-VAMP Inhibits SNARE Complex Formation Because TI-VAMP is resistant to NT treatment, new experimental approaches had to be developed to study its function in living cells. Towards this goal, we searched for mutated forms of TI-VAMP that would have impaired SNARE complex formation activity. We first identified SNAP25 as a main physiological target SNARE (t-SNARE) of TI-VAMP. SNAP25, a neuronal plasma membrane Q-SNARE, formed abundant SNARE complexes with TI-VAMP as seen by coimmunoprecipitation experiments performed from brain extracts. Cellubrevin, a v-SNARE that is expressed in glial cells but not in neurons, did not associate with SNAP25, thus showing that the SNARE complexes were not formed during solubilization of brain membranes (Fig. 4 A). Protein sequence analysis of TI-VAMP shows that the protein has an original NH 2 -terminal (Nter) domain of 120 amino acids, located upstream of the coiled-coiled domain (also called R-SNARE motif; Galli et al., 1998; Jahn and Sudhof, 1999). This Nter domain includes three regions predicted to be α helical by Hydrophobic Cluster Analysis (Callebaut et al., 1997) and Jpred (Cuff et al., 1998; data not shown). This is reminiscent of the Nter domain of syntaxin 1, which comprises 3 α helices (Fernandez et al., 1998) and inhibits lipid bilayer fusion. The Nter domain of Sso1p, the yeast homologue of syntaxin 1, inhibits the rate of SNARE complex formation (Nicholson et al., 1998). Similar Nter domains are present in the other plasma membrane but not in intracellular syntaxins (Fernandez et al., 1998), indicating that this function may be specific for exocytosis. This led us to prepare the following GST fusion proteins: full cytoplasmic domain of TI-VAMP (GST-Cyt-TIVAMP), coiled-coiled domain alone (GST-CC-TIVAMP), and Nter domain alone (GST-Nter-TIVAMP; Fig. 4 B), and to measure the binding of the corresponding proteins to immobilized 6xhis-SNAP25 in an overlay assay.
GST-CC-TIVAMP bound immobilized his-SNAP25 very efficiently, whereas GST-Cyt-TIVAMP bound very poorly. As controls, GST alone and GST-Nter-TIVAMP did not bind immobilized his-SNAP25 (Fig. 4 C). To perform in vivo experiments, we constructed the following GFP-tagged forms of TI-VAMP: TI-VAMP deleted of its Nter domain (GFP-ΔNter-TIVAMP) and the Nter domain alone (GFP-Nter-TIVAMP; Fig. 4 B). HeLa cells do not express endogenous SNAP25, so we used them to study the association of SNAP25 with GFP-TIVAMP, GFP-ΔNter-TIVAMP, GFP-Nter-TIVAMP (Fig. 4 B), or GFP, in vivo, after cotransfection. We measured the amount of GFP-tagged proteins coimmunoprecipitating with SNAP25 from Triton X-100-soluble extracts. GFP-ΔNter-TIVAMP formed more abundant SNAP25-containing SNARE complexes than GFP-TIVAMP. As controls, GFP and GFP-Nter-TIVAMP did not bind SNAP25 (Fig. 4 D). Altogether, we propose that the Nter domain exerts an intramolecular inhibition of the SNARE complex formation activity of TI-VAMP's coiled-coiled domain. Figure 4. (A) TI-VAMP forms a complex with SNAP25 in Triton X-100 extract of rat brain. Immunoprecipitation with anti-SNAP25 antibodies was performed from Triton X-100-soluble extract of rat brain as described in Materials and Methods and immunoprecipitated proteins were detected by Western blot analysis with the indicated antibodies (Sb2, synaptobrevin 2; Cb, cellubrevin; U, unbound; B, bound to anti-SNAP25 immunobeads). The bound fraction corresponded to a 65-fold enrichment compared with unbound. The SNAP25-TI-VAMP complex seemed more abundant than the SNAP25-synaptobrevin 2 complex but this may only reflect a lower expression level of TI-VAMP compared with synaptobrevin 2 in the adult brain. Note that cellubrevin did not coimmunoprecipitate with SNAP25. (B) Structure of TI-VAMP and TI-VAMP-derived constructs. TI-VAMP is composed of three domains: the Nter domain (amino acids 1-120), the coiled-coiled domain, also called R-SNARE motif (CC, amino acids 121-180), and one comprising the transmembrane domain and a short luminal domain (TM, amino acids 181 to 220). These domains were tagged with GFP and GST as depicted. (C) The Nter domain of TI-VAMP inhibits binding of TI-VAMP to SNAP25. The binding of GST, GST-Cyt-TIVAMP, GST-Nter-TIVAMP, or GST-CC-TIVAMP was measured by overlay over immobilized 6×his-SNAP25 (indicated by the arrow). GST-CC-TIVAMP bound efficiently to immobilized 6×his-SNAP25. Little binding of GST-Cyt-TIVAMP and none of GST and GST-Nter-TIVAMP was observed. As a positive control, a strip was revealed with anti-6×histidine antibodies. (D) A TI-VAMP mutant lacking the Nter domain coimmunoprecipitates with SNAP25 more efficiently than full-length TI-VAMP. HeLa cells cotransfected with SNAP25 plus GFP-ΔNter-TIVAMP, GFP-TIVAMP, GFP-Nter-TIVAMP, or GFP were lysed and subjected to immunoprecipitation with mouse monoclonal anti-SNAP25 antibodies as described in Materials and Methods. The immunoprecipitated proteins were then detected by Western blot with anti-GFP or anti-SNAP25 rabbit polyclonal antibodies. The bound fraction corresponded to a 100-fold enrichment compared with the starting material (SM) in the case of the GFP blot and to a 10-fold enrichment in the case of the SNAP25 blot. Note that neither GFP-Nter-TIVAMP nor GFP coimmunoprecipitated with SNAP25. TI-VAMP Mediates Neurite Outgrowth An assay was set up to measure the effect of transfection of NTs and TI-VAMP mutants on staurosporine-induced neurite outgrowth in PC12 cells.
First, we showed that when cells were electroporated with two plasmids, virtually all cells expressed both transgenes. This was demonstrated by transfection with GFP-cellubrevin (GFP-Cb) alone, TeNT alone, or both. Cotransfection of TeNT with GFP-Cb resulted in total proteolysis of GFP-Cb (not shown). Second, the activities of transfected TeNT and BoNT E were demonstrated by complete proteolysis of endogenous synaptobrevin 2 and SNAP25, respectively (not shown). In a first set of experiments, PC12 cells were transfected with GFP alone, GFP plus TeNT, GFP plus BoNT E, or GFP-Nter-TIVAMP. The cells were then treated with staurosporine and fixed after 24 h. Fig. 5 A shows a representative field observed in each condition. Neurites from cells transfected with GFP or GFP plus TeNT were similar to neurites from untransfected cells. Neurites from cells transfected with GFP plus BoNT E or GFP-Nter-TIVAMP were fewer and shorter. The length of neurites and the number of neurites per cell were measured in each GFP-positive cell, in each condition. GFP plus TeNT had no effect on neurite number and length compared with GFP alone. BoNT E reduced the number of neurites longer than 20 µm by 45% and strongly increased the number of cells without neurites (Fig. 5, B and C). Expression of the Nter domain of TI-VAMP had an effect that was similar to that of BoNT E. GFP-Nter-TIVAMP reduced the number of neurites longer than 20 µm by 42% and strongly increased the number of cells without neurites (Fig. 5, B and C). The effects of GFP plus BoNT E and GFP-Nter-TIVAMP were statistically different from GFP alone, with P = 0.027 and 0.017 (Student's t test), respectively. The effects of BoNT E and GFP-Nter-TIVAMP were not additive (not shown), indicating that they act on the same exocytotic mechanism. In a different set of experiments, we measured the effect of GFP and of the cytoplasmic domain (Nter and coiled-coiled domains) of TI-VAMP fused to GFP (GFP-Cyt-TIVAMP, Fig. 4 B). GFP-Cyt-TIVAMP (neurites longer than 20 µm: 50.2% ± 0.25) had no effect on neurite length compared with GFP (neurites longer than 20 µm: 50.7% ± 3.5). GFP-Cyt-TIVAMP had no effect on the number of neurites per cell (not shown). These results demonstrated that neurite outgrowth in staurosporine-treated cells is insensitive to TeNT but sensitive to BoNT E, as in neurons. The fact that GFP-Nter-TIVAMP inhibited neurite outgrowth as strongly as BoNT E suggests that TI-VAMP plays a major role in neurite outgrowth. We then checked that GFP-Nter-TIVAMP expression did not have a deleterious effect. Fig. 6 shows a gallery of double immunofluorescence experiments performed in GFP-Nter-TIVAMP-transfected cells. We observed no effect on the localization of syntaxin 1, a plasma membrane SNARE, syntaxin 6, a Golgi apparatus SNARE (Fig. 6), and SNAP25 (not shown) when compared with untransfected or GFP-transfected cells. Synaptobrevin 2 appeared both in the perinuclear region and in the shorter neurites emerging from GFP-Nter-TIVAMP cells (Fig. 6; compare with Fig. 1). These cells showed a lower level of expression of synaptotagmin I. Synaptotagmin I was the vesicular marker most enriched in the tip of the neurites in untransfected cells (Figs. 1 and 6), so our result may suggest that synaptotagmin I reached the neuritic tip by a TI-VAMP-dependent pathway. These results showed that the Nter domain of TI-VAMP had a specific inhibitory effect on neurite outgrowth.
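The neurite-outgrowth readout used above (the percentage of neurites longer than 20 µm per condition, compared between conditions with Student's t test) can be summarized in a few lines of code. The sketch below uses made-up per-experiment values and SciPy's t test as a stand-in for the Metamorph/SigmaStat workflow described in Materials and Methods; none of the numbers are the published data.

```python
from scipy.stats import ttest_ind

# Illustrative re-creation of the neurite-outgrowth readout: percentage of
# neurites longer than 20 µm per independent experiment and condition.
# All numbers below are made-up placeholders, not the published data.
pct_long_neurites = {
    "GFP":             [50.1, 54.2, 47.8],
    "GFP + BoNT E":    [29.5, 26.0, 31.2],
    "GFP-Nter-TIVAMP": [30.8, 27.4, 28.9],
}

control = pct_long_neurites["GFP"]
for condition, values in pct_long_neurites.items():
    if condition == "GFP":
        continue
    t_stat, p_value = ttest_ind(values, control)
    mean = sum(values) / len(values)
    print(f"{condition}: mean = {mean:.1f} % neurites > 20 µm, P = {p_value:.3f} vs GFP")
```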
Figure 6. Morphology of GFP-Nter-TIVAMP-expressing cells. PC12 cells transfected with GFP-Nter-TIVAMP and treated with staurosporine as in Fig. 5 were fixed and processed for double fluorescence by combining direct GFP fluorescence detection with indirect immunofluorescence detection using the indicated antibodies. Representative GFP-Nter-TIVAMP-transfected cells without or with short neurite(s) are shown in horizontal confocal sections. Syntaxin (Stx) 1 and 6 and synaptobrevin 2 (Sb2) have a similar localization in untransfected and in GFP-Nter-TIVAMP-expressing cells. Synaptotagmin I immunoreactivity was weaker in GFP-Nter-TIVAMP-transfected cells than in untransfected cells. Bar, 10 µm. We then tested the effect of GFP-ΔNter-TIVAMP expression and compared it with that of GFP-TIVAMP on neurite outgrowth. We observed the occurrence of unusually long neurites with an increased number of filopodia. Staining of actin filaments with fluorescent phalloidin showed that the neurites of GFP-ΔNter-TIVAMP-transfected cells showed cortical actin localization similar to GFP-TIVAMP-transfected cells (Fig. 7 A). The pattern of staining of tubulin, synaptobrevin 2, synaptotagmin I, SNAP25, and syntaxin 1 was the same in GFP-ΔNter-TIVAMP-transfected as in GFP-TIVAMP-transfected and in untransfected cells (data not shown). The effect of GFP-ΔNter-TIVAMP was quantified as in the case of GFP-Nter-TIVAMP. GFP-ΔNter-TIVAMP expression doubled the number of neurites longer than 30 µm and increased the number of neurites longer than 50 µm five-fold when compared with the expression of GFP-TIVAMP (Fig. 7 B). GFP-TIVAMP had no effect on neurite length and number per cell compared with GFP alone (not shown). We observed no effect of GFP-ΔNter-TIVAMP on the number of neurites per cell (not shown). We checked that GFP-ΔNter-TIVAMP formed more abundant SNARE complexes with endogenous SNAP25 by measuring the amount of SNAP25 and syntaxin 1 that was coimmunoprecipitated with GFP-ΔNter-TIVAMP, GFP-TIVAMP, and GFP-Sb2. The GFP-ΔNter-TIVAMP-SNAP25 complex was 2.5 times more abundant than the GFP-TIVAMP-SNAP25 complex. Accordingly, GFP-ΔNter-TIVAMP coimmunoprecipitated more syntaxin 1 than GFP-TIVAMP (Fig. 7 C). These results showed that a form of TI-VAMP, which had a higher SNARE complex formation activity, strongly enhanced neurite outgrowth. Figure 7 legend (in part): The mean values (±SE) of the percentage of neurites longer than 30 or 50 µm from three independent experiments are shown. *** indicates P < 0.001 (Student's t test). (C) GFP-ΔNter-TIVAMP enhances formation of SNARE complexes. A Triton X-100-soluble extract was prepared from PC12 cells transfected with GFP-TIVAMP, GFP-ΔNter-TIVAMP, or GFP-Sb2 and subjected to overnight immunoprecipitation with monoclonal anti-GFP antibodies. Immunoprecipitated proteins were resolved by SDS-PAGE followed by Western blot analysis with the indicated antibodies. Note the increased coimmunoprecipitation of endogenous SNAP25 with GFP-ΔNter-TIVAMP compared with GFP-TIVAMP. The histogram on the right side shows the quantification of the amount of endogenous SNAP25 immunoprecipitated, normalized to the amount of GFP fusion protein immunoprecipitated, from two independent experiments. ** P < 0.01 (Student's t test). Bar, 10 µm. Discussion This study demonstrates that TI-VAMP-mediated vesicular transport is essential for neurite outgrowth. Expression of the NH 2 -terminal domain of TI-VAMP inhibits neurite outgrowth as strongly as BoNT E, which abolishes the expression of SNAP25, a plasma membrane SNARE partner of TI-VAMP. In contrast, activation of neurite outgrowth and increased SNARE complex formation were observed when the NH 2 -terminal deletion mutant of TI-VAMP was expressed in PC12 cells. A main conclusion from our work is that TI-VAMP is involved in neurite outgrowth in PC12 cells. Our finding that TI-VAMP interacts with SNAP25 in PC12 cells and in the brain is consistent with the involvement of SNAP25 in neurite outgrowth (Osen-Sand et al., 1993). The TI-VAMP-dependent vesicular transport mediating neurite outgrowth in PC12 cells likely corresponds to the outgrowth of axons and dendrites in developing neurons. Indeed, TI-VAMP concentrates in the leading edge of axonal and dendritic growth cones of hippocampal neurons in primary culture (Coco et al., 1999).
In support of this conclusion, preliminary experiments have shown a decreased number of neurites in young hippocampal neurons that were microinjected with anti-TIVAMP antibodies (Coco, S., M. Matteoli, and T. Galli, unpublished observations). Neurite outgrowth may also be very active in differentiated neurons because it may participate in post-synaptic morphological changes related to plasticity and learning (Maletic-Savatic et al., 1999). A role for SNAP25 in neuronal plasticity and learning has been proposed (Catsicas et al., 1994; Boschert et al., 1996). Therefore, the TI-VAMP- and SNAP25-dependent vesicular transport mechanism described here could also mediate activity-dependent exocytosis involved in dendrite elongation and post-synaptic receptor expression at the plasma membrane in mature neurons (Maletic-Savatic et al., 1999; Noel et al., 1999; Shi et al., 1999). This could account for the distribution of TI-VAMP-containing vesicles throughout the dendrites (Coco et al., 1999) and of SNAP25 in the dendritic plasma membrane of mature neurons.

In a previous study, we proposed that TI-VAMP defines a novel tubulovesicular compartment, which excludes SV and endosomal markers, partially overlaps with CD63, and could correspond to a constitutive-like secretory compartment in neuronal cells (Coco et al., 1999). Interestingly, CD63 was recently found in Weibel-Palade bodies, which secrete von Willebrand factor and transport P-selectin, in endothelial cells (Kobayashi et al., 2000). In fibroblasts, TI-VAMP partially overlaps with lysosome-associated membrane protein 1 (LAMP1), and antibodies against TI-VAMP inhibit the degradation of EGF (Advani et al., 1999). These findings, together with the present data showing that TI-VAMP mediates neurite outgrowth, could be in favor of the involvement of TI-VAMP in constitutive-like secretion in neurons, a pathway related to secretory lysosomes in non-neuronal cells. Indeed, some of the constitutive secretory proteins are targeted to immature secretory granules in neuronal cells. They are then removed from maturing granules and sent to immature secretory granule-derived vesicles, together with lysosomal enzymes. Immature secretory granule-derived vesicles reach the plasma membrane and release their content into the extracellular medium, thus defining a constitutive-like secretory pathway in neuronal cells (Thiele et al., 1997). Future studies should aim to determine which cargo proteins and lipids TI-VAMP-containing vesicles transport in neurons.

Figure 7 legend (B and C). The mean values (±SE) of the percentage of neurites longer than 30 or 50 μm from three independent experiments are shown. *** indicates P < 0.001 (Student's t test). (C) GFP-ΔNter-TIVAMP enhances formation of SNARE complexes. A Triton X-100-soluble extract was prepared from PC12 cells transfected with GFP-TIVAMP, GFP-ΔNter-TIVAMP, or GFP-Sb2 and subjected to overnight immunoprecipitation with monoclonal anti-GFP antibodies. Immunoprecipitated proteins were resolved by SDS-PAGE followed by Western blot analysis with the indicated antibodies. Note the increased coimmunoprecipitation of endogenous SNAP25 with GFP-ΔNter-TIVAMP compared with GFP-TIVAMP. The histogram on the right side shows the quantification of the amount of endogenous SNAP25 immunoprecipitated, normalized to the amount of GFP fusion protein immunoprecipitated, from two independent experiments. **P < 0.01 (Student's t test). Bar, 10 μm.
According to our working hypothesis, the protein and lipid map of the TI-VAMP vesicular compartment is likely to identify factors that are important for neurite elongation both in developing and in mature neurons. The purification of the TI-VAMP vesicular compartment will also be important to determine which other proteins are involved in this pathway, particularly rab proteins, which have been shown to play a role in neurite outgrowth (Ayala et al., 1990; Huber et al., 1995).

The mechanism of action of the NH2-terminal domain of TI-VAMP has not yet been fully resolved, but it is reminiscent of the inhibitory effects of the NH2-terminal domains of Sso1p and syntaxin 1. An NH2-terminal deletion mutant of Sso1p has an increased SNARE complex formation rate. The NH2-terminal domain of Sso1p binds to its SNARE motif and inhibits SNARE complex formation in vitro, thus acting as an intramolecular inhibitor of the SNARE motif (Nicholson et al., 1998). Removal of the NH2-terminal domain of syntaxin 1 decreases the SNARE-dependent liposome fusion half time from 40 to 10 min. In this case, no effect is observed on the SNARE complex formation rate. We found that the cytoplasmic domain of TI-VAMP, which comprises the NH2-terminal domain plus the R-SNARE motif, had no effect on neurite outgrowth, whereas the NH2-terminal domain alone strongly inhibited it. This demonstrates that the full cytoplasmic domain is inactive in vivo. The coiled-coil domain of TI-VAMP bound SNAP25 more efficiently than the cytoplasmic domain did in an overlay assay. Therefore, our observations favor a model in which the NH2-terminal domain of TI-VAMP inhibits the capacity of the R-SNARE motif to form SNARE complexes and promote fusion, maybe because the NH2-terminal domain folds over the R-SNARE motif or by a yet unknown mechanism. Cytosolic or membrane proteins can be expected to act on the NH2-terminal domain of TI-VAMP to permit fusion at the maximal rate. The inhibitory effect on neurite outgrowth resulting from expression of the NH2-terminal domain of TI-VAMP could be due to the sequestration of such factor(s). Conversely, the activatory effect of ΔNter-TIVAMP could be explained by the fact that it bypassed control by such factors. Hence, identifying the signal transduction pathway(s) and factors able to activate TI-VAMP will be of crucial importance to further understand how neurite outgrowth is controlled.

Finally, our finding that the NH2-terminal domain of TI-VAMP plays an important function in the control of neurite outgrowth suggests that this protein is a potential target of pharmacological agents that could modulate the activity of TI-VAMP by releasing the inhibition exerted by this domain. Such agents could specifically activate TI-VAMP-mediated exocytosis and thus stimulate neurite outgrowth. Once identified, such drugs could be used in the treatment of nerve trauma such as spinal cord injury.
Anti-Aging Activities of Asparagus Gel Ethanol Extract in Cosmetic Gel Agent for Facial Skin

Asparagus is a vegetable that contains phenolic compounds with antioxidant properties that scavenge aging-triggering free radicals. This study aimed to investigate the components and anti-aging potential of Ethanol Extract from Asparagus (EEA). The study was performed in February 2020 at the Pharmacy Laboratory, University of North Sumatera. The EEA was obtained through maceration using 96% ethanol. An antioxidant assay was performed, and the total phenol and flavonoid contents were determined using the spectroscopic method. Three gel formulas with different concentrations of EEA were prepared (F1: 1.5%, F2: 2.5%, and F3: 3.5%), and F0 was used as control. The parameters evaluated were moisture, oil content, texture, collagen, wrinkle, pigment, sensitivity, and pore. The results showed that asparagus had moderate antioxidant activity (IC50: 118.992 µg/mL) with total phenol and flavonoid contents of 15.9407 mg GAE/g and 3.2286 mg QE/g extract, respectively. The highest anti-aging activity was seen in F3 (3.5%), followed by F2 (2.5%) and F1 (1.5%). The percentages of moisture, oil, texture, collagen, wrinkle, spot, sensitivity, and pore recovery were found to be 40.15%, 49.73%, 71.76%, 17.70%, 70.93%, 49.34%, 42.56%, and 25.31%, respectively. Hence, it can be concluded that the EEA gel at the highest concentration (3.5%) has a high content of phenols and flavonoids, which can improve skin moisture, oil content, texture, collagen, wrinkles, spots, sensitivity, and pores, thereby promoting anti-aging activity.

Introduction

Aging is a natural process that cannot be avoided by humans due to anatomical and physiological damage starting from blood vessels and other organs through to the skin. The extrinsic aging (photoaging) of the skin is mainly affected by ultraviolet (UV) rays, and exposure to UV radiation from sunlight is the biggest factor, contributing to 90% of premature aging symptoms. The thinning of skin layers due to sun exposure and clumping of pigments (melanocyte cells) causes spots and dry skin. 1 Photoaging causes 80% of skin aging problems by activating cytokines and metalloprotein collagenases and stimulating free radicals. Collagen and elastin (ELN) form cross-links in the skin, causing loss of elasticity, thinning of the epidermal layer, and wrinkles. 2 Furthermore, collagen is the largest part of the dermis, contributing about 70% of the skin dry mass; hence its damage is a major cause of wrinkling, loss of elasticity, and sagging. The two main regulators of collagen formation by fibroblast cells are transforming growth factor (TGF-β) and activator protein (AP-1). TGF-β is a cytokine that stimulates collagen production, while AP-1 is a transcription factor that inhibits collagen production and stimulates collagen breakdown. Intrinsic aging plays a role in decreasing TGF-β and in the accumulation of reactive oxygen species (ROS), while extrinsic aging, which is mainly caused by UV radiation (photoaging), increases ROS production in the dermis layer. Furthermore, ROS triggers a series of chain molecular reactions, thereby increasing the formation of AP-1, which stimulates transcription of the matrix metallopeptidase (MMP) enzyme involved in collagen degradation and inhibits collagen synthesis by blocking the type 2 receptors of TGF-β. 3
Antioxidants can prevent aging by acting as an antidote to free radicals generated by photoaging, working synergistically to protect cells and organ systems from damage. Asparagus contains phenolic compounds with antioxidant properties that cleanse toxic and acne-triggering substances resulting from photoaging on the face. 4,5 Natural ingredients have recognized benefits for dermatologic disorders and have been used traditionally over the last 20 years. These active natural ingredients can be formulated into cosmetics that can be used safely and have fewer side effects than synthetic cosmetics. 6 One such category of cosmetic preparation is a gel, which is non-sticky, easy to wash off, leaves no oil on the skin, and has stable viscosity during storage. Skincare for aging problems is best carried out at the earliest opportunity for healthy and well-maintained facial skin. 7 Total phenolic and total flavonoid contents are positively correlated with antioxidant activity. 8 Therefore, this study aimed to investigate the antioxidant activity, total phenolic content, and flavonoid content of the ethanol extract of asparagus (EEA) and its anti-aging potential.

Methods

This study was performed in February 2020 at the Pharmacy Laboratory, University of North Sumatera. It evaluated the antioxidant activity of the EEA by the DPPH (2,2-diphenyl-1-picrylhydrazyl) assay, as well as the total phenolic and flavonoid contents by the Folin-Ciocalteau and aluminum chloride methods, respectively. Furthermore, EEA was formulated into a gel (F0 = gel base, F1 = gel of 1.5% EEA, F2 = gel of 2.5% EEA, and F3 = gel of 3.5% EEA), and the efficacy of the gel was evaluated in a double-blind clinical trial with 12 volunteers who had been informed about the purpose and procedure of this study. The volunteers were selected according to inclusion and exclusion criteria. Inclusion criteria were healthy women or men, of productive age (20-25 years), with no history of allergy-related illness, and willing to receive treatment using the gel for 4 weeks, twice daily (day and night). Exclusion criteria were irritation from the gel, a history of allergy-related illness, and being in the care of another dermatologist. The evaluated parameters included moisture, oil content, texture, collagen, wrinkles, pigments, sensitivity, and pores, which were assessed every week for a month. This clinical trial procedure was approved by the Health Research Ethics Commission, Universitas Prima Indonesia, with Letter No. 022/KEPK/UNPRI/I/2020.

Preparation of EEA began with washing and drying the asparagus at room temperature. The dried simplicia was then blended into simplicia powder and extracted. The extraction was performed by maceration using 96% ethanol for 7 days at room temperature. The filtrate from the maceration was evaporated with a rotary evaporator to obtain a concentrated form of EEA. 9 The obtained EEA underwent phytochemical screening according to the Indonesian Herbal Pharmacopeia. Phytochemicals were screened with reagents such as Dragendorff, Mayer, and Bouchardat for alkaloids, AlCl3 for flavonoids, FeCl3 for tannins, Liebermann-Burchard for steroids/triterpenoids, sulfuric acid for saponins, the Shinoda test (magnesium, concentrated hydrochloric acid, amyl alcohol), and a glycoside test with glacial acetic acid, ferric chloride, and sulfuric acid. 10 After that, the DPPH scavenging assay was performed on EEA. An amount of 2 mL of DPPH solution (200 µg/mL in methanol) was mixed with 0 mL, 0.4 mL, 0.8 mL, 1.2 mL, and 2 mL of EEA (500 µg/mL), respectively, to form concentrations of 0, 60, 70, 80, and 90 µg/mL.
The comparison used was vitamin C, at concentrations of 2, 4, 6, and 8 µg/mL. Each solution was vortexed and incubated in the dark at room temperature for 30 minutes, and the absorbance was measured at a wavelength of 516 nm against the blank. Furthermore, the DPPH radical inhibition of the sample (%) was calculated by dividing the difference between the blank absorbance and the sample absorbance by the blank absorbance. 11 The obtained percentages of free radical scavenging activity were then plotted in a linear regression model to determine the IC50 for DPPH scavenging. Moreover, total phenol content was determined using a gallic acid standard, and the calibration curve used concentrations of 0, 31.25, 62.5, 125, and 250 µg/mL gallic acid. The ethanol extract of asparagus was prepared at a concentration of 1000 ppm, and 0.1 mL of each solution was mixed with 7.9 mL of distilled water and 0.5 mL of Folin-Ciocalteau reagent and vortexed for ±1 minute. Then, 1.5 mL of 20% Na2CO3 was added to the solution, which was incubated for 90 minutes. The absorbance was measured at a wavelength of 775 nm. 1 Meanwhile, total flavonoid content was determined using a quercetin standard at concentrations of 0, 6, 14.5, 19, and 23.5 µg/mL. The ethanol extract of asparagus was prepared at a concentration of 1000 ppm, and 2 mL of each solution was mixed with 0.1 mL of AlCl3, 0.1 mL of CH3COONa, and 2.8 mL of distilled water and incubated for 40 minutes. The absorbance was measured at a wavelength of 440 nm. 1,12 Total phenolic content and total flavonoid content were determined by multiplying the concentration (µg/mL) by the volume and the dilution factor, then dividing by the mass of the sample.

The obtained EEA was also formulated as a gel; the formulation in this study followed a standard formulation, as shown in Table 1. Hydroxypropyl methylcellulose (HPMC) was dispersed in hot water at a ratio of 1:20 at a temperature of 70°C and left for about 30 minutes. Propylene glycol, glycerin, and methylparaben, previously dissolved in hot distilled water, were then added. The mixture was ground until homogeneous, and the remaining water was added. 13 The active ingredient EEA at concentrations of 0 (F0), 1.5% (F1), 2.5% (F2), and 3.5% (F3) was added gradually into the gel base while grinding until homogeneous. Moreover, before gel application, a skin irritation test was conducted on the volunteers' skin. The gel was applied to the forearm over a diameter of ±3 cm, and changes in the form of redness, itching, and skin roughening were observed for 24 hours. 14 A skin analyzer measured the anti-aging efficacy. All data were analyzed using SPSS (Statistical Product and Service Solution) 21. The results of the phytochemical screening were expressed on a qualitative scale. The antioxidant assay, total phenolic content, and total flavonoid content were expressed as µg/mL, mg GAE/g extract, and mg QE/g extract, respectively. The anti-aging parameters were expressed as percentages (%) and analyzed by the Kruskal-Wallis test.

Results

Ethanol Extract of Asparagus (EEA) contains several phytochemicals as secondary metabolites, including glycosides, steroids/triterpenoids, flavonoids, tannins, and saponins. The results of the antioxidant assay of EEA as the sample and vitamin C as the positive control by the DPPH method were expressed as IC50 (µg/mL) and are shown in Table 2. The IC50 of EEA was 118.992 µg/mL, while vitamin C as a positive control had an IC50 value of 2.693 µg/mL.
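For reference, the quantities described in the Methods can be written out explicitly. This is a minimal sketch of the standard relations; the symbols A_blank, A_sample, a, b, C, V, DF, and m are our own notation and are not taken from the original report.

\[ \%\,\mathrm{inhibition} \;=\; \frac{A_{\mathrm{blank}} - A_{\mathrm{sample}}}{A_{\mathrm{blank}}} \times 100 \]

\[ \mathrm{IC}_{50} \;=\; \frac{50 - b}{a}, \qquad \text{where } y = a x + b \text{ is the linear regression of } \%\,\mathrm{inhibition}\ (y) \text{ on concentration } (x) \]

\[ \mathrm{content}\ (\mathrm{mg\ GAE/g\ or\ mg\ QE/g}) \;=\; \frac{C \times V \times DF}{m} \]

where C is the concentration read from the calibration curve (converted from µg/mL to mg/mL), V is the sample volume (mL), DF the dilution factor, and m the mass of extract (g).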
This indicated that the components in EEA had moderate antioxidant activity (IC50 between 100 and 150 μg/mL), while vitamin C had very strong antioxidant activity (IC50 less than 50 μg/mL). 9 Moreover, the total phenolic and total flavonoid contents of EEA were determined and expressed as gallic acid equivalents (GAE) and quercetin equivalents (QE) per gram of extract, respectively. The total phenolic and flavonoid contents of EEA are shown in Table 3.

After that, EEA was formulated into a gel for the clinical trial. The irritation test showed no sign of a reaction, such as redness, itching, or skin roughness, among the volunteers. Therefore, the EEA gel preparations are safe to use. Hence, all volunteers could apply the gel for 4 weeks; the analysis of the parameters is shown in Table 4, and the percentage of recovery after using the EEA gel is shown in the Figure. Based on the Figure, the percentages of recovery for all parameters showed a similar pattern. The F3 formulation showed the highest percentage of recovery for moisture (40.1%), oil content (49.7%), texture (71.7%), collagen (17.70%), wrinkles (70.93%), spots (49.3%), sensitivity (42.6%), and pores (25.3%) compared with the other formulations, whereas the lowest percentage of recovery was seen with F0.

Discussion

The results of this study indicate various pharmacological properties: the gel not only increased skin moisture and collagen but also improved oil content, texture, wrinkles, spots, sensitivity, and pores. These pharmacological properties are attributable to the moderate antioxidant activity of the Ethanol Extract of Asparagus (EEA), demonstrated by DPPH scavenging. The antioxidant activity of EEA was supported by its total phenolic and flavonoid contents, which were 15.9407 mg GAE/g extract and 3.2286 mg QE/g extract, respectively. The in vitro study thus showed the potential anti-aging properties of EEA. Moreover, EEA was evaluated as a gel preparation in three different formulations (F1, F2, and F3), and these EEA gels also showed an improvement in the facial skin parameters after 4 weeks (p-value < 0.05).

Gel penetration through the skin occurs by percutaneous absorption into the bloodstream. Drug penetration through the skin occurs via the transdermal route (stratum corneum) and the transfollicular route (sweat and sebum gland pores). Propylene glycol is an enhancer that interacts with stratum corneum lipids and water to increase hydration in skin tissue, thereby increasing the delivery of hydrophilic and lipophilic drugs, influencing drug solubility in the stratum corneum, and affecting the partition of the carrier into the membrane. In addition, increased penetration in gel preparations accelerates the effectiveness of the medicinal ingredients. 15 The antioxidant compounds in asparagus were flavonoids and tannins, with moderate antioxidant activity as indicated by the IC50 value of the ethanol extract of 118.992 µg/mL, which was supported by its phenol and flavonoid content. Phenolic compounds are a source of natural antioxidants. Phenol and flavonoid compounds contribute linearly to antioxidant activity; therefore, the higher their levels, the better the antioxidant. 16 Before applying the gel, the volunteers' skin condition was dry epidermis-dermis, oily, perfect texture, sufficient collagen fiber, no wrinkles, spots, sensitive, and no serious pores.
After 4 weeks of gel application, there was an improvement in skin condition, which became moist, with balanced oil, perfect texture, sufficient collagen fiber, no wrinkles, normal spots, normal facial skin sensitivity, and a decrease in pore size. Asparagus keeps facial skin moist by maintaining sebum production in the stratum corneum and removes fat from oily skin. 2,17 The use of the 3.5% asparagus ethanol extract gel (F3) provided the best effect on all parameters (Figure 2). Antioxidants and flavonoids work to stimulate the formation and production of skin collagen and prevent collagen degradation. In addition, they maintain and improve facial skin texture by preventing the increase in ROS in the dermis layer, thereby inhibiting the formation of AP-1 and the MMP enzyme. Increased collagen maintains skin elasticity, flexibility, and smoothness. 18 The development of asparagus in a gel preparation based on hydroxypropyl methylcellulose, a cellulose derivative, increases the stimulation of growth factors such as epidermal growth factor (EGF), fibroblast growth factor (FGF), and platelet-derived growth factor (PDGF). Growth factors play an important role in regulating normal growth and development by stimulating cell division, maintaining the tissue repair phase, accelerating skin regeneration, and stimulating collagen formation. 16,19 Furthermore, flavonoids from EEA inhibit the pigmentation process, or the appearance of spots, by directly inhibiting tyrosinase activity in the melanogenesis process. Antioxidants can protect facial skin from overreactions that interfere with its health and can prevent irritation and allergies. In addition, enlarged pores may be reduced by regular exfoliation and collagen formation, which improves the skin. 20

Hence, it can be concluded that asparagus has moderate antioxidant activity, shown by DPPH scavenging, owing to the presence of phenolic and flavonoid compounds. Moreover, the highest concentration of EEA gel (3.5%) showed the highest percentage of improvement in skin moisture, oil content, texture, collagen, wrinkles, spots, sensitivity, and pores, which promotes anti-aging activity.
Sarcoidosis Presenting as Massive Splenic Infarction

Sarcoidosis is a multisystem granulomatous disease of unknown aetiology. Granulomatous inflammation involving the spleen is common and associated with splenomegaly. However, massive splenomegaly is a rare occurrence. Infrequently, massive splenomegaly can result in splenic infarction. Massive splenic infarction in sarcoidosis has, to our knowledge, not been previously reported. We present a case of a woman with massive splenic infarction and sarcoidosis confirmed by granulomatous inflammation of the liver.

Introduction

Sarcoidosis is an idiopathic chronic systemic granulomatous disease. It commonly presents with pulmonary granulomatous disease, but extrapulmonary manifestations also occur with varying frequency. Granulomatous infiltration of the spleen is common in sarcoidosis but is often asymptomatic. Splenomegaly is unusual, and massive splenomegaly leading to splenic infarction is very rare.

Case Presentation

A 53-year-old lady was admitted under the surgical team with a six-month history of progressive left-sided abdominal pain, associated with anorexia and lethargy. She reported no other systemic symptoms of note. Her past medical history consisted of mild depression and recurrent sinusitis over a 20-year period. Clinical examination revealed skin pallor and a palpable left upper quadrant mass. Investigations including a full blood count and biochemical profile were normal. Her chest radiograph was normal. An abdominal ultrasound revealed a well-defined heterogeneous solid mass with flecks of calcification in the left upper quadrant. An abdominal CT scan revealed a large, well-defined, solid, thick-walled necrotic mass with calcific foci within the left upper quadrant. The mass measured approximately 13 cm by 11 cm (Figure 1). There was no lymphadenopathy, nor any focal liver abnormality. The patient underwent a laparotomy, which revealed that the mass was a grossly enlarged spleen. A splenectomy was performed. The liver appeared macroscopically abnormal, and a liver biopsy was undertaken. The patient had an unremarkable postoperative recovery. Histological examination of the spleen revealed massive global parenchymal infarction with some periarteriolar fibrosis. The liver biopsy showed multiple noncaseating granulomas. Further histological assessment excluded any evidence of fungal or mycobacterial infection. Given the appearance of florid granulomatous change in the liver, the patient was referred to our rheumatology department. Further investigations showed a normal serum ACE, a negative TB ELISPOT, and negative ANA, ENA, and ANCA. Normal ferritin and serum immunoglobulin levels and negative serology for hepatitis A, B, and C were noted. Serology for brucellosis, histoplasmosis, and leishmaniasis was negative. A gallium scan revealed intense uptake solely within the liver. A diagnosis of sarcoidosis of the liver with associated massive splenic infarction was made. The patient was not commenced on any treatment. She is under regular review and remains asymptomatic two years after presentation. Follow-up ultrasound scans of her liver and liver function tests remain normal.

Discussion

Sarcoidosis is a multisystem granulomatous disease of unknown aetiology, most commonly affecting the lungs (in 90% of cases) and lymph nodes (particularly intrathoracic), followed by the liver (50-80%), skin (25%), and eyes (11-83%); other organs are less frequently involved [1].
Granulomatous infiltration of the spleen is common in sarcoidosis, but splenic enlargement is unusual and massive splenomegaly is rare. In a large review by Fordice et al. of 6074 cases of sarcoidosis, 628 patients had quantifiable splenomegaly and only 3% had massive splenomegaly [2]. A few cases of massive splenomegaly in sarcoidosis have been reported in the literature [3-5]. Thirty to 60% of cases of splenic involvement in sarcoidosis are asymptomatic. Computed tomography of the abdomen is very useful in evaluating splenic sarcoidosis, which typically manifests as homogeneous organomegaly. However, there are a few reported cases of nodular splenic sarcoidosis, and still fewer with massive splenomegaly and low-attenuation nodules [6]. Scintigraphy with gallium-67 provides a better way of assessing granulomatous lesions in sarcoidosis not revealed by traditional methods of investigation. A study by Beaumont et al. evaluated the usefulness of gallium-67 scanning in 54 patients with sarcoidosis. They found that the gallium-67 scan was effective in detecting and assessing lesions, particularly those affecting the mediastinum, spleen, and salivary glands [7]. Splenic infarction occurs as a result of vascular compromise to the organ. Common causes include thromboembolism and infiltrative haematological diseases that cause congestion of the splenic circulation with abnormal cells. The mechanism of massive splenic infarction in our patient is unknown. Patients with sarcoidosis have been shown to have impaired vascular endothelial function and increased arterial stiffness, according to a study by Siasos et al. of eighty-seven patients with sarcoidosis [8]. Conceivably, chronic perivascular inflammation resulting in the periarteriolar fibrosis seen on histology may have compromised the vascular supply, leading to visceral ischaemia and infarction. Our case demonstrates a rare presentation of massive splenic infarction in a patient with liver sarcoidosis. This contributes to the heterogeneity of clinical manifestations of this disease of unknown aetiology.
Coding Dancing Figural Animations: Mathematical Meaning-Making Through Transitions Within and Beyond a Digital Resource We investigate three 8th-grade students’ mathematical meanings developed in the context of using linked representations to generate animations of figural models tuned in musical rhythm in “MaLT2,” a programmable Turtle Geometry in 3D resource affording dynamic manipulation of variable values. We adopted a modified version of the UDGS (Using, Discriminating, Generalizing, Synthesizing) model, introduced by Hoyles and Noss in 1987, in order to frame and analyze students’ mathematical meaning-making process involving setting out goals; posing conjectures; using mathematical ideas to test them; and exploring, generalizing, and expanding these ideas. This dynamic process was contextualized and connected to a flow of two different types of transitions: (1) transitions within the different representations of MaLT2 and (2) transitions beyond MaLT2, among the representational contexts of the digital microworld, artistic ideas, and abstract mathematics. In our analysis, we use this theoretical concept to trace the kind of mathematical meanings connected to multidisciplinary notions embedded in dance and music, such as synchronicity, symmetry, periodicity, and harmony, emerging from this learning context. We also look into the way these mathematical meanings were gradually evolved from being implicitly integrated in digital and artistic ideas to being reflected on and generalized. In this article, we discuss 8th-grade (aged 14-15) students' learning while they were engaged in an activity involving open artistic creations with a digital resource. We studied the way their mathematical meanings evolved through a flow of transitions at two levels: (a) among different representational contexts of this specific resource which integrates Logo programming, 3D Turtle Geometry, and dynamic manipulation of variable values; (b) among digital, artistic, and mathematical representations of ideas around music and dance. To do this, we found ourselves needing to disengage from assumptions inherent in some pre-virtual cultures regarding stagnant curriculum structures, lack of student agency, and the process of mathematical meaning-making. The emergence of digital technology in education has enabled and cultivated new epistemological and pedagogical perspectives on learning and doing mathematics. It has opened up empowering possibilities for students to engage with mathematical thinking and construct mathematical meanings (diSessa, 2018;Drijvers et al., 2010;Kynigos, 2015). Kaput (1999) has argued that digital technology heralded a new kind of culture in education-the "virtual culture"-that provides qualitatively different affordances than former kinds. This novel culture brought out new representational forms and allowed students to engage in creative approaches within abstract mathematical situations by incorporating socio-cultural contexts (Kaput, 1989;Kynigos & Diamantidis, 2021;Shaffer & Kaput, 1998). diSessa (2018) discusses the way computers could fundamentally change perceiving and learning of mathematics by means of a profoundly influential "computational literacy" that would outweigh textual literacies in the near future. 
Nevertheless, as supported by relevant reviews, the wave of transformation raised by this new virtual computational paradigm in education has been confronted with a platonic vision of mathematical knowledge, which is strictly organized in the stabilized corpus of school curriculum (Bray & Tangney, 2017;Forsström & Kaufmann, 2018;Hegedus & Moreno-Armella, 2014;Hoyles & Noss, 2003;Kynigos, 2019). This confrontational situation regarding the challenges posed by the virtual culture led us to adopt an approach allowing considerations for restructurations of established mathematical curriculum infrastructures and a deliberate reconsideration of learning and teaching possibilities. Hence, we recognize the need to investigate the nature of mathematics emerging from students' own use of digital technology, without taking curriculum infrastructures for granted. We consider the significance of the adaptation of curriculum structures to data emerging from this new transformative wave and not the other way around (Hoyles et al., 2020;Wilensky & Papert, 2010). Designing activities for students should focus on fostering a progressive intercourse between the digital medium and the learner, embracing new ways of using, and expressing mathematics (Hegedus & Moreno-Armella, 2014). For this study, we framed this type of intercourse between students and a digital resource, in terms of transitioning between different representations of mathematized situations. The components of music and dance were integrated into our design for various reasons. There is a wide range of existing research that supports the pedagogical affordances of embedding musical concepts such as rhythm, harmony, melody, and tempo-which own a deep mathematical status while providing means of application for abstract mathematical objects-into mathematical learning contexts (Bamberger, 2013;Bamberger & diSessa, 2003;Courey et al., 2012;da Silva, 2020). However, studies focusing on the potentiality of the combination of digital resources and artistic contexts in mathematics education remain limited. We suggest that this integration is a fertile ground for investigating students' mathematical meaningmaking process. In addition, including music and dance aimed at provoking the transitioning processes and, at the same time, bringing out an aspect of human culture from within mathematics. In this way, the system of representations would be extended by linkages between abstract mathematical concepts and an external practical-artistic context closer to students' personal sensibilities. Finally, we developed a theoretical construct, inextricably linked with the digital resource, that we based on the UDGS model (Hoyles & Noss, 1987). Our goal was to capture the potential impact of transitioning among different representational contexts on students' meaning-making process as well as to investigate the type of their mathematical meanings shaped within this learning context. Building on Constructionist Theoretical Constructs: Revisiting the UDGS Model The theoretical foundation of this study relies on some long-standing ideas originating from the pedagogical movement of constructionism (Kynigos, 2015;Papert & Harel, 1991). 
The main theoretical concepts that we adopted have their roots back in the 1980s and 1990s, when Richard Noss and Celia Hoyles made some first attempts to conceptualize and describe students' mathematical meaning-making processes while using expressive computational tools, such as Logo programming (Hoyles & Noss, 1987, 1992Noss & Hoyles, 1996). They were based on the key principle of constructionism, according to which students' mathematical ideas are both shared and progressively shaped while interacting with technological tools in order to construct or tinker digital artefacts within a "microworld" (Hoyles & Noss, 1992;Kynigos, 2007;Papert, 1980). The concept of microworld was introduced by Papert as a self-contained computational world where students can "learn to transfer habits of exploration from their personal lives to the formal domain of scientific construction" (Papert, 1980, p. 177). Microworlds were later on described as "computational environments embedding a coherent set of scientific concepts and relations designed so that, with an appropriate set of tasks and pedagogy, students can engage in exploration and construction activity rich in the generation of meaning" (Kynigos, 2007, p. 337). Due to their highly editable nature, microworlds can provide viewable and analyzable links between students' interactions within them and their meanings of mathematical concepts in use. In 1987, Noss and Hoyles introduced the UDGS theoretical model and provided an articulated way to frame the progressive development of students' mathematical meanings in terms of their activity within a microworld. This model was used for conceptualizing the phases of mathematical meaning-making, starting from an empirical, intuitive level and progressively evolving to conscious appreciation of generalized relationships among the mathematical concepts in use along with the digital tools (Hoyles & Noss, 1992;Kynigos, 2015;Laborde et al., 2006). In this framework, a mathematical meaning is the way that a student uses and thinks of a certain mathematical concept. The UDGS model ( Fig. 1) involves the following dynamically related components of: • Using: where mathematical concepts are used-without much attention to their actual meaning-as tools for functional purposes to achieve particular goals. • Discriminating: where the different parts/elements of mathematics used as a tool are progressively distinguished and become explicit. • Generalizing: where mathematical patterns in properties or relations of the tools are consciously extended and expressed. • Synthesizing: where the generalized ideas used in the tools are consciously integrated with other contexts of application or representation-including pure mathematical ones (e.g., algebraic expression in paper-and-pencil). According to Hoyles and Noss (1987), students' meaning-making process can be mapped to moving towards all phases of the UDGS model through a circle of conjecturing, testing, and debugging actions. They start from an empirical, instinctive mode of activity, where mathematical concepts are implicitly used and transferring to a reflective one. This shift in the activity modes is signified by progressively discriminating the mathematical relations and concepts underpinning the behavior of the tools and generalizing them locally to the situations from which they emerged. 
At the final phase of this model, students get to synthesize their generalizations in different representational registers outside the specific technology used, which can be either different contexts of application, technological or physical, or abstract, pure mathematics, i.e., devoid of any applicable context. The main learning goal in this context is to raise the implicit mathematical concepts and relations to conscious awareness by engaging in a series of the UDGS phases through a bottom-up trail. The UDGS model fits rather well in technology-based environments where students are both engaged in the construction of executable symbolic representations and in receiving informative feedback, and so progress naturally across the UDGS stages (Hoyles, 1986; Hoyles & Noss, 1987). In addition, Hoyles and Noss (1987) argued that programming environments imply a need for formalization, due to their symbolic nature, which can foster the transition from using to discriminating or from using to generalizing.

Fig. 1 The UDGS model as presented by Hoyles and Noss (1987)

Even though this model provided a strong theoretical tool for analyzing mathematical meaning-making, the cycle of research attention it accrued was rather short. It was before long replaced by different models, whose emphasis diverged from analyzing the shaping of mathematical meanings and, instead, gained a multidisciplinary or programming-oriented viewpoint for analysis (Benton et al., 2016). This could be ascribed to technological limitations of that era, which resulted in the temporal distancing of mathematics education away from programming (DeJarnette, 2019). As a consequence, the challenge of "transfer" from the computer-based situation to abstract mathematics remained unresolved, while the way that students move towards reflective phases of acting within a digital resource was not considerably empirically supported. As computational thinking and coding have recently re-emerged as wide-ranging educational trends, we argue that revisiting this long-standing challenge and reflecting on ideas around its theoretical foundations within contexts of new, emerging technologies and novel educational settings are more relevant than ever (DeJarnette, 2019; diSessa, 2018; Kynigos, 2015). Therefore, we consider it important to look deep into the way mathematical meanings are formed and evolved within novel technological programming-based contexts that afford a higher level of expressivity and multiple interconnected representations of mathematical concepts.

The UDGS model seemed appropriate for framing our research objectives, since mathematical meanings are placed in the center of analysis. However, when we started analyzing the collected data, we found ourselves needing a modified version of it, one that would help us convey students' dynamic process of mathematical meaning-making, as influenced and supported by the integration of different contexts of application and representation of mathematical ideas, within and outside the digital resource of MaLT2. Thus, we made some conceptual modifications to adjust it to the results of our study and the digital environment used (Fig. 2). First of all, we incorporated a type of "fluidity" inside the model, in the sense that students can navigate among each phase of the model multiple times. The UDGS model, as presented by Hoyles and Noss (1987), implies a kind of linearity, with students starting with the using phase and gradually moving to the generalizing and synthesizing ones.
We do not entirely oppose to this linear trail, since the overall meaning-making process entails a bottom-up flow towards generalizing. However, the proposed version enables the analysis of more possible routes, aiming at providing an articulate image on how students construct generalized meanings and capture their progression. Second, we place the using and synthesizing components in the center of the model and further conceptualize them in terms of transitioning among different representational contexts. Within this extended network of possible routes, we adjusted the conceptualization of each phase of the model within a broader perspective, as follows: • The using component involves students using mathematical concepts as tools in either an intuitive or a reflective way, while transitioning among the different representations within the digital resource, in order to achieve particular goals. Students can therefore begin in this phase, but can also re-encounter it multiple times during their attempt to achieve their initial goal, even after discriminating and generalizing mathematical concepts. For this reason, arrows originating from all the other components point to using, representing the use of a mathematical concept at each different level of the meaning-making process, e.g., after a mathematical relation is discriminated or generalized. These arrows are double-sided, since the converse route can be also taken, representing, for example, generalizing a relation after intuitively using it within the microworld. We pay special attention to this phase, since it provides the "clearest window" for viewing students' meanings under construction-it is when these concepts are in use through the digital tools that reveal how students think of them. • The discriminating component is viewed as locally distinguishing and identifying a mathematical concept or relation initially interwoven within a specific part of the digital tools used. It thus involves consciously recognizing the mathematical concepts in use, responsible for the dynamic or visual parts of the artefact (figural or animated) in the microworld, without necessarily identifying the way/ rule under which they affect it. It is tightly connected to the using phase, since it usually emerges after observing tools where mathematics is implicitly in use, as well as the synthesizing one, in case a connection to abstract mathematics leads to such recognition. It can also lead to generalizing the distinguished relation, without further using digital tools, or even emerge as a result of a generalized meaning. • The synthesizing component in this version plays a wider, profound role for analyzing students' process of meaning construction. It involves connecting the mathematical concepts used/represented in a specific digital resource with different contexts outside the digital representations. In our case, apart from abstract mathematics (i.e., abstract mathematical notions and relations), the artistic domains of dance and music were integrated as external contexts in the design of the task. We conceptualize the synthesizing phase as transitioning from one context to another. Unlike the UDGS model, students are anticipated to move to the synthesizing phase at any stage of their meaning-making process-even at the very beginning, when ideas for how to use digital tools are connected to an external context, e.g., the art of dance, and mathematical concepts behind them still being vague. 
In such cases, students can synthesize different aspects of mathematics, before having started using the digital tools and discriminating mathematical properties. Thus, synthesizing is linked to all the other phases of the model with double-sided arrows representing all possible routes towards-or originating from-the artistic contexts or abstract mathematics. • The generalizing component is conceived in a similar way as in the original UDGS model, as extending and expressing mathematical relations and being able to recognize, use, and exploit them through the digital tools. Conceptually, it involves acknowledging generalized relationships among the mathematical concepts in use. It is connected to using and synthesizing via double-sided arrows, representing all possible routes. On one hand, generalizing is closely linked to abstract mathematics through synthesizing, as one can build on previously formed meanings of mathematical concepts for extending them in order to be applicable for using in a specific situation within a digital resource. On the other hand, just using mathematical relations-implicitly incorporating in a digital representation-could also lead to generalizing, through attempts to interpret unanticipated behavior of the tools. Generalizing is considered to be the most abstract phase of the model. Thus, it is seen as a precondition for one's meaning-making process corresponding to a mathematical concept, relation, or property used within the digital tools to be considered well-rounded in order to be part of our analysis. In the analysis, we center our focus on the using and synthesizing phases and further conceptualize them in terms of transitions within and transitions beyond the digital resource, respectively. Transitions Within and Beyond MaLT2 As already mentioned, this study is based on the assumption that transitioning among different representational contexts of a mathematized-at any level of abstractionsituation either within or external to the digital tools for achieving an artistic-oriented goal would enable a physical flowing among the UDGS phases towards generalizing mathematical meanings. Here, transition is conceived in two different ways: as moving between representations of the same mathematical object or property within a digital resource (transition within) or between the digital resource and the external contexts of art (music or dance) and abstract mathematics (transitions beyond) (Fig. 3). MaLT2, the digital resource used in this study, integrates the representational contexts of variation tools for dynamic manipulation of parameter values together with Logo programming and Turtle Geometry. Thus, it provides a network of interconnected digital representations for modeling mathematical objects and relations that could support transitions within for the creation of a figural dynamic artefact. In addition, the supplemental contexts of music and dance, where mathematical ideas can get an esthetic and practical form, could also support the transitional process beyond MaLT2. This conjecture also relies on a parallelly growing argumentation claiming that "music, when facilitated by multiple, intuitively accessible representations, can become a learning context in which basic mathematical ideas can be elicited and perceived as relevant and important" (Bamberger & diSessa, 2003, p. 123). This theoretical construct can help us view and conceptualize each student's meaningmaking process as a "flow of transitions" among the different contexts. 
We use this term to define a series of transitions within and beyond MaLT2 that enable students' navigation throughout the UDGS phases. It can thus provide a tool for discussing the way their mathematical meanings are progressively shaped and generalized. We further elaborate on each type of transition in the next subsections.

Transitions Within the Microworld in MaLT2

Transitions of this type involve students' moving among symbolic, visual, and dynamic virtual representations of mathematical objects or relations while using them to construct or tinker an artefact. They are represented through arrows in the left area of Fig. 3. They are held within the digital resource as a result of its three interconnected representational contexts:
a) The Logo programming editor (upper right area in Fig. 4)
b) The variation tool (bottom right area in Fig. 4) for dynamic manipulation of each parameter value, corresponding to a parametric procedure, through dragging of its sliders
c) The 3D scene (left middle area in Fig. 4), where two- or three-dimensional figures, as well as their dynamic transformation while using the variation tool, can be represented

The Logo programming editor provides an open authoring system for users to code figural models. It supports Logo movement commands (see Table 1) and repetition/conditional commands, as well as parametric procedures and sub-/upper-procedure construction. By running movement commands, one can control the moving condition (orientation or displacement) of the avatar, whose default form is a sparrow. As the avatar moves, it leaves a colored trace, which results in the construction of figures in the scene. The variation tool appears when a parametric procedure is created in the editor. For example, in Fig. 4, the parametric procedure unfoldcube is written and run by the user. It has two parameters, a and b, which in terms of Euclidean geometry correspond to the length of each square's side and the degrees of each turn between two consecutive squares, respectively. In the variation tool, two sliders have appeared, one for each parameter. By dragging one slider, the values of the corresponding parameter change and, simultaneously, the visual figural transformations of the avatar's trace are shown in the 3D scene. Thus, a sense of dynamic "behavior" of a figural model is created. The user can interact with these dynamic figural transformations by either dragging the sliders (using the mouse or the right/left arrows of the keyboard), changing the lower and upper value limits of a parameter, or changing the step value at which a parameter is incremented or decremented (Fig. 5). While constructing or tinkering a figural model in MaLT2, transitioning among the three representational contexts physically emerges (Diamantidis et al., 2015, 2019; Grizioti & Kynigos, 2021; Kynigos & Diamantidis, 2021; Kynigos & Latsi, 2007). Both the editor and the variation tool provide interactive fields affording expression and experimentation, where students can use mathematical ideas as tools for constructing or tinkering an artefact. The scene, on the other hand, offers instant feedback providing reflection on these ideas.
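To give a concrete sense of the kind of code written in the editor, the following is a minimal, hypothetical sketch of a parametric procedure in the spirit of unfoldcube, written in standard Logo-style syntax. Only the command names right and up and the parameters a and b are taken from the description above; the procedure body, the forward command, and the specific use of repeat are our own assumptions for illustration, not a reproduction of the authors' actual MaLT2 code.

  ; hypothetical sketch: a band of four squares of side :a,
  ; with a pitch of :b degrees between consecutive squares
  to unfoldcube :a :b
    repeat 4 [repeat 4 [forward :a right 90] forward :a up :b]
  end

  unfoldcube 50 90   ; draws the trace once in the 3D scene

Because the procedure is parametric, running it in MaLT2 also brings up the variation tool with one slider per parameter, so dragging the slider for b between, say, 0 and 90 degrees continuously redraws the trace and produces the kind of dynamic "unfolding" behavior described above.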
Transitioning within these three different representations of mathematical concepts in use is assumed to cultivate a continuous circle of formulating and testing conjectures, by using a mathematical idea in the interactive parts of the digital tools, in the editor or the variation tool, receiving feedback from the scene, and debugging or extending the initial idea. This, consequentially, is assumed to lead to a physical fluidity among the UDGS phases, with the potential of provoking the generalization of these ideas.

Fig. 5 The variation tool with four fields of interaction: sliders, lower limit, upper limit, and step values

Transitions Beyond MaLT2

This type of transition corresponds to the synthesizing phase of the UDGS model and involves building connections between the representational fields of MaLT2 and two external contexts: (a) the artistic context of music and dance, where mathematical concepts get an aesthetic and practical form and a physical (acoustic, visual, or dynamic) interpretation; (b) abstract mathematics, consisting of the abstract mathematical notions and relations, referring to abstract entities, which reside behind their context of application or representation. Abstract mathematics does not necessarily correspond to the mathematical content and structure as it is inserted into the school curriculum; it could rather be an extension or a restructuration of it, or even be connected to a different mathematical area outside the school curriculum. There are three types of synthesizing in this model, represented with double-sided arrows in the right area of Fig. 3 and conceptualized as follows:
• Transitioning between MaLT2 and the artistic context: a student can transition from visioning an artistic idea, expressed in words, drawings, or physical gestures, to expressing it in MaLT2 by interacting with either the editor or the variation tool. Reversely, a student can transition from observing a figural model with subjectively intriguing behavior in the MaLT2 scene to forming or expanding an artistic idea represented in words, drawings, or gestures.
• Transitioning between MaLT2 and abstract mathematics: a student can either transition from forming a mathematical idea in words or notes to expressing it in MaLT2 by interacting with either the editor or the variation tool, or, reversely, transition from observing a figural model with subjectively intriguing behavior in MaLT2 to forming or extending a mathematical idea represented in a symbolic notation through words or notes.
• Transitioning between abstract mathematics and the artistic context: a student can transition from one representational context to the other, without using the technological tools, through paper-and-pencil investigation communicated through words, notes, drawings, and gestures.

Synthesizing is thus viewed as transitioning between two different contexts that represent a mathematical idea in a different way, notation, and level of consciousness and abstractness. In order for students' meaning-making process to progress towards the discriminating and generalizing phases, transitions beyond MaLT2 to the context of abstract mathematics are essential. This should not degrade the importance of transitioning to the artistic context, since it is assumed to be vital for grounding and guiding the whole process.
Music and dance, in particular, embed multidisciplinary notions such as periodicity, symmetry, harmony, and synchronicity, which possess both an artistic and a mathematical aspect and can be easily represented with computational tools. Thus, they can afford synthesizing through establishing links between their different aspects, either abstract or in use. We presume that this artistic context, as being closer to students' intuition, sensibilities, and interests, would provide a source of motivation and creativity fostering the natural fluidity throughout the model guided by their own agency. Research Questions Relying on the theoretical lenses described in the previous sections, we set out two main research aims: to shed light on (a) the way transitions among different contexts of the modelized network influence students' meaning-making process in terms of the UDGS model and on (b) the nature of mathematical meanings that would emerge from this open artistic, digitally, and programming-based activity. Thus, we pose the following research questions: • What role do transitions within and transitions beyond MaLT2 play in the progression of the students' meaning-making process? • What kind of mathematical meanings are derived out of the artistic contexts of music and dance through these transitions? Design of the Research In order to answer the above questions, we engaged in design research through an implementation with one small group of students, which took place in a school classroom in Athens, Greece. The group consisted of three Grade 8 (aged 14-15) students (Mary, Nikos, and Chris) 1 who volunteered in an after-school setting. A former pilot implementation of the study provided feedback for small changes in the current one, as part of the design-based research (DBR) framework that we adopted (Bakker, 2018). We consider these two small implementations as the beginning of a circle of design experiments in a bigger scale, where we aim at re-adjusting the theoretical and design ideas of this study. The main activity designed for this study was titled "Dancing Animations." We prepared a list of fourteen song extracts, 2 each one of which being considerably cropped in order to have the same, steady musical rhythm throughout its duration. The aim of this activity was for students to construct an animated figural model of their own ideas in the MaLT2 microworld and synchronize it to the rhythm of a song of their option, in order to create a dancing animation. The animating feature can be carried out in MaLT2 through dragging of the sliders of the variation tool that creates a sense of dynamic "behavior" to a figural model constructed by a parametric procedure. Their final creation would be controlled by them through the sliders of the variation tool, 3 while it would be captured with a screen recording application. After the completion of the research, the song audio sound would be added to the video in order to create the final product. No video mixes or trimmings could be made. Due to Covid-19 restrictions, each student was working on a different lap-top, using headphones and keeping notes in a separate paper sheet. However, they were encouraged to discuss ideas or asking for advice within their group. All students had some previous experience with MaLT2, but for revising reasons, an introductory task was embedded into the activity in order for them to recall MaLT2 commands and functionalities. It was named as a MaLT2 "warming-up" and was structured by the following three questions: 1. 
Can you make a rectangle move in the plane? 2. Can you make ten rectangles move in the plane simultaneously? 3. Can you make ten rectangles move in the 3D space simultaneously? The results taken from this initial task will not be discussed in this paper, since its purpose was preparatory. After its completion, students were encouraged to engage in the main task of creating and synchronizing the dancing animation. The task had a high degree of freedom so as to stir up students' own agency and provide authentic data corresponding to the openness of our research questions. The whole activity was planned to last for 3 h. The second author was the researcher who participated in this setting and took on the role of facilitating the progress of the activity, helping with functional problems, and provoking students to express their ideas out loud. The questions used for this purpose were not pre-structured, but were rather adjusted to each student's flow of transitions. Methodology The data for this study consisted of students' discourse, actions in the digital resource, MaLT2 saved files (including the final artefact of each student), and paper-and-pencil notes. The recording tools were a screen video and audio recording application for both the computer screen (showing the MaLT2 environment) and its audio outputs. The latter was used for capturing the sound of the song, which was played in an audio player application controlled at each student's initiative. The same recording application was used for capturing external sounds (mainly students' discourse). The output of this application, which was the main data unit for analysis, was a 3-h-long video. Another source of data was students' gestures and facial expressions throughout the activity, which were monitored and noted down by the attendant researcher. We analyzed the data adopting a qualitative approach, from which the modified version of the UDGS model emerged. Students' actions and expressions (linguistic, gestural, noted down, or captured through their activity within the digital resource) were initially put into categories. Each category corresponded to one out of the five representational contexts (see Fig. 3), in order to capture their flow of transitions within and beyond MaLT2, forming three separate trails. Transitions within were traced directly through the video recording by following students' actions with the digital tools, whereas transitions beyond were mostly traced through oral, written, or gestural expressions. Consequently, we identified instances of their flow as being mapped to phases of the UDGS model, in the way they were described in the "Theoretical Background" section. As a result, each student's overall captured activity was analyzed into phases of the UDGS model according to the way they were acting or communicating the intention of their action. For example, the using phase was more easily detected since it was directly connected to students' decisions and actions mirrored through their transitions within MaLT2. It was initially assumed that every action in MaLT2, either intuitive or reflective, was linked to the (either subconscious or intentional) use of mathematical concepts, given the inherently highly mathematized way of using MaLT2 tools and functionalities. In the case of using mathematical concepts intuitively, the researchers interpreted students' actions based on their subsequent more intentional actions and by employing their own insights and agency.
On the other hand, tracing the discriminating, synthesizing, and generalizing phases was connected to a deeper level of analysis focusing on students' discourse, notes, and gestures accompanying their transitions within MaLT2. The synthesizing phase included students distancing themselves from the computer by observing, discussing, and reflecting on the outcome, or writing down algebraic or geometrical notation, or even by making gestures linked to artistic-mainly dancing-gestures, and vice versa; and students returning back to using the digital tools after reflection. The discriminating and generalizing phases were both mirrored in students' discourse, mathematical notes, and their way of using the programming language in the MaLT2 editor. Discriminating corresponded to clearly recognizing the mathematical concepts in use by concretely referring to them, while generalizing was matched to expressing acknowledgement of generalized relationships and their utility in the digital construction. In some cases, students' intensions remained quite vague until they achieved or gave up their initial goal and communicated the reasons that led them to do so. As a result, coding and categorizing their actions was a long back-and-forth process, the full details of which would exceed the space limitation of this paper. In the following section, we present instances of four different flows of transitions generated from the students. The reason these specific flows of transitions were chosen was that they included all four UDGS, forming a well-rounded corresponded to different mathematical concepts. Each flow was labeled after a specific artistic idea which was set out as a guiding goal from the student and incorporated different kinds of mathematical ideas. Setting Up-MaLT2 "Warming Up" Task The data analyzed for this study begun from the point where each student had completed the initial "warming up" task, which lasted 30 min, having constructed two procedures: (a) one that creates a rectangle with two parameters used for its sides; (b) one that creates ten rectangles by including the previous procedure. These rectangles were moving in 3D space while dragging the sliders of the variation tools. This was achieved through the use of one or more parameters in turn commands, i.e., "right" or "left" and "up," "down," "roll_right," or "roll_left." Each student used similar commands for the first procedure and quite different commands for the second one, which led to four individual animations. Examples of Nikos's and Mary's artefacts are shown in Fig. 6. A main realization drawn out of this task and shared by all three students was that, in order for an artefact to be animated, the use of the variation tool and, consequently, the use of parametric procedures are necessary. The constructed artefacts were the starting points for each student to begin with for making their figural dancing animation. They were free to make any changes to their already made artefact, either slightly ones or completely changing it and starting their creation from scratch. During the main activity, they had their picked-up song playing and pausing in the background and putting on and taking off their headphones occasionally. Fig. 
6 Instances of two artefacts made by two students (Nikos and Mary) constructed at the initial "warming up" task: the code was translated in English, since these students were writing programming commands in Greek Flow of Transitions for Making a Periodic Spinning All four students followed a similar initial pattern of transitions within and beyond MaLT2, starting from dragging the sliders of the variation tool and observing the visual outcome at the screen to picturing a dancing move while listening to their song. Thus, all flows of transitions begun with intuitively using implicit mathematical ideas underpinned in the use of the variation tool, which was directly connected to the synthesizing phase, by connecting the visual outcome to an artistic idea. After dragging the sliders of the variation tool (using the keyboard left and right arrows) with the initial (default) input values, they tried different lower and upper limit values to each parameter while starring at the animated artefact. Mary's flow of transitions was founded on an initial synthesizing phase, where she set the artistic goal of making dancing move resembling to a periodic spinning. She shaped and gradually generalized meanings around periodicity of angles. Mary's way of dragging seemed spontaneous at first, but gradually turned into a periodic dragging to the right and to the left, from the lower to the upper limit and vice versa (Fig. 7). She was simultaneously listening to the selected song extract ("Milky Chance-Stolen Dance"). A question posed by the researcher provoked her to communicate her thoughts: Researcher: How could you make a nice dancing move? What do you think of that? Mary: I like the way it spins when I drag the bar of the r. It is kind of, like dancing! To the right, and to the left. Nanana, nanananananana, nanananananana. It suits the song! Mary transitioned beyond MaLT2 by discussing the "style" of her animation being suited to the selected song, while dragging the slider of the parameter r. She intuitively used the notion of periodicity integrated in the "right:r" command and the use of the corresponding slider, by repeating dragging it to the right and to the left periodically while repeating the words "right" and "left." As her actions and words indicate, she discriminated that this parameter stands for the value of degrees that the avatar turns to the right, after constructing each rectangle, creating a pleasing dynamic outcome. She changed its upper limit from 60 to 100. She repeated dragging the slider from the left (r = 0) to the right until the upper limit (r = 100). She then changed it again to 120 and repeated the procedure. Mary intuitively approached the intrinsic periodicity of the parameter r of the command "right:r." After transitioning beyond MaLT2, towards the artistic context, she set out the goal of creating a spinning dancing move. She proceeded by seeking for a way to achieve it through transitioning within MaLT2. As it was shown by her subsequent actions, she conjectured that by changing the upper value of the parameter r, she could increase the spinning duration. She started gradually increasing the number at the upper value input box from 100 to 180, from 180 to 200, from 200 to 300, and then from 300 to 360 (Fig. 8). Every time she made a change, she was testing the result through transitions within, from using the variation tool to getting feedback from the dynamic outcome at the scene and vice versa. After a while, she noted down "1 circle = 360." 
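Mary's note "1 circle = 360" can be restated as the modular arithmetic that the right-turn command obeys: headings repeat every 360 degrees, so an upper limit of 360 brings the figure back to its initial form. A minimal Python check of this (not MaLT2 code; the sampled values are the ones Mary tried, plus one beyond a full turn):

```python
def heading_after_right(r, start=0.0):
    """Heading in degrees (0-360) after a single 'right r' turn from the start heading."""
    return (start - r) % 360

for r in (60, 100, 180, 200, 300, 360, 400):
    print(r, heading_after_right(r))
# 360 returns the heading to its starting value, while 400 behaves like 400 - 360 = 40
```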
The following dialog reveals how her transitions within MaLT2, where she intuitively used the concepts of angle and circle, led her to transitioning beyond MaLT2, to the abstract mathematics context where already formed mathematical meanings were recalled. She progressively discriminated the mathematical properties of the "right" command connected to the concepts of angle and circle. Researcher: How did you find this value? (showing the value of 720 as upper limit). Can you describe your way of thinking? Mary: I tried different values. I put right 180 because I thought 180 degrees is the bigger turn. While it goes from 0 to 180 it was like starting spinning from the left and then slowly turning to the right while opening like a flower. But then I tried 200 and saw that it turns even more. This is not actually an angle, it is a turn. It is different. (...) When I put 300 it starts spinning again to the left while closing. And when it becomes 360 it ends where it begun! Researcher: How did you find the exact value 360? Mary: I put 300 and show that it almost got its initial form. So, I thought of a circle, a full circle, which is 360 degrees. (...) 360 just makes sense. It is a whole turn, starting from this shape, spin all around and return to this state. Now it's like it did a whole circle of spinning. After using and experimenting with different values, she ended up synthesizing digital and mathematical aspects of the artefact's behavior, by making the connection between the dynamic manipulation of the parameter r and the mathematical concepts of angle and circle. This transition beyond came up naturally after continuous transitions within MaLT2, through a circle of conjecturing and testing on the variation tool and getting feedback from the dynamic outcome in the scene. She also intrinsically discriminated the periodic property of angles and consciously used it to create a periodic motion. Mary further expanded and generalized mathematical meanings around periodicity of angles by posing the following question, which set another round of transitions within and beyond (Fig. 9). Mary: What would happen if I put more than 360 here? She tried the value of 400 in the upper limit of the parameter r in the variation tool. She dragged its slider and took a confirmatory expression. Then, she noted down the calculation: "360 + 180 = 540." Mary after reflecting on the feedback received through MaLT2 3D graphical representation, transitioned beyond MaLT2, to the context of abstract mathematics, in order to mathematically interpret the behavior of her artefact. She was led to the generalization that the period of the "turn right" function, connected to the concept of angle, is 360°. She also synthesized this generalization with its artistic aspect by transitioning beyond MaLT2, to the artistic context, and appreciated the physical dynamic outcome. She extended her generalized meanings around mathematical properties emerging from the periodicity of angles, as the following sharing of her thoughts indicates: Mary: It is like from 0 to 180 is goes one way, then from 180 to 360 it goes the reverse way. Researcher: What do you mean it goes the reverse way? Mary: I mean the same way it spins while it opens, the same way it spins while it closes. I don't know how to explain it. Now, look, when r equals 119, it looks like this, like a windmill. It looks exactly the same but at the opposite side. (Fig. 10 She then used two more pairs of values whose sum is 360 and verified her conjecture (Fig. 10). 
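Mary's pairs of values summing to 360 have a compact justification: turning right by 360 - r is, modulo a full turn, the same as turning left by r, so the two slider values produce mirror-image configurations, as she observed for r = 119 and its complement to 360. A small Python check (illustrative, not MaLT2 code):

```python
def right(heading, r):
    return (heading - r) % 360

def left(heading, r):
    return (heading + r) % 360

# e.g. 119 + 241 = 360: the configuration at one value mirrors the one at its complement
for r in (119, 90, 45):
    assert right(0, 360 - r) == left(0, r)
```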
Mary further synthesized the mathematical, digital, and artistic aspects of the generalized meanings around periodicity. The following part of a dialog reveals her respective transitions beyond: Mary: The dancing move now has three circles of spinning like this and reversely...if I start dragging from 0 to 1080. (...) I have to find how many circles are there in the song. Researcher: What do you mean by circles? Mary: I mean how many repeats are there in the song. Because both the song and the dance should have the same repeats. (She listened to the song extract three times.) Each circle ends with the lyric "we don't talk about it". The song has four whole circles. So I extended the value here by 360. 1080 plus 360 is 1440. Now the dancing circles are also four. At this point, Mary synthesized her meanings around periodicity by transitioning among its mathematical, digital, and artistic representations. She established connections between the period of the song, the period of the dancing move, and the period of angles. She consciously expressed that the concrete number of periods of the song and the animation should be jointly taken into account in order for them to match together. She finally used the synthesized meanings in order to synchronize her dancing animation to the musical rhythm. Flow of Transitions for Tuning the Periodic Spinning into the Musical Rhythm Building on the previous flow of transitions within and beyond MaLT2, Mary set out a new goal based on the initially intrinsic idea of synchronicity. She listened to the song extract, while dragging the slider of the parameter r to the right. The following dialog indicates her way of thinking: Researcher: How can you say if your animation is in the musical rhythm? Mary: It is not. I can say it. Because it is too slow. I want it a bit faster, but I don't know how. Researcher: Why faster? What do you want to achieve by that? Mary: I want ... every time the song says "we don't talk about it", I want the dancing move to be completed and restarting. Because that is when the song circle ends. I want to make them repeat together. (...) So that the four circles will match together. Mary transitioned from using the variation tool and observing the animated outcome (transitions within) to the artistic context (transition beyond) for setting out the goal to tune her animation into the musical rhythm, in terms of speed. She posed the problem that the animation is "too slow" for being in tune with the song and explicitly expressed the need to match the "music circle" to the animation one. Without using the formal terms, she implied that she wanted to adjust the periods of the song and the dancing animation by reducing the latter. Thus, an implicit transition beyond MaLT2, between the artistic and the mathematical context, was made. She continued by consistently listening to the song and dragging the slider of the parameter r, transitioning from the digital dynamic representation to the musical rhythm (transition beyond). Researcher: How much faster do you need it to be? Mary: I don't know. Not much faster. The song circle finishes when r equals 250. Researcher: So, you want the animation circle to finish at 250 too, right? What do you think you should change? Something at the variation tool or the procedure's code? Where? Mary: The code. I will change the command 'right r'. (...) It has to make a full circle when r becomes 250. 
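The adjustment Mary works out next can be read as a rescaling problem: adding a constant to r only shifts the starting orientation, whereas multiplying r by 360/250 makes the turn complete a full circle exactly when the slider reaches 250. The Python sketch below restates that arithmetic; the numbers 250 and 1.44 come from the transcript, the additive offset of 110 anticipates her first attempt described next, and the function names are illustrative.

```python
import math

SONG_PERIOD = 250              # slider value at which the song's "circle" ends
FULL_TURN = 360

def turn_additive(r, c=110):
    return r + c               # grows at the same rate in r: the spin is no faster, it just starts offset

def turn_scaled(r, k=FULL_TURN / SONG_PERIOD):   # k = 1.44
    return k * r               # starts at 0 and reaches 360 exactly at r = 250

assert turn_scaled(0) == 0
assert math.isclose(turn_scaled(SONG_PERIOD), FULL_TURN)
```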
Mary went through a circle of transitions within MaLT2, where she intuitively used mathematical notions around synchronicity and rhythm. She went from forming a conjecture by using mathematical notions of additive relationship and proportion; to experimenting this conjecture within MaLT2 representational fields; and finally transitioned from MaLT2 to the artistic context (transition beyond) for testing its validity according to its level of tuning. The way of using mathematical notions within the digital tools gradually turned into a conscious intentional one, leading to discriminating the mathematical ideas. As shown in Fig. 11, she changed the command "right:r" to "right:r + 110," then to "right:r + 300," then to "right:r*2," then to "right:r*1.5," and lastly to "right:r*1.4." When the researcher returned for discussion, she had noted down the calculations "360-250 = 110" and "360/250 = 1.44." Researcher: Did you make it? Mary: Yes, this time I'm sure! Researcher: Can you explain your thinking? (...) How did you end up writing r times 1.4? Why did you do these calculations? Mary: At first, I wrote :r + 110 because I thought that the circle would end at 250. Because 250 plus 110 equals 360. But then I saw that when the circle starts (...) when r equals 0 it is not at its initial form. Closed like this. It was not a full circle. So, then I thought that no matter what I would add to r, it doesn't get any faster, it just starts at another state. (...) Then I tried to multiply it with 2 and saw that it does start at its initial form and that it was a lot faster! But it was too fast. Its circle was ending at 180, not 250. So, I tried different values and I found 1.4 is the right one. Researcher: How did you find it so accurately? Did you try so many numbers? Mary: I tried some numbers and I was getting closed and then I calculated that 360 divided by 250 equals 1.44. And it worked! Researcher: Do you have any idea why it worked? Mary: Because when r becomes 250, it turns right 250 times 1.44; which is 360. So, a full circle! But now each circle is shortened. (...) This is why it is faster now The above dialog reveals the way Mary's transitions beyond MaLT2, where she synthesized meanings among digital, mathematical, and artistic representations of a period, led her to discriminating and generalizing phases. While transitioning within the representational contexts of MaLT2, she discriminated the functional relationship between the input values of the slider and the right turn as means of adjusting the animation's speed. She used the additive relationship, transitioning from mathematics to expressing and testing it within MaLT2. After disproving her conjecture, she alternatively used the proportional relationship, testing different values of the multiplying parameter in order to adjust the period of the animated artefact. Transitioning within MaLT2 was catalytic for her to find the appropriate value. As she stated later, she found the value of 1.4 empirically and afterwards transitioned beyond MaLT2, to the abstract mathematical context, to making the division 360 over 250 and typically support her empirical choice. Thus, she finished this flow of transitions by expressing a generalization of this value being mathematically verified and synthesized it to the notion of the "shortened" period. Flow of Transitions for Making a Periodic "Tango Move" Nikos' flow of transitions started with dragging the sliders of the parameter a of the procedure "rects" (Fig. 
5) to the right and to the left repetitively, incorporating a periodic trait (Fig. 12). After continuously transitioning from the variation tool to observing the artefact for a while (transitions within and beyond), Nikos started visioning a dancing move more concretely. As shown in Fig. 12, when the parameter a equals zero (a = 0), the rectangles are aligned in the same plane, while as the values of the parameter a are reaching the upper limit (a = 100), the rectangles create a "wrapping" sense. These instances of the animated artefact were a bit surprising for him, as he seemed unsuspectingly excited for his creation. The following observation revealed Nikos transitioning beyond MaLT2 to describe his image of the dancing animation, which followed the transitions shown in Fig. 12. Nikos: It's like a tango move! Like when the couple opens their hands and then the man twists and closes the girl in his hands. While expressing this artistic idea in words, Nikos tried to imitate the tango move with his body. He extended his arms horizontally and then wrapped his right arm towards the other. The transition from the variation tool to the artistic context (transition beyond) in order to form a new artistic idea of a dancing move set out the beginning of his experimentation. He then projected his artistic idea to MaLT2 graphical representation (transition beyond), by visioning it as a rhythmically "wrapping" and "unwrapping" of the artefact. He extended his goal for improving the animation, while listening to the song extract he had selected ("The Animals: House of the Rising Sun") and simultaneously dragging the slider of parameter a. He shared the new goal that he sent out: Nikos: When I play the song along with the animation, it seems too short. I need to make the animation last longer. (...) a is a turn. I can make it turn even more by increasing this. I can put 200 instead of 100. Nikos synthesized the musical and digital contexts of his synchronization goal and used the notion of the period in order to connect the musical and animated rhythms. He transitioned beyond MaLT2, between the artistic-musical context and the digital dynamic representation. He expressed the goal of extending the animation's period through posing the problem of their mismatched duration. Then, he engaged in transitions within MaLT2, between the variation tool and the 3D scene, where the concept of period was used and investigated, from an intuitive to a gradually more conscious way. He changed the upper limit of the parameter a from 100 to 200 and observed the dynamic outcome while dragging its slider from 0 to 200 and back. Then, he changed the upper limit to 250 and repeated the process (Fig. 13). He made the following observation: Nikos: After the value 180, it starts unwrapping again, but in the opposite direction. 181 is when all the rectangles are one on top of the other. This matches the tango move, I like it. Nikos transitioned many times from the digital to the artistic context, by implicitly using mathematical ideas in his interaction with the tool, while trying to visioning his dancing move. He kept changing the upper limit of the variation tool to extend the animation's period and observing the graphical outcome (transitions within), but he always changed it back to 180, because of its artistic symmetry (transitions beyond). At some point, he discriminated the notion of the angle and gave mathematical sense to the dynamic behavior of his animation: Nikos: We know the angle of 180 degrees. 
Yes, 180 is the accurate number. It is a full angle. (...) From 0 to 180 it makes half a circle of dancing. Then, he listened to the song while dragging the slider and disappointedly shared that the animation was not synchronized to it: Nikos: It is out of rhythm. The song is slower. (...) I want it to be fully wrapped right before he starts singing. At the 00:11 mark. It is almost twice as fast, because it (...) Nikos considered the first period of the song in terms of time and compared it with the period of the animation, synthesizing the notion of period in the two different contexts and starting to discriminate their proportional relation. Then, he used the notion of the period and went through synthesizing phases by setting out the goal to adjust the animation's period to the song's one. After a while, he changed the step input value of the parameter a at the variation tool from 1 (default) to 2. He repeated the dragging of its slider from 0 to 180 (Fig. 14, left) and backwards while listening to the song. He was transitioning alternately from the digital to the artistic context (transition beyond) in an attempt to discriminate how the change of the step value of the variation tool would affect the length of the animation's period. Through transitions within and beyond MaLT2 he tested whether the new animated artefact was synchronized with the musical rhythm, sharing instances of discrimination along the way. Nikos then changed the step value of the parameter a from 2 to 0.5. Once again, he repeated the dragging of its slider from 0 to 180 (Fig. 14, right) and backwards while listening to the song. This time, he delightedly commented on the result: Nikos: Yes, great! It is totally matched now! Researcher: How did you manage to do it? Nikos: I changed the step value to 0.5 now. I thought that it would make the animation last longer. The more the step value decreases, the longer it takes for the dancing move to be completed. Before it was taking 180 values, but now it takes twice as much (...) it takes 360 values. Thus, it lasts twice as much time, which is almost 11 seconds. (...) So now if you listen to the song, it matches perfectly. It starts wrapping slowly during the first song's verse. And at the second one it slowly unwraps in the same rhythm. In the same direction as before. Nikos' meaning-making process started from intuitively using, then gradually discriminating, and finally generalizing, to some level, the inverse proportional relationship between the step value and the number of input values of the parameter a, in order to extend the period of his dancing animation. He ended up establishing and using the exact arithmetic relationship between the step value, the number of input values, and the time duration. Flow of Transitions for Making a 3D Whirling Chris followed a sequence of creative actions close to Nikos's, where he ended up synchronizing his animation to the musical rhythm of the song he had selected ("Muse: Time Is Running Out", 30 s). Same as Nikos, he changed the step value of two parameters standing as input for the "right" and "roll_right" commands many times until he found the most appropriate one (0.7) in order to adjust the animation's period to the song's one. During this flow of transitions, which will not be further analyzed here, he engaged in all phases of using, discriminating, generalizing, and synthesizing mathematical meanings around periodicity and proportional and inverse proportional relationships.
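Nikos's reasoning about the step value (and Chris's analogous adjustment to 0.7) is an inverse-proportion argument: with the limits fixed at 0 and 180, the number of slider values is (180 - 0)/step, so halving the step doubles the number of values and hence the sweep time. A Python sketch of that relationship; the time per value is an assumption chosen so that 360 values give roughly the 11 seconds Nikos mentions:

```python
def sweep_duration(lower, upper, step, seconds_per_value=0.03):
    """Time for one sweep of the slider from lower to upper at a fixed time per value."""
    n_values = (upper - lower) / step
    return n_values * seconds_per_value

for step in (2, 1, 0.5):
    print(step, sweep_duration(0, 180, step))   # roughly 2.7 s, 5.4 s and 10.8 s: duration is proportional to 1/step
```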
Chris also engaged in another flow of transitions where he shaped mathematical meanings around the notions of variable and covariation of quantities. The starting point of this process was a concern he shared while dragging the sliders of the procedure "rectangles" (Fig. 15). Chris: The problem is that when we record this, I can only drag one slider. I won't have time to drag both. Researcher: Why is this a problem? Chris: Because I like both ways of moving. When I drag the slider of y, it is nice because it is turning like a wheel, like a clock. And I like that because the song says "time is running out" and it is like a clock spinning. But I also like dragging the variable x because it is more... interesting. I want this variable to change, too. (...) Can I change them both at the same time? Chris set out the goal to integrate the two different dancing moves in his animation. He transitioned from the digital dynamic representation to the artistic context (transition beyond), evaluating it with regard to the song's style. He then transitioned from interacting with the variation tool to the abstract mathematical context (transition beyond) to give mathematical sense to his goal. Researcher: Can you think of a way? A way to change two different commands, these turns, simultaneously by dragging only one slider? Chris: (...) What if I put the same variable? The variable y to both of them? Then I will only have one slider to drag. Chris had already used one parameter for four different commands in the warming-up task, where he wrote the procedure "rectangle" using only one parameter (a) in all "forward" commands, embedding the proportional relationship of one side being twice as long as the other. He proceeded by making changes in the procedure "rectangles." He erased the parameters a and x and changed the commands "rectangle:a" and "roll_right:x" to "rectangle 50" and "roll_right:y" (Fig. 16). Fig. 16 The new code after Chris made changes in the procedure "rectangles". He recognized that the command "rectangle" in the procedure "rectangles" can take a constant number rather than a parameter, since he was not interested in changing it. He started discriminating the role of the parameter, transitioning from the programming context of application to the mathematical one (transition beyond). After changing the code, he tested the dynamic outcome by dragging the slider: Chris: Now it does both kinds of spinning simultaneously! It is much better! Look! Researcher: Wow! Why do you think this is happening? Chris: Because at the same time that the angle of rolling right changes, the angle of turning right is also changing. I can control them both at once. So, it turns like this and like this together. (He turned his hand in two different directions.) As emerges from his thoughts, Chris generalized his meaning on covariation of function-commands with the same input. However, he did not stop his construction at that point: Chris: I could make it turn right faster than it rolls right. I wonder what it will look like. Researcher: What are you doing now? Chris: I want to test how it looks if I make it turn right twice as much as it rolls right. Chris extended his artistic idea, set a new goal expressed in mathematical terms of a proportional relation (transition beyond), and continued engaging in transitions within MaLT2, among all three representational contexts. He changed the command "right:y" to "right 2*:y." He observed the dynamic outcome while dragging the slider of the parameter y from its lower to its higher limit and back, while listening to the song (Fig. 17). Fig. 17 Instances of Chris's animation while dragging the slider of the parameter y.
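Chris's change, driving both turns from the single parameter y with the right turn scaled by 2, is a simple case of covariation: both angles are functions of the same input, with a fixed ratio between their rates of change. A Python sketch of the coupled state (illustrative, not MaLT2 code):

```python
def orientation(y):
    """Joint state of the two turns driven by the single slider value y."""
    roll = y % 360            # 'roll_right:y'
    turn = (2 * y) % 360      # 'right 2*:y' turns twice as fast as it rolls
    return roll, turn

for y in (0, 45, 90, 180, 360):
    print(y, orientation(y))
# at y = 0, 180 and 360 the doubled turn completes whole circles, matching the
# 'horizontal position' Chris points out at those three values
```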
He thus transitioned from the digital context to the artistic one (transition beyond) and was aesthetically pleased with the result in combination with the song's rhythm and style: Researcher: Do you see any difference? Chris: Yes, I like it better now! It moves more smoothly than before. It is like a weird 3D whirling... It matches the song a lot! Chris: If I had time I would try more combinations. Researcher: What do you mean by 'combinations'? Chris: Between the way these two kinds of turns change. I could make it even more harmonic or complicated. (...) By using only one variable that makes each command change according to its values. The rhythm remains the same. At the three main points, 0, 180 and 360, where the song verse changes, it is at a horizontal position; same as before. (...) If I had time I would add a variable here, instead of 2, and try to find the best relationship between them. Chris made transitions from the digital resource to the abstract mathematics context and back in order to describe and explain the way that the dynamic figural motion was generated through his procedure. He started generalizing the joint variability of these two different variables, the outputs of the functions "roll_right:y" and "right 2*:y." He also discriminated the mathematical role of a parameter from that of a variable by suggesting the idea of using a variable to "try and find the best" dynamic condition. Time limitations did not allow Chris to continue with his exploration and shaping of meanings around variability, which might have led him to more transitions between MaLT2 and abstract mathematics, providing opportunities for more abstract levels of generalization. Discussion The modified UDGS model turned out to be a supportive tool for capturing students' meaning-making process while interacting with the digital resource. The widening and further conceptualization of the using and synthesizing phases as transitions within and beyond MaLT2, respectively, provided solid ground for describing the nature of mathematical meanings connected to music and dance. Despite the limited data of this study, its theoretical model contributed to a concrete, articulated exposition of the different forms and levels of students' meaning-making process within the transdisciplinary and multi-representational context of the task. As indicated in the results, the diversity of representational systems reinforced the cultivation of meanings on the mathematical concepts in use. Our theoretical analysis also pointed out the role of both the digital tools and the artistic components in this process. Four concrete flows of transitions emerged from students' own activity, each one corresponding to one main artistic idea and the mathematical concepts around it. The artistic ideas involved the dynamic and figural aspect of their dancing animation, which was either intended to be an imitation of a real dancing move or something from their own imagination. In this case, they used and gradually, to diverse levels, generalized mathematical concepts connected to the multidisciplinary notions of periodicity, harmony, and symmetry, such as period, circle, turn, proportional and inverse proportional relationship, variable, function, and covariation of quantities.
Their engagement also involved the dynamic aspect of synchronizing their animation to the musical rhythm, where students used mathematical concepts connected to periodicity and synchronicity, such as proportional and inverse proportional relationship. The role of the variation tool was catalytic for the exploration of such dynamic concepts and the formation of mathematical meanings around them. In fact, some students' interaction with the variation tool was richer compared to the use of programming for making their animation and expressing their ideas, as the feedback gained from using the sliders was more direct. For example, as seen in their flow of transitions analyzed in subsections A and C, Mary and Nikos interacted exclusively with the variation tool within MaLT2 for achieving their goals. They generalized meanings around angle, input values of a variable, periodicity, and proportional and inverse proportional relationship between concrete quantities that helped them construct their envisioned dancing animation. On the other hand, programming was connected to higher, more reflective, levels of generalization of mathematical meanings and, at the same time, more artistically spectacular creation. For example, as described in subsections B and D, even though students' changes of the procedures coding in the editor seemed relatively minor, they signified a highly reflective way of using the generalized mathematical concepts such as periodicity, variable, and covariation that led them to the refinement of impressive animated figural dancing moves. Throughout the process, transitions within MaLT2 were influencing transitions beyond and vice versa, highlighting their joint important role to the progression of students' meaning-making process. In all analyzed cases, the figural and dynamic digital representations incorporating mathematical concepts in use, which were sometimes unanticipated, worked as inspiration for the formation and extension of artistic ideas. Students used mathematical concepts and relations, by transitioning within MaLT2 representations, at first in an intuitive, vague way and gradually in a more conscious, concrete one, in order to test their ideas. These concepts, though initially implicitly embedded within the engagement with the digital tools, progressively got discriminated and generalized in order to be promptly used in a practical, meaningful way within the digital resource. The synthesizing phase, in the widened conceptualized sense of our framework, was central for all the analyzed flows of transitions. The artistic external context provided a reference point, where mathematical relations represented and explored in the digital tools could be given an additional meaningful form of application. It also provided a strong motivational boost for students' flows of transitions that shaped the nature of the mathematical meanings and guided their overall meaning-making process. It may be interesting to consider this research as contributing to the question of what mathematical concepts, structures, and connections may present dense interesting fields for meaning-making within Kaput's virtual culture or rather enabled by tools, norms, and practices therein. Wilensky and Papert (2010) used the term "restructurations" to discuss readdressing what mathematics can now become fertile grounds for meaning-making given digital media. 
In this perspective, mathematical meanings originating from each flow of transition can be viewed as part of a novel mathematics curriculum structure, transformed by digital and programming affordances, and guided by open, creative engagement with artistic ideas around music and dance. Even though the scope of this research is quite small, it provides an insightful glance at some main conceptual axes along which students were led, by using the digital tools of MaLT2 and following their own agency. This study allowed us to consider the notion of time, connecting mathematics with temporal issues and perceiving synchronicity as a potentially fertile field for generating meanings around periodicity, periodic functions, covariation, and proportionality. Funding Open access funding provided by HEAL-Link Greece. Data Availability The transcribed data of audio recordings and interviews have been de-identified so that they contain no data on student identity, and they are stored on secure servers of the NKUA. Parts of them can be provided upon request to the authors, with a justification of use. The data are currently available in the Greek language, but parts can be translated if necessary. Conflict of Interest The authors declare no competing interests. The software (MaLT2) that is used is open source and has been developed by the Educational Technology Lab of the National and Kapodistrian University of Athens (NKUA). Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2850440
s2orc/train
v2
2014-10-01T00:00:00.000Z
2007-09-24T00:00:00.000Z
Coal tar creosote abuse by vapour inhalation presenting with renal impairment and neurotoxicity: a case report A 56 year old aromatherapist presented with advanced renal failure following chronic coal tar creosote vapour inhalation, and a chronic tubulo-interstitial nephritis was identified on renal biopsy. Following dialysis dependence occult inhalation continued, resulting in seizures, ataxia, cognitive impairment and marked generalised cerebral atrophy. We describe for the first time a case of creosote abuse by chronic vapour inhalation, resulting in significant morbidity. Use of the polycyclic aromatic hydrocarbon-containing wood preservative coal tar creosote is restricted by many countries due to concerns over environmental contamination and carcinogenicity. This case demonstrates additional toxicities not previously reported with coal tar creosote, and emphasizes the health risks of polycyclic aromatic hydrocarbon exposure. Background Coal Tar Creosotes are distillation products of coal tar widely used for preserving wood. Creosotes pose a health risk due to carcinogenicity, and are banned from commercial use in many countries. The European Union banned the commercial sale of creosotes in 2003. Creosote volatiles are complex, consisting of almost 300 different compounds [1]. However, creosote abuse by vapour inhalation has never been described. Case Report A 56 year old aromatherapist presented with advanced renal impairment (Blood Urea 26.1 mmol/l, Creatinine 704 μmol/l). She had been fatigued with general malaise for the preceding three months, but denied any constitutional symptoms. She had developed modest nocturia, but had no other lower urinary tract symptoms. She had been previously healthy, and denied the use of any medications or herbal remedies. Physical examination revealed pallor, hypertension (Supine BP 210/110) and euvolumia, no mucocutaneous lesions, and no uveitis. Urine analysis was positive for protein on stick testing, with negative microscopy. Full Blood Count at presentation confirmed normocytic normochromic anaemia, without eosinophilia. A renal tract ultrasound was performed, and revealed normal sized unobstructed kidneys. Renal biopsy showed a 9 mm core of renal cortex in which the glomeruli and vasculature did not exhibit any significant abnormality. By contrast there was diffuse and striking cortical tubular atrophy with evidence of ongoing epithelial shedding (Fig. 1). The tubular epithelium was markedly attenuated to the point where proximal and distal tubules were not reliably distinguishable. Tubular cells showed occasional apoptosis, increased nuclear variability, scattered tubules showed cytoplasmic vacuolisation, and occasional cells showed cytoplasmic lipochrome pig-ment. Tubular basement membranes were thickened and wrinkled. The cortical tubules were widely separated by an expanded, rather pauci-cellular interstitial matrix in which sparse small lymphocytes, macrophages and occasional eosinophils were evident. No kidney was present in the immunofluorescence sample, but electron microscopy showed a glomerulus with only mildly expanded mesangial matrix, mild wrinkling of the basement membrane, no immune complex deposits or podocyte foot process fusion. No immune deposits were evident in the interstitium on electron microscopy. It was concluded that the biopsy showed a chronic low grade interstitial nephropathy that was suggestive of a toxin-induced process. Despite re-interrogation of the patient, exposure to all known nephrotoxins was denied. 
The patient received a trial of oral steroid therapy but failed to improve and commenced regular haemodialysis four months later. Fifteen months after commencing haemodialysis, the patient was readmitted to the emergency unit with increasing confusion. Admission was followed by three short generalized seizures terminated by intravenous diazepam, with a prolonged depressed level of consciousness post-ictally. As metabolic causes for seizures had been excluded, a Computerised Tomogram (CT) of the head was performed. This did not identify a cause for the seizures, but revealed significant cerebral atrophy deemed out of keeping with the patient's age. Magnetic Resonance Imaging (MRI) confirmed cerebral atrophy, and additionally identified subtle deep white matter changes. Figure 1 Renal biopsy specimen obtained at the time of clinical presentation with advanced renal impairment. There is severe tubular atrophy and interstitial changes consistent with an advanced chronic interstitial nephropathy. A) Low power haematoxylin and eosin stain demonstrating chronic interstitial nephropathy. B) High power view indicating tubular debris within the nephron lumen (arrowed). C) High power view demonstrating marked tubular cell vacuolisation. D) PAS stain demonstrating thickened basement membrane and marked cytoplasmic lipochrome pigmentation. Following treatment with intravenous phenytoin she suffered no further seizures, though she remained comatose for three days, recovering slowly thereafter. At this time, examination revealed proximal muscle weakness, ataxia, a wide-based gait, blunted affect, and evidence of memory impairment. A period of in-hospital rehabilitation was followed by a home visit with an occupational therapist, at which time the patient was witnessed inhaling creosote vapour from a container filled with coal tar creosote and kept on a kitchen shelf for this sole purpose. Upon interrogation she admitted inhaling creosote vapour daily and often for at least 6 years prior to the first hospital admission, frequently carrying a concealed supply on her person when venturing from home. Indeed, a neighbour had surreptitiously provided her with a creosote-impregnated cloth in a sealed sandwich container, facilitating continued unidentified inhalation during her inpatient stay. Further progression of her neurological condition was evidenced by the subsequent development of dementia over a period of months. Discussion Inhalant substance abuse is widespread, typically involving the inhalation of glue, spray paint, petroleum, correction fluid, nail polish remover or dry cleaning fluid by adolescents. Polycyclic Aromatic Hydrocarbons (PAHs) are common to these substances, and their inhalation has multiple toxicities. As all these compounds contain a variety of different PAHs, it is impossible to determine the individual toxicity of any particular PAH. Coal Tar Creosotes likewise contain many PAHs, and creosote vapour includes naphthalene, acenaphthene, fluorene, phenanthrene, anthracene, fluoranthene and pyrene among many others. All of these are detectable in urine following inhalation exposure, with naphthalene being the most abundant [2]. Here we describe the first case of creosote misuse by vapour inhalation. Nephrotoxicity PAH exposure is associated with a higher risk of renal dysfunction and renal cancer.
A large body of evidence confirms nephrotoxicity with chronic exposure to PAHs [3], and experimental data have demonstrated both glomerular and tubular toxicity [4]. Specific data on the renal effects of creosote vapour inhalation are limited, although animal studies suggest renal toxicity. Systemic exposure by inhalation has been demonstrated by detection of the pyrene metabolite 1-hydroxypyrene in urine of exposed subjects [5]. In an experimental model of coal tar creosote (CTC) vapour exposure, Springer et al observed pigmentation of tubular epithelial cells, proliferation of urothelium, and a significant increase in BUN concentrations [6]. In a cross-sectional study by the Tabershaw Occupational Medicine Associates (TOMA 1981), wood factory workers exposed to creosote had an increased incidence of haematuria, red cell-and granular casts, although renal function of this cohort was not documented. Interestingly this group also noted eosinophilia in 8% of workers. The smallest and most volatile component in creosote is naphthalene, the principal ingredient of moth balls. Addiction to naphthalene inhalation has been reported, and naphthalene exposure has been associated with chronic renal failure [7]. Mechanistically, a number of factors may contribute to the nephrotoxicity observed with hydrocarbon exposure, as reviewed by Ravnskov [3]. First, hydrocarbons may induce renal injury by combining with renal proteins and acting as haptens to induce auto-immunity against renal cells. Second, hydrocarbons may modify T-cell function leading to an unfavourable cytokine profile and predisposing to T-cell mediated renal injury. This hypothesis is supported by the abrogation of renal injury in animal models by glucocorticoid pretreatment. Third, animal studies demonstrating an effect of species and sex on susceptibility implicate the importance of genetic and hormonal susceptibility. Neurotoxicity Volatile hydrocarbons are rapidly taken up into the circulation, and are sequestered in adipose tissues. Uptake by neurons is particularly rapid, accounting for the rapid 'high' often experienced with inhalant abuse. Subsequent excretion of PAHs or their metabolites occurs via urine, bile and feces. Inhalation of substances containing PAHs have been associated with confusion, disorientation, nystagmus, ataxia, cerebellar degeneration, tremor, white matter degeneration, memory loss, dementia and seizures [8], and there is in vitro evidence for a direct neurotoxic effect of many PAHs present in creosote vapour [9]. However, direct neurotoxicity from creosote vapour inhalation has to our knowledge not been described before. It is important to note that our patient was exposed to creosote vapour for many years prior to presentation, and neurotoxicity prior to dialysis dependence had not been overt. Reports of patients with end stage renal failure due to chronic PAH inhalation do not give the total exposure levels, and so its effects in this population are unknown. As our patient had become oligo-anuric over the initial 18 months of dialysis dependence, we hypothesize a susceptibility to neurotoxicity in this individual due to a decreased renal excretion of PAHs and their metabolites. The neurological signs were similar to that described with inhalant abuse of other aromatic hydrocarbons like toluene [10], including ataxia, gait disturbance, seizures, generalized cerebral atrophy and deep white matter changes. 
Conclusion Taken together, the clinical findings of this case and current understanding of PAH toxicities strongly implicate creosote vapour inhalation as the cause of this woman's clinical presentation. The ultrasound findings of 'normal sized kidneys' is perhaps unusual for a chronic interstitial nephritis. It is the case, however, that ultrasonographic estimation of renal size by ultrasound may vary by ~1.6 cm between operators or scans [11] and the histological picture indicates chronic kidney damage. Although causality remains unproven and most histological features are common to any toxin-induced interstitial nephritis, the cytoplasmic pigmentation of renal tubular cells identified on renal biopsy is unusual and congruent with experimental data [6]. As is often the case with inhalant abuse, this diagnosis proved difficult to establish despite repeatedly questioning the patient. Study of urinary hydrocarbons or tubular protein excretion was not undertaken as vapour inhalation was not noted until the patient was dialysis dependent and anuric. Nevertheless our contention that the creosote inhalation was instrumental to the pathogenesis of chronic renal failure is supported by an increasing body of evidence implicating PAH exposure in the development and progression of renal disease. This notion has implications for the significance of occupational exposure to PAHs. Our findings imply an occupational health merit in monitoring renal function in workers chronically exposed to coal tar and its volatiles, a requirement further supported by the possible increased risk of extra-renal toxicity in subjects with pre-existing renal impairment. Finally, this case highlights the often-underestimated value of assessing patients in the home environment.
18323500
s2orc/train
v2
2016-05-14T10:15:56.789Z
2014-02-17T00:00:00.000Z
Frequency-converted dilute nitride laser diodes for mobile display applications We demonstrate a 1240-nm GaInNAs multi-quantum well laser diode with an integrated saturable electro-absorber whose wavelength is converted to 620 nm. For conversion, we used a MgO:LN nonlinear waveguide crystal with an integrated Bragg grating in direct coupling configuration. Broadened visible spectral width and reduced speckle as well as a high extinction ratio between the below and above threshold powers were observed in passively triggered pulsed operation with smooth direct current modulation characteristics. The demonstration opens a new avenue for developing 620-nm semiconductor lasers required for emerging projection applications. Background Red laser light sources emitting in the wavelength range of 610 to 620 nm are particularly interesting for mobile display applications due to increased luminous efficacy and higher achievable brightness within eye-safety regulations [1]. Unfortunately, this wavelength range is difficult to achieve by using traditional GaInP/AlGaInP red laser diodes (LDs) [2]. Another well-known drawback of GaInP/AlGaInP diodes is the reduction of characteristic temperature of threshold current (T 0 ) with wavelength. High T 0 values have been demonstrated with red laser diodes emitting at wavelengths above 650 nm [3], while shorter wavelength diodes suffer from poor temperature characteristics [4]. These features render impossible the use of standard AlGaInP laser diodes in embedded projection displays, where large operating temperature range is typically required. Frequency conversion of infrared laser emission is an attractive solution for the generation of short-wavelength red light [5]. While GaInAs quantum well (QW) emission wavelength is practically limited to approximately 1200 nm [6], by using dilute nitride GaInNAs QWs with a tiny fraction of nitrogen added to the highly strained GaInAs, the emission wavelength can be extended to 1220-1240 nm for high luminosity red light generation at 610 to 620 nm by frequency conversion [5]. In addition, excellent temperature characteristics and high power operation have been demonstrated with GaInNAs laser diodes in this wavelength range [7]. Methods The GaInNAs/GaAs semiconductor heterostructure was grown on an n-GaAs (100) substrate by Veeco (Plainview, NY, USA) GEN20 molecular beam epitaxy (MBE) reactor with a radio frequency plasma source for nitrogen, a valved cracker for arsenic, and normal effusion cells for the group-III materials and dopants. Silicon and beryllium were used as n-and p-type dopants. The active region of the laser structure consisted of two 7-nm thick GaInNAs QWs separated by a 20-nm GaAs layer. The Ga 1 − x In x N y As 1 − y QWs had the nominal indium and nitrogen compositions of x = 33.6% and y = 0.6%, respectively. This double-QW structure was embedded in GaAs whose thickness was 142 nm on both sides of the structure. The undoped waveguide structure was surrounded by 1.5-μm thick n-Al 0.30 Ga 0.70 As on the substrate side and 1.5 μm p-Al 0.30 Ga 0.70 As on the top side. On top of the p-AlGaAs cladding, a p-GaAs contact layer was grown to finalize the structure. Figure 1 shows the band gap profile of the structure and summarizes the layer thicknesses. Strong room-temperature photoluminescence (PL) emission measured from this structure peaked at 1231 nm, as shown in Figure 2. Two heterostructures, comprising one or two QWs, were considered for the frequency-doubled 620-nm laser demonstration. 
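As a quick check of the wavelength arithmetic behind this approach: frequency doubling halves the wavelength, so the 1231-nm PL peak (or the 1240-nm lasing wavelength targeted here) falls in the desired 610 to 620 nm window after conversion. In standard notation (a textbook relation, not specific to this device):

```latex
\[
  2\hbar\omega_{\mathrm{IR}} = \hbar\omega_{\mathrm{vis}}
  \;\Longrightarrow\;
  \lambda_{\mathrm{vis}} = \frac{\lambda_{\mathrm{IR}}}{2},
  \qquad
  \frac{1240~\mathrm{nm}}{2} = 620~\mathrm{nm},
  \qquad
  \frac{1231~\mathrm{nm}}{2} \approx 616~\mathrm{nm}.
\]
```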
The single-QW and double-QW structures were compared as broad-area ridge-waveguide (RWG) lasers in pulsed current mode. The double-QW structure was chosen because it showed only slightly higher threshold current than the single-QW structure (adding the second QW to the test structure increased the threshold current density from 500 to 610 A/cm2), and double-QW lasers are known to be less temperature sensitive, i.e., to have larger T0 [8], which is important for the targeted application. The difference between the slope efficiency values of the single-QW and double-QW structures was negligible. The processed laser chips employed a single-transverse-mode RWG process with a ridge width of 3.5 μm and a cavity length of 1250 μm. The laser diode further comprised an 85-μm reverse-biased saturable electro-absorber section to passively trigger short pulses for enhancing frequency conversion efficiency in the nonlinear waveguide. The front and rear facets of the laser diode were AR/HR coated with reflectivities of <1% and >95% at 1240 nm, respectively. A nonlinear waveguide crystal made of MgO-doped LiNbO3 with a high nonlinear coefficient was used for frequency doubling to visible wavelengths. The crystal had a surface Bragg grating implemented near the output end of the waveguide. The function of the surface Bragg grating is to provide self-seeding to frequency lock the IR laser diode in order to maintain sufficient spectral overlap with the acceptance spectrum of quasi-phase-matching. Results and discussion Free-running performance In free-running mode with the absorber section unbiased, the 1240-nm RWG laser diode exhibited an average slope efficiency of approximately 0.7 W/A and smooth L-I characteristics at 25°C, as shown in Figure 3. The temperature performance was investigated in continuous wave (CW) mode (i.e., with the absorber section forward biased via a contact to the gain section). Kink-free operation up to 300 mA was demonstrated over the temperature range from 25°C to 60°C, as shown in Figure 4. The corresponding characteristic temperature (T0) was 97 K for the low front-facet reflectivity-coated free-running LDs (see Figure 4). As can be seen in Figure 5, the lateral far field exhibited stable single-mode operation up to 350 mA with no evidence of beam steering. The beam opening angles (FWHM) were 40° and 17° for the fast and slow axes, respectively. Comparing the measured threshold current and T0 values with the values of related red AlGaInP-based laser diodes is difficult, because these lasers can hardly reach lasing at 620 nm at normal temperature and pressure. A commercial single-transverse-mode RWG laser diode operating at a longer wavelength (633 nm) [9] has a threshold current of about 60 mA at 25°C, which is identical to the value of the GaInNAs laser reported here. Based on the data available in the datasheet [9], the T0 of this commercial laser diode is estimated to be 89 K, which comes close to the value reported here for the GaInNAs laser. However, the T0 value of the free-running GaInNAs diode is suppressed due to the low front-facet reflectivity [10] and can thus be improved by providing wavelength-locking optical feedback from the Bragg grating in the nonlinear waveguide [11]. In addition, it is known that the performance of AlGaInP-based laser diodes, especially their T0 values, deteriorates strongly as the wavelength is decreased towards 620 nm [4,12,13].
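The characteristic temperature values quoted above (97 K here, and the 89 K estimate derived from the commercial diode's datasheet) follow from the usual empirical exponential model for the temperature dependence of the threshold current; a standard rendering, assuming two measurement temperatures T1 and T2:

```latex
\[
  I_{\mathrm{th}}(T) = I_{\mathrm{th}}(T_{\mathrm{ref}})\,
                       \exp\!\left(\frac{T - T_{\mathrm{ref}}}{T_{0}}\right)
  \quad\Longrightarrow\quad
  T_{0} = \frac{T_{2} - T_{1}}{\ln\!\bigl[I_{\mathrm{th}}(T_{2}) / I_{\mathrm{th}}(T_{1})\bigr]} .
\]
```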
The 1240-nm infrared emission from the GaInNAs laser diode is directly coupled Figure 5 Lateral far-field stability vs. current in continuous wave mode at room temperature. Figure 6 Coupling configuration of passively pulsed frequencyconverted 620-nm laser. to MgO:LN waveguide for single-pass frequency conversion. The surface Bragg grating is implemented near the output end of the nonlinear waveguide, while the reverse-biased saturable absorber is located near the highly reflective back facet of the laser diode. Both facets of the nonlinear waveguide, as well as the output facet of the laser diode, are AR-coated to suppress interface reflections. Successful wavelength locking and passively pulsed operation (with absorber reverse biased) are achieved with the direct coupling configuration between the GaInNAs laser diode and MgO:LN waveguide. The infrared and visible spectra were recorded using Yokogawa AQ6373 optical spectrum analyzer (Tokyo, Japan) with extended wavelength range. Compared with the CW mode, the infrared ( Figure 7) and visible spectra ( Figure 8) are broadened when the absorber section was biased with 0.4-to 1.5-V reverse-bias voltage triggering passively pulsed mode. A considerable reduction in the speckle visibility is observed under pulsed mode when compared with continuous wave operation. This observation is supported by the measured broadening of the visible spectrum. The L-I-V performance under the passively pulsed reverse-biased mode was investigated using 0.2-mA current resolution in the visible output power range of 0 to 1 mW, as targeted for near-to-eye display applications. The lasing threshold was 63 mA under 0.4-V reverse bias. Above the lasing threshold, the visible light output represented smooth, slightly non-linear L-I curve within the targeted operating power range. The results are summarized in Figure 9. The exceptional feature of the 620-nm frequency converted visible light source with 'no visible light below lasing threshold' is presented in Figure 10, where the emitted infrared light and visible light are shown with logarithmic Y-axis scale. Below the lasing threshold, there is spontaneous infrared emission up to 150 μW, while the visible light emission remained below the detector responsivity limit. When considering applications requiring high contrast ratio, such as near-to-eye and head-up displays, this greatly enhanced extinction ratio is expected to be of particular importance. The projected output beam of the 620-nm laser is presented in Figure 11. Conclusions A transversally single-mode frequency-converted GaInNAsbased 620-nm laser diode is demonstrated with high single pass conversion efficiency and extinction ratio. Further improvements of threshold current and conversion efficiency are expected by optimizing the laser diode manufacturing process and optical coupling configuration. Figure 10 Comparison of frequency-converted 620-nm and infrared 1240-nm output. Figure 11 Projected 620-nm output beam of the GaInNAs laser diode. MgO:LiNbO 3 nonlinear waveguide crystal was used for single-pass frequency conversion from 1240 to 620 nm.
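The 'no visible light below lasing threshold' behaviour follows from the quadratic dependence of single-pass second-harmonic power on the coupled infrared power, P_vis = eta * P_IR^2, which squares the infrared extinction ratio. The sketch below illustrates this; the conversion coefficient eta and the above-threshold infrared power are assumed placeholders rather than measured device values, and in practice the broadband, non-phase-matched character of sub-threshold spontaneous emission suppresses its conversion even further.

# Sketch: quadratic single-pass SHG transfer, P_vis = eta * P_ir**2.
# eta is an assumed conversion coefficient (1/W), chosen only so that the
# above-threshold point lands near the targeted ~1 mW visible level; it is
# not the measured efficiency of this device.
eta = 25.0  # 1/W, illustrative

def visible_power(p_ir_watt):
    return eta * p_ir_watt**2

p_below = 150e-6   # sub-threshold spontaneous IR emission (from the text)
p_above = 6.5e-3   # an assumed coupled IR power above threshold

vis_below = visible_power(p_below)   # overestimate: spontaneous emission is
                                     # broadband and not phase matched
vis_above = visible_power(p_above)

print(f"Visible below threshold (upper bound): {vis_below*1e9:.0f} nW")
print(f"Visible above threshold: {vis_above*1e3:.2f} mW")
# The quadratic law alone squares the IR on/off ratio:
print(f"Extinction ratio: {vis_above/vis_below:.0f}x "
      f"vs {p_above/p_below:.0f}x in the infrared")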
254600190
s2orc/train
v2
2022-12-14T15:36:24.035Z
2017-02-02T00:00:00.000Z
An Investigation of the Degree of Market Power in the Greek Manufacturing and Service Industries This paper investigates the degree of market power in the Greek manufacturing and service industries over the period 1970–2007. The markup model developed by Hall (1988) and Roeger (1995) is taken into consideration where market power is expressed as the difference between the selling price and the marginal cost of production. The analysis will be conducted in three steps; the first step estimates the price-cost margin of the manufacturing and services industries over 1970–2007; the second step applies the cross section specification under which the markup ratio is obtained for the 23 manufacturing and 26 service 2-digit ISIC sectors of the panel sample; and the third step estimates the price-cost margin of the manufacturing and services industries for each year over 1973–2007 by employing the Hall-Roeger time series specification. The empirical findings suggest that both industries exert a positive markup ratio; however, the service industry appears to be less competitive than the manufacturing industry. competition in the most influential markets of any economy. If competition is enhanced, then social welfare will tend to be equal to the optimal level proposed by perfect competition. Thereby, the markets will be operating efficiently by utilizing the production capabilities of the firms to their fullest. The European Commission (Europe 2012) has announced the formulation of a policy framework under which the European Union members can reach new growth levels by developing fully integrated networks. The main intention of this framework is similar to the Single Market Mechanism (SEM) which was introduced in 1987. It corresponds to the facilitation of an efficient market structure in which the setting price of the firms will tend to be equal to their marginal cost. Moreover, the OECD (2012) provides evidence that the Greek markets are the most heavily regulated within the OECD members due to a number of legislations that do not allow competition to flourish. They impose a number of restrictions, such as barriers to entry or very high fixed costs that discourage new firms to enter the market, thus providing the incumbent firms with market power. The main argument of the aforementioned reports is that competition results in increased output growth by enhancing economic activity. Consequently, increased production will lead to additional employment which will boost gross national income and the purchasing power parity of consumers. If this happens, then firms will gain more revenue due to increased sales and innovation will be used as a tool of competition. For this reason, there is a need of particular indicators expressed in terms of pricing and production decisions that can identify the degree of market power in various industries and sectors. In this context, the price-cost margin can be used as an indicator of price markup over the cost of inputs, such as intermediate inputs, labour and capital. As a result, whenever the price level exceeds the marginal cost of inputs, there is a degree of market power reflected by a higher price level compared to the optimal level of perfect competition. The methodology provided by Hall (1988) and Roeger (1995) will be employed in this study in order to identify the market structure of the two most influential industries of the Greek economy: the service and the manufacturing industry. 
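As a concrete illustration of the price-cost margin used throughout this paper, the sketch below computes the markup ratio mu = p/mc and the corresponding Lerner index LI = (p - mc)/p = 1 - 1/mu for a hypothetical price and marginal cost; the numbers are illustrative only.

# Illustration of the price-cost margin: markup mu = p / mc and the
# Lerner index LI = (p - mc) / p = 1 - 1/mu. Values are hypothetical.
def markup(price, marginal_cost):
    return price / marginal_cost

def lerner_index(price, marginal_cost):
    return (price - marginal_cost) / price

p, mc = 1.18, 1.00          # e.g. a price 18% above marginal cost
mu = markup(p, mc)          # 1.18
li = lerner_index(p, mc)    # ~0.153

print(f"markup = {mu:.2f}, Lerner index = {li:.3f}")
# Under perfect competition p = mc, so mu = 1 and LI = 0.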
1 This methodology is known as the Hall-Roeger approach, under which the nominal growth rate of the Solow residual is independent of the growth rate of nominal capital productivity. Under perfect competition, the growth rate of value added must be equal to the growth rate of inputs. This equality is significant for market efficiency because it provides consumers with higher product quality as a result of lower prices and higher innovation (Rezitis and Kalantzi 2013). However, if the former growth rate exceeds the latter, the market is characterized by imperfect competition. This happens because the price level is higher compared to the one of perfect competition, thus resulting in under-production. This study applies the Hall-Roeger approach in the Greek manufacturing and service industries under a three-step approach as introduced by Rezitis and Kalantzi (2011). The first step concerns the estimation of the price-cost margin in both industries over the period . The second step employs the cross section specification by identifying the price markup in the 23 manufacturing and the 26 service 2-digit ISIC sectors of the panel data set individually. Lastly, the third step employs the time series specification through the estimation of the price cost margin of both industries for each year over 1973-2007. Consequently, this study aims to complement the findings of Rezitis and Kalantzi (2011, 2012a, b, 2013 and Polemis (2014a, b, c) of the degree of market power in the Greek manufacturing and service sectors. This paper is organized as follows: Section 2 provides the literature review of the price cost margin approach; Section 3 develops the model formulation; Section 4 presents the methodology; Section 5 provides and discusses the empirical findings; and section 6 offers a conclusion. Literature Review An important contribution to the price-cost margin literature was made by Hall (1988) under the assumption that markets are perfectly competitive when the price level is equal to the marginal cost of the firms. When the price level is higher, the market structure is considered to be uncompetitive. However, while the price level is observable, the marginal production cost of the firms may not be known. As a result, Hall overcame this drawback by showing that the nominal growth rate of the Solow residual is independent of the growth rate of nominal capital productivity. The price-cost margin approach was applied in the United States manufacturing industry and provided evidence of market power as the price level was higher than the marginal cost of production. In particular, Solow (1957) introduced the concept of residual in the production process by taking into account a production function which allowed technical change to be included along with the inputs of labour and capital. By applying this formulation in the United States over 1909-1949, where output per hour approximately increased by 100%, he found that 12.5% of the increment in labour productivity could be attributed to increase capital per hour. However, the remaining 77.5% is explained by different factors than labour and capital accumulation which refer to the Solow residual. For this reason, the calculation of such unobserved shocks may not be feasible and thus, they may restrict the calculation of the price-cost margin. Nevertheless, Roeger (1995) expanded this framework by taking into consideration the difference between the production-based (primal) Solow Residual (PRS) and the cost-based (dual) Solow Residual (DSR). 
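A minimal growth-accounting sketch of the primal (production-based) Solow residual discussed above: total factor productivity growth is what remains of output growth after subtracting input growth weighted by factor shares. The growth rates and shares below are invented for illustration and do not reproduce Solow's or Hall's estimates.

# Primal Solow residual (growth-accounting form): the part of output
# growth not explained by share-weighted input growth. Numbers are
# purely illustrative.
def solow_residual(d_y, d_l, d_k, share_l, share_k):
    """d_* are log growth rates; shares are factor income shares."""
    return d_y - share_l * d_l - share_k * d_k

d_y, d_l, d_k = 0.030, 0.010, 0.040   # output, labour, capital growth
share_l, share_k = 0.65, 0.35         # factor shares (sum to one here)

srp = solow_residual(d_y, d_l, d_k, share_l, share_k)
print(f"Primal Solow residual (TFP growth proxy): {srp:.4f}")
# The dual (cost-based) residual replaces quantity growth with input-price
# and output-price growth; combining the two residuals cancels the
# unobserved productivity term, which the markup regressions below exploit.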
This formulation is used in order to eliminate the unobservable shock of productivity and thus, obtain an unbiased estimate of market power. The indicator of market power is reflected by the difference between the growth rate of value added and the growth rate of inputs. Consequently, this is the final form of the markup equation which is employed by many studies over a number of industries in various economies. In particular, there have been several studies that utilize the Hall-Roeger approach in order to test the degree of market power in the United States manufacturing industry. Shapiro (1987) and Norrbin (1993) found evidence in favour of markups consistent with oligopolistic pricing decisions as the manufacturing firms have been operating under imperfect competitive conduct. Bhuyan and Lopez (1998) validated such results for the United States food and tobacco sectors. They found that the price-cost margin resulted in oligopoly-induced allocative efficiency losses equal to 5% of sales over 1987. On the other hand, Mazumder (2014) contradicts such findings by employing a generalized version of the Hall-Roeger approach. The new version includes a relaxation of the assumption that labour can be adjusted at a fixed wage rate at no cost. The results support the presence of countercyclical and decreasing price-cost margins since the 1960s because the main factor influencing this measure is the share of imports in this industry. As a result of increasing foreign competition, the price level fell, thus converging to the one of perfect competition. Moreover, Martins et al. (1996) applied the Hall-Roeger approach in 14 OECD manufacturing industries over the period 1970-1992. The model took into account output in terms of gross value added and for this reason, the variable of intermediate inputs was added in the cost function. The findings support the presence of positive and significant markups across the industries, thus verifying the presence of imperfect competition. Bloch and Olive (2003) investigated the presence of markups in the manufacturing industries of the United Kingdom, the United States, Germany and Japan over 1970-1991. The evidence rejected the markup model in many industries; however, a positive relationship between the price-cost margin and the level of industrial concentration was identified. Concentrated industries are more likely to exhibit higher markups which are influenced by competing foreign prices. As a result, markups are either pro-cyclical or a-cyclical. Görg and Warzynski (2003) studied the markup behaviour of the United Kingdom manufacturing industry over 1990-1996. The results provided evidence that exporting firms tend to exhibit higher markups than non-exporting firms. In addition, higher markups also depend on the degree of product differentiation. In sectors with homogenous products the price-cost margin tends to be lower compared to sectors with differentiated products. Boyle (2004) estimated the price-cost margin of the Irish manufacturing sectors over 1991-1999. The sample was differentiated into output-oriented and input-oriented sources of market power. The findings do not support the presence of imperfect competition in output-based markets; however, there is strong evidence of imperfect competitive conduct in certain input-based markets. Dobrinsky et al. 
(2004) applied the Hall-Roeger approach to a panel of Hungarian and Bulgarian manufacturing firms over [1974][1975][1976][1977][1978][1979][1980][1981][1982][1983][1984][1985][1986][1987][1988][1989][1990]. They found evidence that support the presence of positive markups associated with production technology and scales of economy. In addition, Dobbelaere (2004) studied the markup behaviour of the Belgian manufacturing firms over 1988-1995. The product and labour markets were taken into account in order to investigate the degree of heterogeneity in the price markup and the bargaining power of unions. The results indicate that the inclusion of the labour market in the analysis of the product market is essential as the value of markup is underestimated when the study of the latter market is conducted independently. Consequently, sectors with higher labour bargaining power tend to exhibit higher price-cost margins. Wilhelmsson (2006) investigated the degree of market power in the Swedish food and beverages sector over 1990-2002 and the effects imposed by the competitive forces of the European sectors. The estimates show that many firms exhibit positive price-cost margins; however, increased competition from the European Union sectors resulted in reduced market power. Thereby, foreign competition had a negative effect on the markup level of domestic firms. In a relevant study, Molnár (2010) estimated the price-cost margin for the manufacturing and service industries of Slovenia over 1993-2006. The estimates conclude that the price-cost margin appears to be higher on average in the service than the manufacturing sectors. Similar results were obtained by Molnár and Bottini (2010) for a number of OECD European countries over 1993-2006. The estimated markups tend to be higher for particular sectors, such as real estate and professional service and lower for sectors such as retail and wholesale trade. Moreover, the forces of competition appear to be more persistent in the sectors of the United Kingdom and the Scandinavian countries, except Sweden, and lower in Central European countries (see Polemis 2014c). Christopoulou and Vermeulen (2012) also formed a panel set of European countries and investigated the markup ratio to identify the degree of market power. As in the previous study, the average markup ratio in the service industry is higher compared to the manufacturing industry, thus concluding that the service sectors are more flexible exercising their market power on the price level. Noria (2013) investigated the effect of the North American Free Trade Agreement on the price-cost margin of the Mexican manufacturing sectors over [1994][1995][1996][1997][1998][1999][2000][2001][2002][2003]. The findings support the fall of that margin in 1994 as an immediate interaction to foreign competition, but its pattern is uncertain over the following years. The author differentiates the sample into sectors that were liberalized in 10 years and sectors that were liberalized in 5 years. Competition was more intense in the former group by forcing those sectors to adjust the price level to their marginal cost of production. On the other hand, the market structure of the latter group remained imperfectly competitive due to various domestic factors. Similar studies have been performed for the Greek manufacturing and service industries by employing the Hall-Roeger approach. 
Rezitis and Kalantzi (2011, 2012a, b, 2013 Overall, the aforementioned studies conclude that the majority of the constituent industries and sectors exhibit a degree of market power expressed in terms of positive price-cost margins. This means that the price level exceeds the marginal cost of production, thus allowing firms to enjoy positive profit levels through under-production. As a result, the degree of social efficiency is not at its optimal level as consumer surplus is exploited by firms. In this context, the Hall-Roeger approach provides a sufficient empirical tool of analysis that allows the investigation of market power in several industries. Model Formulation and Data The approach employed in this study corresponds to the model developed by Hall (1988) and extended by Roeger (1995) in order to provide an unbiased estimate of market power. In particular, an industry is assumed that produces output (y t ) according to a homogeneous production function f using three inputs: intermediate inputs (m t ), 2 labour (l t ) and capital (k t ) where θ t is an index of total factor productivity (Hicks neutral productivity term) reflecting technological progress and t denotes the time interval. Any output variation is independent of input fluctuations through disembodied changes in technology. According to such production function, Hall (1988) showed that the production-based (primal) Solow Residual can be defined as the difference between output and input growth weighted by their shares in total value added. However, in this study, output is expressed in terms of gross output and thus, total value added is replaced by this measure. For this reason, the variable of intermediate inputs is included in the production function in order to avoid biased overestimated markup values. The main assumptions of this formulation are (i) constant returns to scale, (ii) imperfect competition in product markets, and (iii) perfect competition in the input markets. Therefore, the Solow Residual for this study is given by where a mt ¼ pm t m t =p t y t is the share of intermediate inputs in gross output, pm t refers to the price of intermediate inputs, a lt ¼ w t l t =p t y t is the share of labour in gross output, w t corresponds to the wage rate and p t is the price level of output. The coefficient LI t is the Lerner index that measures the market power of the industry and it is expressed where mc t is the marginal cost of production and μ t is the price markup over marginal cost. 3 However, the estimation of LI t is problematic in eq. (2) due to the presence of correlation between the measure of productivity growth and the error term, thus resulting in biased and inconsistent markup estimates. This weakness was identified by Roeger (1995) who pointed out that the difference between the change in price and the weighted change in factor input prices must be taken into consideration. By applying this formulation, one obtains where u t is the rental cost of capital. By subtracting (3) from (2) the productivity shock θ t is cancelled out, thus obtaining This is the final equation provided by Roeger (1995) that reflects the degree of market power. By re-arranging the terms, it follows This is the main formulation developed and utilized by Rezitis and Kalantzi (2011) and it is the markup equation which is going to be employed in the present study. 
For simplicity, it is assumed that where ΔY t reflects the growth rate of gross output per unit of capital, and ΔX t is the growth rate of intermediate inputs and labour expenses per unit of capital. Moreover, according to this formulation, when the value of the price-cost margin μ t is equal to unity, the market structure is perfectly competitive because the growth rate of gross output is equal to the growth rate of inputs. A value above unity shows that the industry sets a price level higher than the marginal cost of production and thus, it is described by imperfectly competitive conduct. Consequently, the first step of the analysis will estimate eq. (5) for the manufacturing and service industries over 1970-2007 in order to obtain the price-cost margin at the aggregate level. For simplicity, eq. (5) is also expressed as where μ reflects the price-cost margin of the aggregated manufacturing and service industry respectively over 1970-2007. The estimated parameter takes into account the whole panel of manufacturing and service sectors separately in order to obtain an aggregate estimation for both industries. The second step of the analysis will employ the cross section specification of the Hall-Roeger approach by identifying the price-cost margin of the constituent manufacturing and service sectors individually over 1970-2007. Thereby, eq. (6) is transformed into where μ i is the markup ratio of each 2-digit sector i for both industries and DS i is a cross section dummy variable (i = 1,..,N denotes the number of the constituent sectors) which is set to unity for sector i and zero otherwise. This variable allows for the estimation of individual effects reflected by the manufacturing and service sectors on the price-cost margin. The third and last step of the analysis refers to the time series specification of the Hall-Roeger approach. It provides evidence of the markup level of the aggregate industry for each year over 1973-2007. As a result, eq. (6) is transformed into where μ t is the annual markup ratio estimated for the manufacturing and service industries separately over 1973-2007 and DT t is a time dummy variable (t = 1973,…,2007 is the number of years) which is set to unity for year t and zero otherwise. This specification will identify the markup value for each year individually for the manufacturing and service industries. The data set is obtained from the EU KLEMS, 4 the AMECO and the World Bank database. The sample comprises of 23 2-digit ISIC manufacturing sectors and 26 2-digit ISIC service sectors over the period 1970-2007 as presented in Table 3 in appendix. The interpretation of the variables included in eqs. (6), (7) and (8) is as follows: p t and y t reflect gross output price and volume indices respectively (1995 = 100), pm t and m t are the intermediate inputs price and volume indices (1995 = 100), l t is the number of employees, w t is the labour cost expressed in terms of the compensation of employees and k t is the capital compensation at basic current prices. The observations of these variables were obtained directly by the EU-KLEMS database. On the other hand, the rental cost of capital u t is obtained by where (i − π e ) reflects the real interest rate, F t is the deflator of fixed asset investment and δ is the depreciation rate which is fixed at 5% across all sectors (Martins et al. 1996). The observations were obtained by the AMECO and the World Bank database and have been fixed for all manufacturing and service sectors. 
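A schematic sketch of how the markup regression can be run on a single sector's growth-rate series. It uses the standard Roeger specification, in which the estimated coefficient is the Lerner index LI and the markup follows as mu = 1/(1 - LI); the present paper estimates a rearranged version of the same relation, eqs. (5)-(6), with mu entering directly. The data below are simulated, not EU KLEMS observations.

import numpy as np

rng = np.random.default_rng(0)
T = 37  # annual growth-rate observations, e.g. 1971-2007

# With real EU KLEMS data the regression variables would be built as
#   y_t = d(p*y) - a_l*d(w*l) - a_m*d(pm*m) - (1 - a_l - a_m)*d(u*k)
#   x_t = d(p*y) - d(u*k)
# where d(.) denotes nominal log growth, a_l and a_m are the labour and
# intermediate-input shares in gross output, and u is a rental cost of
# capital of the usual (real interest rate + depreciation) * investment-
# deflator form. Here we simulate x_t and impose a known Lerner index.
true_li = 0.15                           # implies a markup of 1/(1-0.15) = 1.18
x = rng.normal(0.02, 0.02, T)            # nominal output/capital growth gap
y = true_li * x + rng.normal(0, 0.002, T)

li_hat = np.sum(x * y) / np.sum(x * x)   # OLS through the origin
mu_hat = 1.0 / (1.0 - li_hat)
print(f"Estimated Lerner index: {li_hat:.3f}, implied markup: {mu_hat:.3f}")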
Methodology The estimation process of the aforementioned equations takes into account the fixed and random effects models in order to identify the individual effects in the panel sample. According to Baltagi (2001), a general case of a one-way linear unobserved individual effects model for N individual observations and T dated periods has the following form where y it is the dependent variable for individual i and time t, α denotes the overall constant term of this regression, X' it represents the transpose time variant regressors' vector (1xk), n i corresponds to the time invariant individual effects term which also addresses the cross-section effects (random or fixed) and e it is the idiosyncratic error term. Unlike the vector of regressors X' it, the time invariant individual effect n i cannot be easily estimated (i.e. due to historical or institutional factors). The fixed effects model considers that the heterogeneous individual effects term is correlated with the vector of regressors. Since n i cannot be controlled directly, the fixed effects model demeans eq. (10) by using the following transformation where Since the time invariant individual effect is fixed, the difference from its mean will be zero and thus, its effect from eq. (10) is eliminated. On the other hand, a simple random effects model has the following form and β 0i = β 0 + v i . By substituting the latter into the former equation, one obtains where y it is the dependent variable for individual i and time t, β 0 denotes the overall constant term and X' it represents the transpose time variant vector of regressors (1xk). Those terms can be viewed as the fixed part of this model. On the other hand, the random part consists of the two terms v i and e it which are correlated. In particular, v i is the individual effect for each sector i = 1,…,N, which is not correlated with X' it and allows for differential intercepts over the given time sample and e it corresponds to the error term. As a result, the random effects model is preferable to the fixed effects model when correlation emerges between the individual effects and the error term of the model. Such effects can be captured by parameter v i and test whether the fixed or the random effects model is more suitable. Empirical Results The estimation process of the manufacturing and service industries is conducted in three steps under which the Hall-Roeger approach is applied. The first step estimates the price-cost margin of both industries by aggregating the panel sample; the second step provides the markup values for each manufacturing and service sector individually over 1970-2007; and the third step presents the results of industrial pricing decisions for each year over 1973-2007. This process will provide evidence about the degree of market power in the constituent industries and sectors and whether the findings suggest imperfect competitive conduct. Table 1 presents the diagnostic tests for each estimated equation under the three step procedure. The first test corresponds to the Breusch and Pagan LM test (Breusch and Pagan 1980) for the identification of cross section dependency in the panel sample. The results suggest that the three Hall-Roeger specifications for both industries are subject to such dependency, thus preventing the use of the pooled OLS estimation technique due to this form of contemporaneous correlation. 
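A minimal sketch of the within (demeaning) transformation behind the fixed effects estimator described above, applied to a simulated sector panel. It is not the FGLS procedure ultimately used in this study, which additionally corrects for heteroskedasticity and serial correlation.

import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
sectors, years = 23, 35
idx = pd.MultiIndex.from_product(
    [range(sectors), range(years)], names=["sector", "year"])

# Simulated panel: y_it = alpha_i + beta * x_it + e_it
alpha = rng.normal(0, 0.05, sectors)                 # sector fixed effects
x = rng.normal(0.02, 0.02, sectors * years)
e = rng.normal(0, 0.005, sectors * years)
beta_true = 1.18
df = pd.DataFrame({"x": x}, index=idx)
df["y"] = alpha[df.index.get_level_values("sector").to_numpy()] \
    + beta_true * df["x"] + e

# Within transformation: subtracting sector means removes alpha_i.
demeaned = df - df.groupby(level="sector").transform("mean")
beta_fe = (demeaned["x"] * demeaned["y"]).sum() / (demeaned["x"] ** 2).sum()
print(f"Fixed-effects (within) estimate of the slope: {beta_fe:.3f}")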
In addition, the fixed effects model is formulated using the dummy variables least squares technique (LSDV); while the random effects model is estimated using the generalized least squares (GLS) in order to take into consideration the presence of correlation between the individual effects and the error term. Therefore, the Hausman test (Wu 1973;Hausman 1978) is employed in order to identify which model is best suited under the null hypothesis that the individual effects are not correlated with the explanatory variables. Moreover, White's test (White 1980) and the Breusch and Godfrey LM test (Breusch 1978;Godfrey 1978) are used in order to identify the presence of heteroskedasticity and serial correlation in the panel data sample. According to the results, the three specifications for both industries are estimated using the fixed effects model. However, given the presence of heteroskedasticity and serial correlation, the feasible generalized least squares (FGLS) estimation technique is applied in order to take into consideration those problems. The estimated markups for the manufacturing industry are presented in Table 2. The pricecost margin is equal to 1.180. A value equal to unity suggests that the growth rate of gross output is equal to the growth rate of inputs and thus, the price level is equal to the marginal cost of production. The value of the manufacturing industry shows that the price level exceeds the marginal cost of production by 18% over 1970-2007. As a result, the industry has been operating under imperfect competitive conduct charging a higher price level compared to the one of perfect competition. The results of the cross section specification are presented in the second column. This particular specification allows the inclusion of cross section individual effects in the panel sample to identify the price markup of each sector according to the value of the whole industry. The values range over 1.072-1.554 suggesting that all manufacturing sectors exert a positive price markup, thus operating under imperfect competitive conditions. The lowest values are obtained by the sectors of motor vehicles, trailers and semi-trailers (i.e. 34), of pulp paper and paper (i.e. 21) and of other machinery products (i.e. 29). The highest values are estimated for the sectors of tobacco (i.e. 16), of other manufacturing products (i.e. 36) and of wood and cork (i.e. 20). This shows that the markup ratio of the manufacturing sectors is similar to the value of the aggregate industry. The difference between those values may be due to the number of firms operating in each sector and/or their ability to innovate. It is expected that sectors with a limited number of firms will tend to be more oligopolistic compared to sectors with many firms. Also, innovating firms will have the option to charge a higher price level as a result of increasing the quality of their products, thus rendering them more attractive to both domestic and foreign markets. 5 However, in order to conclude that the manufacturing industry operates under imperfect competition, we must also estimate the price-cost margin for each year individually. For this reason, the Hall-Roeger time series specification is applied on eq. (6) to identify the annual markup ratios over 1973-2007. The results are presented in the last column of Table 2 and illustrated in Fig. 1. Over the period 1973-1983 the price-cost margin was quite stable around 1.17. 
However, in the following years (1984)(1985) it rapidly fell to 1.08 only to be increased in 1986. An 5 Nevertheless, Giokas et al. (2015) argue that the capital stock of the Greek manufacturing sectors was not improved significantly over 1995-2003. This means that technological progress on average was not the main tool of competition. (1985), where y is the dependent variable and ŷdenotes the fitted value of y interpretation of this behaviour may refer to the introduction of the Single European Market (SEM) which was about to be implemented in 1987. For this reason, firms tried to attract more customers in the short-run in order to increase their profits. The markup values over the years 1987-1989 are not available given some limitations in the rental cost of capital. The Single European Market was gradually implemented in 1987 and completed in 1992. It can be seen that the market power in 1990 remained the same as in the previous years, but it gradually fell up to 1993. This outcome may refer to the successful implementation of this framework that enhanced competition in the Greek manufacturing industry through free trading networks with the European countries. In addition, in 1993 there was an attempt to boost the competitive forces of the manufacturing firms by increasing production and reducing the price-cost margins. A number of developmental laws and The values in parentheses are t-statistics. B-B denotes lack of observations in some variables *Significant at the 5% level of significance **Significant at the 1% level of significance operational programs, such as the BOperational Programme for Research and Technology IIâ nd the BIndustrial Research Development Programme^contributed to the research and technological innovation and infrastructure of the Greek firms (Rezitis and Kalantzi 2011). Over the following years there is an increasing trend in the markup ratio with a temporary under-spike in 1999, reaching its climax in 2001. The price level exceeds the marginal cost of production by 26% and the main reason of such increase corresponds to the introduction of the euro currency in the Greek economy. The new currency resulted in additional Purchasing Power Parity for consumers and thus, the manufacturing firms aimed to take into advantage additional levels of consumer surplus created by this shock. Subsequently, the markup level started to fall, reaching a value equal to 1.20 in 2004, under which the hosting of the Olympic Games occurred. Even if domestic and foreign demand were boosted over this year, the results show that the price-cost margin did not rapidly increase but instead, it remained in a relatively high level. Over the last years, there is a significant increase in 2006 but subsequently, the markup ratio was reduced to a level equal to 1.09. This outcome may have been caused by the increasing price of intermediate inputs 6 over 2007-2009 and the slow adjustment rate of the price level to such changes. In addition, the upcoming financial crisis of 2008 might have rendered the manufacturing firms more reluctant to acquire additional market power due to future demand uncertainty, even when the aggregate economy was growing. The evidence presented for the manufacturing industry validate the results of Rezitis and Kalantzi (2011, 2012a, b, 2013 and Polemis (2014a, b, c) about the imperfect competitive market structure. 
The values may vary because of the different data set and the underlying methodology but the empirical suggestions point to the direction of imperfect competition in the industry. The markup estimates for the service sectors are presented in Table 3. In particular, the value for the service industry is equal to 1.311 denoting that the industry has been charging a price level 31% higher than marginal cost over 1970-2007. This value is higher compared to the one of the manufacturing industry, thus indicating that the service industry is less competitive. This outcome validates the suggestions of several studies, such as Molnár (2010) and Molnár and Bottini (2010) in favour of higher markup levels exercised by the service industry. 6 Such inputs refer to rotation soybeans, rotation corn and continuous corn. 27 1973 1978 1983 1988 1993 1998 2003 Price-cost margin Years Fig. 1 The price-cost margin of the Greek manufacturing industry over 1973-2007 This argument is also validated by the sectorial estimates obtained under the Hall-Roeger cross section specification. The values are presented in the second column of Table 3; however, there are sectors with high markup ratios and sectors with markups equal to unity. This means that even if the aggregate service industry operates under imperfect competitive conduct, there are sectors that behave according to perfect competition. In particular, the lowest price-cost margins are estimated for the sectors of insurance and pension funding (i.e. 66) and of public administration and defence (i.e. L) which are almost equal to unity. This outcome suggests that the price level of those sectors corresponds to the marginal cost of service provision. The highest values are estimated for the sectors of real estate activities (i.e. 70), and of other service activities (i.e. 93). The latter values reflect a price level that exceeds marginal cost by more than 100% suggesting that the market structure of those sectors is highly oligopolistic. These markup ratios may be interpreted according to the degree of product differentiation, as service provision is considered to be quite heterogeneous across firms and sectors. The third and final step provides the estimates of the service industry for each year over 1973-2007. The markup values are presented in the last column of Table 3 and illustrated in Fig. 2. Unlike the behaviour of the manufacturing industry, the service industry experienced more volatile fluctuations in the price-cost margin. In particular, the degree of market power is relatively high over the period [1973][1974][1975][1976][1977][1978][1979][1980][1981][1982][1983][1984][1985][1986] where the highest value is observed in 1974. With the exception of 1977, where the markup rate fell rapidly, the service industry experienced a stable trend over 1973-1984 with an average value equal to 1.40. This may be a result of the limited number of firms operating in the industry over that period. Consequently, limited competition led to increased price-cost margins due to market power acquisition. Over the following years (1985)(1986), the price markup fell close to the level of perfect competition, but four years later it converged to its average value. Over the period 1990-1993, the ratio fell due to the implementation of the SEM which enhanced competition. Therefore, the outcome of this framework led to increased competitive interactions in both industries of this study. 
The following years are characterized by volatile fluctuations as 1998 is considered to be the year over which the industry was operating according to perfect competition. However, 2000-2001 is a period exerting increased price markups as a result of the new currency. As in the case of the manufacturing industry, the service sectors tried to acquire more profits through the exploitation of consumer surplus due to the increased level of Purchasing Power Parity. In 2002 the markup ratio temporarily fell, only to increase over 2003-2004 because of the hosting of the Olympic Games. A temporary fall is also observed in 2005 but subsequently, the markup trend tends to increase given the conditions of the aggregate economy that allowed for imperfect competitive conduct to persist. The empirical findings suggest that the pricing decisions of the service industry were different compared to the ones of the manufacturing industry. An interpretation of this outcome may lie on the market power that each constituent sector possesses. In general, the markup ratio of the manufacturing sectors is lower than the service sectors; however, there exist two service sectors that operate according to perfect competition. This means that even if the service industry exhibits a higher price-cost margin than the manufacturing industry, the pricing decisions of the constituent sectors may not be reflected by the aggregate value. Overall, the manufacturing and the service industries have been operating under imperfect competitive conditions over 1970-2007. The results obtained for both industries validate the presence of positive price-cost margins. However, the values across the manufacturing and service sectors are not similar to the ones obtained by Polemis (2014a, b, c). An interpretation of this outcome may lie on the econometrics procedure and the panel techniques employed in this study. 7 Moreover, the cross section and time series specification extend the analysis to the investigation of sectorial and annual industrial pricing behaviour. Consequently, the present study complements the argument that (i) the Greek manufacturing and service industries exert positive markup levels and (ii) the service industry is less competitive than the manufacturing industry. The values in parentheses are t-statistics. B-B denotes lack of observations in some variables *Significant at the 5% level of significance **Significant at the 1% level of significance Conclusion This study extended the market power investigation in the Greek manufacturing and service industries by employing the markup model formulated by Hall (1988) and Roeger (1995). The results suggest that both industries appear to have positive price-cost margins over 1970-2007. In addition, the constituent sectors exhibit a positive markup ratio with the exception of two service sectors (i.e. 66 and L) that set their selling price equal to the marginal cost of service provision. These suggestions are complemented by the annual markup values obtained for both industries at the aggregate level over 1973-2007. Consequently, it can be concluded that the Greek manufacturing and service industries operate under imperfect competitive conduct. A possible remedy that would enhance the forces and incentives of competition in these industries might refer to the re-introduction of developmental and operational programs, as in 1993, that will contribute to the innovative and technological infrastructure of the Greek sectors. 
In particular, the European Commission (Europe 2012) is working on a policy framework for the European Union members under which domestic markets will achieve new levels of growth by developing fully integrated networks that will enhance the economies overall. One of the most important factors that may contribute to this outcome is the enhancement of business environment by introducing opportunities for active and new entrepreneurs. A possible barrier that prevents such opportunities may refer to barriers to entry due to market power acquisition by the incumbent firms. According to IOBE (2014), the Greek business environment leaves little place for new firms to operate because of the presence of heavy regulation and monitoring imposed by oligopolistic firms. Consequently, barriers to entry in oligopolistic markets should be eliminated so that new entrepreneurs can start their business in the Greek manufacturing and service industries. Moreover, the findings of this study complement the arguments of the OECD that the Greek economy and in particular, the manufacturing industry is under-performing (OECD 2012(OECD , 2014. There are 555 problematic regulations identified where 329 of them could be improved by enhancing competition. This means that the Greek manufacturing industry is heavily regulated, thus constraining its efficiency and capacity that results in welfare losses and market power exploitation. The second report focuses on the sectors of beverages (i.e. 11); textiles, clothing apparel and leather (i.e. 13, 14 and 15); machinery and equipment (i.e. 28); and coke and refined petroleum products (i.e. 19). The findings are once again in favour of 5 1973 1978 1983 1988 1993 1998 2003 Price-cost margin Years Fig. 2 The price-cost margin for the Greek service industry over 1973-2007. Source: estimations of eq. (8) regulations that harm competition. This argument is validated by the positive price-cost margins presented in Table 2. As a result, the OECD makes 88 recommendations on improving legal frameworks by utilizing the EU legislation that minimizes barriers to entry and promotes incentives for innovation. To this end, innovation can be considered as a significant factor of competition through which firms will achieve economies of scale and diversify their products in order to enhance their sales. If innovation leads to this outcome, particular firms will gain competitive advantage against their competitors. When the same rationale is adopted by every market participant, the degree of imperfect competition will be reduced. Therefore, the Scumpeterian creative destruction will run its course by forcing inefficient and non-competitive firms to exit the market (Reinert and Reinert 2006). Overall, the present study provides evidence of an imperfect competitive market structure in the Greek manufacturing and service industries. Future research could take into consideration more disaggregated data at a firm level and test the pattern of the price-cost margin of the manufacturing and service firms. Moreover, the same methodology can be applied in the economies of the European Union and investigate whether the markup ratios across countries appear to be correlated because of the SEM framework. As a result, the market structure in the European manufacturing and service sectors will be investigated over time and be compared to the imperfect competitive structure of the Greek sectors. 
Appendix sector listing (continued):
Sector of research and development
74 Sector of other business activities
L Sector of public admin and defence; compulsory social security
M Sector of education
N Sector of health and social work
O Sector of other community, social and personal service
90 Sector of sewage and refuse disposal, sanitation and similar activities
91 Sector of activities of membership organizations
92 Sector of recreational, cultural and sporting activities
93 Sector of other service activities
P Sector of private households with employed persons
46137750
s2orc/train
v2
2010-09-14T03:24:40.000Z
2010-04-26T00:00:00.000Z
Anomalous ordering in inhomogeneously strained materials We study a continuous quasi-two-dimensional order-disorder phase transition that occurs in a simple model of a material that is inhomogeneously strained due to the presence of dislocation lines. Performing Monte Carlo simulations of different system sizes and using finite size scaling, we measure critical exponents describing the transition of beta=0.18\pm0.02, gamma=1.0\pm0.1, and alpha=0.10\pm0.02. Comparable exponents have been reported in a variety of physical systems. These systems undergo a range of different types of phase transitions, including structural transitions, exciton percolation, and magnetic ordering. In particular, similar exponents have been found to describe the development of magnetic order at the onset of the pseudogap transition in high-temperature superconductors. Their common universal critical exponents suggest that the essential physics of the transition in all of these physical systems is the same as in our model. We argue that the nature of the transition in our model is related to surface transitions, although our model has no free surface. We study a continuous quasi two-dimensional order-disorder phase transition that occurs in a simple model of a material that is inhomogeneously strained due to the presence of dislocation lines. Performing Monte Carlo simulations of different system sizes and using finite size scaling, we measure critical exponents describing the transition of β = 0.18 ± 0.02, γ = 1.0 ± 0.1, and α = 0.10 ± 0.02. Comparable exponents have been reported in a variety of physical systems. These systems undergo a range of different types of phase transitions, including structural transitions, exciton percolation, and magnetic ordering. In particular, similar exponents have been found to describe the development of magnetic order at the onset of the pseudogap transition in high-temperature superconductors. Their common universal critical exponents suggest that the essential physics of the transition in all of these physical systems is the same as in our simple model. We argue that the nature of the transition in our model is related to surface transitions although our model has no free surface. Real solids are commonly in a strained state. This can be due to a variety of reasons, ranging from forces applied upon them to the presence of structural defects, to ongoing phase transformations. Such strains affect the ordering processes of the materials [1][2][3][4][5][6]. Therefore, understanding the extent of these effects is important. In this Communication, we study the continuous order-disorder phase transition in a model of a strained material. The strain field we consider results from a "wall of dislocations", that is, a linear array of parallel edge dislocation lines. This particular arrangement of defects is relatively common in crystals, as it often occurs because of surface treatments. The resulting strain is inhomogeneous, and order develops inhomogeneously in the material, with ordered regions growing in quasi two-dimensional layers around a central cylindrical rod-shaped nucleus [1]. Each layer orders at a different critical temperature. In order to study the critical behavior of this process, we consider a mesoscopic spin model in which the coupling between spins reflects the strain field induced by the dislocation walls. 
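A schematic sketch of such a spin model: the zero-field Ising energy with bond-dependent couplings used below, evaluated on a small cubic lattice whose in-plane couplings are enhanced on one side of a central line and suppressed on the other. The dipole-like coupling profile used here is an illustrative stand-in for the strain-derived expressions of Eqs. (1) and (6), not their actual form.

import numpy as np

rng = np.random.default_rng(2)
Lx, Ly, Lz = 16, 16, 8
spins = rng.choice([-1, 1], size=(Lx, Ly, Lz))

# Schematic stand-in for the strain-modified in-plane coupling: a
# dipole-like pattern around a central dislocation line along z,
# enhancing J on one side (x > 0) and suppressing it on the other.
x = np.arange(Lx) - Lx / 2 + 0.5
y = np.arange(Ly) - Ly / 2 + 0.5
X, Y = np.meshgrid(x, y, indexing="ij")
r2 = X**2 + Y**2
tau = 0.5 * X / r2              # illustrative dipole-like tau_c(r) profile
J_xy = 1.0 + tau[:, :, None]    # in-plane coupling, broadcast along z
J_z = 1.0                       # coupling along the dislocation line

def energy(s):
    """H = -sum over nearest-neighbour bonds of J_ij s_i s_j (periodic)."""
    e = -(J_xy * s * np.roll(s, -1, axis=0)).sum()
    e += -(J_xy * s * np.roll(s, -1, axis=1)).sum()
    e += -(J_z * s * np.roll(s, -1, axis=2)).sum()
    return e

print(f"Energy of a random configuration: {energy(spins):.1f}")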
Performing several simulations of systems with different sizes and using finite size scaling, we are able to measure the critical exponents characterizing the transition. The critical exponents found are comparable with exponents that have been measured experimentally in a variety of materials, and for different types of transitions [1,2,[7][8][9][10][11][12][13][14]. Notably, similar critical exponents have recently been measured for the magnetic ordering transition that accompanies the onset of the pseudogap state in high T c superconductors [15][16][17][18]. These exponents are also compatible with those found in multicritical surface transitions [19], although in our case the exponents describe bulk measurements. Assuming that atoms interact more strongly where they are pushed closer together and more weakly where they are pulled apart, a phenomological model that captures the effect of strain on ordering due to a dislocation line can be constructed [1]. Assuming the defects are arranged in walls extending in the y direction with the lines parallel to z, it is found that the local relative critical temperature change τ c ( r) is where r is the normal vector pointing from the closest dislocation line, b is the magnitude of the Burgers vector, l is the unit of length used, ν is Poisson's ratio, h is the local average distance between defects, T ′ c ( r) is the local transition temperature and T c is the transition temperature for a defect-free crystal. This results in inhomogeneous ordering in which ordered regions nucleate and grow in the vicinity of the dislocation lines via the addition of quasi 2-D layers around nuclei with the shape of narrow cylindrical rods [1]. Here we study the universal critical scaling properties of this ordering process. To identify the essential physics that controls the scaling properties of this ordering behavior, we studied a zero-field 3D Ising model on a simple cubic lattice with periodic boundary conditions and nonconstant coupling J ij between nearest neighbor spins i and j. The Hamiltonian is where s i = ±1 is the value of the i th spin and ij indicates sum over the nearest neighbor spins on the lattice. The spins simply represent the state of local order. The value of the coupling J ij is chosen in order to reflect the strain field giving rise to Eq. 1 in the following way. First note that in a "regular" Ising model, with constant coupling J 0 , the critical temperature is proportional to the coupling constant: where a is some proportionality constant. Also, from Eq. 1 it follows that, given a τ c ( r), Therefore, from Eqs. 3 and 4, the parts of the system that become critical at a given temperature T ′ c are those that have a coupling Thus, given the arbitrarity of J 0 and of the other proportionality constants, we set where we take h to be the size of the system in the y direction. To reproduce the effects of the strain of a wall, we use the above expression only for the coupling between spins in the x and y directions, while we set the coupling of the spins along z at 1. The simulated systems con- tained a single dislocation line in the center. The replicas due to the periodic boundary conditions used effectively turned it into a wall of lines. Notice that while the strain field due to a single dislocation line is long-range, the one due to a wall is short-ranged [1]. 
However, the field of a wall maintains the dipole-like nature of the field of a single line, with the effect of promoting the order on one side of the system, while suppressing it on the other. The order parameter in our simulations was given by the ensemble averaged absolute value of the magnetization per spin: where N is the total number of spins. Using the Wolff algorithm [20], which is a cluster flipping algorithm [21], we performed extensive Monte Carlo simulations of this model. The cylindrical ordered regions grow with decreasing temperature as the surfaces of the cylinders order in a fashion consistent with earlier predictions [1]. Figure 1 shows the order parameter in an x-y cross section of a 45×91×40 system, averaged over z, at a temperature of 4.49 in units of the Boltzmann constant k B . As anticipated, order is increasingly enhanced with proximity to the dislocation line on one half of the system. On the other half, instead, order is increasingly suppressed. Also notice that the contour lines closely follow the predicted shape, shown in Fig. 6 of Ref. 1 and computed by numerically solving the following parametric equations for a particular value of τ c : We find that the ordering occurs via a continuous transition. To measure the critical exponents, we simulated systems of different sizes and estimated their values using finite size scaling [22]. The observables measured were the magnetization order parameter and the ensemble averaged total energy, given by Eq. 2. From the fluctuations of magnetization and energy we also calculated the magnetic susceptibility χ and the specific heat c. The measurements were taken at the same time over the entire system and over an arbitrarily chosen quasi twodimensional layer, corresponding to a fixed, chosen value of τ c . For each value of the temperature we took ensemble averages over a number of system updates between 10 6 and 10 8 . The whole system sizes were 59 × 23 × 13, 109 × 43 × 25, and 205 × 83 × 50, while the circumferences of the x-y cross sections of the quasi two-dimensional layers measured were 50, 102 and 200, corresponding to τ c = 0.9. The sizes of the systems in the x direction were chosen so that the coupling between spins was within 10 −6 of unity at the boundaries. To perform data collapses using finite size scaling, we define the scaled reduced temperaturet as where L is the length of the largest dimension of the system considered, which in our case corresponds to the length in the x direction, ν is the correlation length critical exponent and t ≡ T −Tc Tc is the reduced temperature. With this definition oft, the scaling functions for the order parameter, the magnetic susceptibility and the specific heat are, respectively, where β, γ and α are the corresponding critical exponents. The data collapses for the quasi two-dimensional layer, shown in Fig. 2, allow estimates of the critical indices of β = 0.18±0.02, γ = 1.0±0.1, α = 0.10±0.02 and ν = 2.0 ± 0.1, with a critical temperature of 6.7 ± 0.2. The errors were conservatively estimated as the range over which a reasonable scaling collapse was achieved. Similarly, the measurements of the whole systems, whose data collapses are shown in Fig. 3, allow the values of the critical exponents to be estimated as β = 0.18 ± 0.02, γ = 1.1 ± 0.2 and ν = 2.0 ± 0.25, with a critical temperature of 4.50 ± 0.05. We could not produce a good scaling collapse for the specific heat. 
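The data-collapse procedure itself is summarized in the sketch below: magnetization curves m(T, L) for several sizes are rescaled as m L^(beta/nu) against t L^(1/nu) and should fall onto one universal curve at the correct beta, nu and Tc. Synthetic curves generated from an assumed scaling function stand in for the Monte Carlo data.

import numpy as np

# Assumed exponent and Tc values, close to those reported in the text.
beta, nu, Tc = 0.18, 2.0, 6.7

def scaling_function(t_scaled):
    """A smooth stand-in for the (unknown) universal scaling function."""
    return 1.0 / (1.0 + np.exp(4.0 * t_scaled))

def synthetic_magnetization(T, L):
    t = (T - Tc) / Tc
    return L ** (-beta / nu) * scaling_function(t * L ** (1.0 / nu))

temperatures = np.linspace(5.5, 8.0, 11)
for L in (50, 102, 200):
    m = synthetic_magnetization(temperatures, L)
    # Data collapse: plotting m * L**(beta/nu) against t * L**(1/nu)
    # should put all sizes on one curve when beta, nu and Tc are right.
    t_scaled = (temperatures - Tc) / Tc * L ** (1.0 / nu)
    m_scaled = m * L ** (beta / nu)
    print(f"L={L:3d}: collapsed m near t=0 -> "
          f"{np.interp(0.0, t_scaled, m_scaled):.3f}")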
Note that we get essentially the same exponents for the whole system that we do for the quasi two-dimensional layer. This reveals that the nature of the transition of the whole system is essentially the same as that of a quasi twodimensional layer. At any given temperature, there is a part of the system that is critical. The biggest of these parts corresponds to the measured critical temperature for the whole system. Also notice that the susceptibility for the smallest system does not scale well near the peak, presumably due to finite size effects. The size of the error bars on the data shown in Figs. 2 and 3 is substantially smaller than the size of the symbols. The exponents characterizing the transition are compatible with those corresponding to the, so-called, "special" multicritical point in surface critical phenomena. In particular, the value β = 0.18 was reported in Refs. [19] and [23] and is consistent with prior theoretical calculations based on scaling [19,24]. Also, the measured mean-field value of the exponent γ = 1 is expected at the multicritical point [25]. Furthermore, using the "bulk" 3d-Ising value for the critical exponent ν = 0.632 in the scaling laws, as in Ref. [19], the hyperscaling relation predicts α = 0.104, which is compatible with the one we measured. Note, however, that while these previous studies considered systems with an actual surface, our model does not have free layers. In fact, the quasi twodimensional layers whose ordering we studied are in the midst of of the system. Nevertheless, the ordering in our system does occur in a quasi two-dimensional layer at the surface of the already ordered region. Similar exponents have also been measured for a number of different types of transitions in a variety of physical systems, ranging from structural transitions, to the percolation of excitons in polymeric matrices, to magnetic order in frustrated materials [1,2,[7][8][9][10][11][12][13][14]. In particular, as mentioned earlier, there have been recent observations of magnetic ordering at the onset of the pseudogap transition in high-T c superconductors in which similar critical exponents have been measured [15][16][17][18]. Given the scale invariant nature of critical phenomena, the fact that the phase transition in our model apparently has the same set of critical exponents suggests that the essential physics is the same in both systems, and that our results may be relevant to the open question of the nature of the pseudogap state itself. Intriguingly, recent experiments have shown that the onset of the pseudogap state is accompanied by local modulations of atomic displacement that generate significant inhomogeneous strains [26,27]. This suggests that, like the quasi two-dimensional ordering process we have considered, the pseudogap transition occurs because of inhomogeneous strain. Assuming this is true and noting that the pseudogap transition precedes the onset of high-T c superconductivity [17], it appears that some strain is required for the development of high-T c superconductivity. However, strain is also known to adversely affect superconductivity [28][29][30][31] and too much strain supresses it altogether [18]. The optimal doping concentration of the high-T c superconductor YBa 2 Cu 3 O 7-δ occurs at only δ ≈ 0.08. Such a small deviation from an exact stoichiometry presumably introduces enough strain to cause a pseudogap transition while causing only minor adverse effects. 
This supports the idea that the pseudogap state is a direct physical precursor to superconductivity, even though the strain that causes it also competes with superconductivity, consistent with some of the original ideas concerning the mechanism of high-temperature superconductivity [32,33].
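For reference, the hyperscaling estimate quoted above is a one-line calculation; the snippet below is an added arithmetic check, not part of the original analysis, comparing α = 2 − dν (with the bulk 3d-Ising ν) against the measured α for the layer.

```python
# Hyperscaling check: alpha = 2 - d * nu, using the bulk 3d-Ising nu.
d, nu_bulk = 3, 0.632
alpha_pred = 2 - d * nu_bulk          # approximately 0.104
alpha_meas, alpha_err = 0.10, 0.02    # measured value and its uncertainty
print(alpha_pred, abs(alpha_pred - alpha_meas) <= alpha_err)  # ~0.104, True
```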
On the long and short nulls, modes and interpulse emission of radio pulsar B1944+17 We present a single pulse study of pulsar B1944+17, whose non-random nulls dominate nearly 70% of its pulses and usually occur at mode boundaries. When not in the null state, this pulsar displays four bright modes of emission, three of which exhibit drifting subpulses. B1944+17 displays a weak interpulse whose position relative to the main pulse we find to be frequency independent. Its emission is nearly 100% polarized, its polarization-angle traverse is very shallow and opposite in direction to that of the main pulse, and it nulls approximately two-thirds of the time. Geometric modeling indicates that this pulsar is a nearly aligned rotator whose alpha value is hardly 2 degrees--i.e., its magnetic axis is so closely aligned with its rotation axis that its sightline orbit remains within its conal beam. The star's nulls appear to be of two distinct types: those with lengths less than about 8 rotation periods appear to be pseudonulls--that is, produced by"empty"sightline traverses through the conal beam system; whereas the longer nulls appear to represent actual cessations of the pulsar's emission engine. INTRODUCTION Pulsar B1944+17 was discovered in August 1969 at the Molonglo Radio Observatory. This 440-ms pulsar attracted attention thereafter because of its long null intervals, (Backer, 1970). Remarkably, it nulls some 70% of the time and exhibits null lengths ranging between 1 and 300 stellar-rotation periods (hereafter P1). In 1986 Deich et al. (hereafter DCHR) investigated B1944+17's nulling behavior based on the received notion that its nulls represented "turn offs" of the pulsar emission mechanism. Thus they were concerned with the time scales of the cessations and resumptions of emission. While some pulsars, indeed, do appear to "turn off" for extended intervals-notably B1931+24 )-there is a growing body of evidence that many nulls do not represent a shutdown of a pulsar's emission engine. There remain a small number of pulsars, however-B1944+17 prominent among them-whose observed nulling effects are not easily ascribed to either mechanism. That is, neither emission cessations, marked by partial nulls with near instantaneous decay times, nor "empty" sightline traverses through a rotating subbeam carousel, marked by null periodicites, provide any clear explanation. Pulsar B1944+17 exhibits a complex combination of behaviors including both very short and very long nulls. In addition, DCHR identified what appeared to be several distinct emission modes, denoted A-D. And furthermore, Hankins & Fowler (1986; hereafter HF86) discovered that B1944+17 has a weak interpulse that nulls in synchrony with its mainpulse region (hereafter IP/MP). Synchronous nulling indicates, remarkably, that whatever mechanism is responsible for MP nulling is also controlling the IP emission. The presence of both MP and IP emission raises vexing questions about the overall emission geometry of the star. Several analyses of pulsar geometry (e.g., Lyne & Manchester 1988;Rankin 1993a,b, hereafter R93a,b) have mentioned B1944+17, but as for virtually all such pulsars, 1 no credible model of its overall MP-IP emission geometry has yet been articulated. In short, neither an "opposite pole" nor "single pole" IP geometry provides an obvious solution, so among the various other issues, this basic question is still open for B1944+17. 
When not in the null state, MP pulse sequences (hereafter PSs) exhibit four modes of emission, three of which are well defined drift modes. Burst lengths are as large as 100 pulses, indicating a non-random null-burst distribution. The strict organization of the principle drift mode (Mode A, to be defined below) is in stark contrast with the disorganized non-drifting burst mode (Mode D) as well as the pulsar's preponderance of null pulses. The rich variety of PS effects exhibited by this pulsar complicates its analysis, as well as any effort at modeling its many phenomena. While B1944+17 exhibits so many different identifiable behaviors (organized drifting, nearly "chaotic" subpulse behavior, bright emission, short nulls, long nulls, etc.), what makes the star so compelling is that in any time interval of reasonable length, (∼2000 P1), one will see each of these behaviors. This consistency indicates that the processes that produce such variable emission patterns are in some way repeating themselves. This paper reports a new synthetic study of pulsar B1944+17. We have conducted long, high quality polarimetric observations using the upgraded Arecibo telescope at both meter and decimeter wavelengths, carefully analyzed the star's emission and nulls on a PS basis, and reconsidered the emission geometry of its MP and IP. §2 describes the observations, §3 details aspects of our analyses, §4 builds a geometrical model, and §5 presents our thorough null analysis. §6 then provides a summary and discussion of our results. OBSERVATIONS The observations used in our analyses were made using the 305-m Arecibo Telescope in Puerto Rico. The 327-MHz (P band) and 1400-MHz (L band) polarized PSs were acquired using the upgraded instrument together with its Gregorian feed system and Wideband Arecibo Pulsar Processor (WAPP 2 ) on 2006 August 19 and 2008 March 15, consisting of 7038 and 5470 pulses, respectively, see Table 1. The auto-and cross-correlations of the channel voltages were three-level sampled and produced by receivers connected to orthogonal linearly polarized feeds (but with a circular hybrid at P band). Upon Fourier transforming, sufficient channels were synthesized across a 25-MHz (100-MHz at L band) bandpass, providing resolutions of approximately 1 milliperiod of longitude. The Stokes parameters have been corrected for dispersion, interstellar Faraday rotation, and various instrumental polarization effects. At L band, four 100-MHz channels were observed with centers at 1170, 1420, 1520, and 1620 MHz. Both of the observations encountered virtually no interference (hereafter RFI), except for the 1620 MHz channel at L band which was disregarded. At L band, the lower three 100-MHz bands were appropriately delayed and added together to give a 300-MHz effective bandwidth. The PPAs of the two observations are approximately absolute in that they have been corrected for both ionospheric and interstellar Faraday rotation. Figure 1 presents the polarized profiles and polarizationangle (hereafter PPA) histograms of pulsar B1944+17 at both 327 and 1400 MHz. While these profiles are familiar, it is useful to inspect them in detail. Notice that the halfpower or equivalent width of the star's roughly symmetrical profile increases greatly at higher frequencies; whereas the more than ±25 • -longitude interval over which significant emission is observed changes hardly at all. 
The PPAs also reiterate this circumstance clearly; the discontinuous orthogonal polarization mode (hereafter OPM) extends over the full ≳50° width of the profiles, whereas the more prominent OPM occupies a more restricted longitude range at the lower frequency. As the PPAs are nearly absolute and the two OPMs lie conveniently in the upper and lower halves of the PPA panel, we will refer to them as the "positive" and "negative" polarization mode, respectively. The star's profile has been classified previously (R93a,b) as belonging to the conal triple (cT) class; looking more closely, however, at the overall L-band form, it would be more accurate to regard it as exhibiting a hybrid cT and conal quadruple (cQ) behaviour. This said, the two profiles are so different in form that it is not easy to see how to align them. Rather, we have used the structures of stationary modulation on the leading and trailing edges of the PSs, and we note that this tends to align the profile edges but not the peaks. We will come back to considering how these characteristics should be interpreted below. [Figure 1 caption: The two panels display the total power (Stokes I), total linear polarization (L = √(Q² + U²); dashed red) and circular polarization (Stokes V, defined as left minus right-hand; dotted green) (upper), and the polarization angle (PPA = ½ tan⁻¹(U/Q)) (lower). Individual samples that exceed an appropriate >2-sigma threshold appear as dots with the average PPA (red curve) overplotted. The tiny black box at the left of the upper panel gives the resolution and a deflection corresponding to three off-pulse noise standard deviations. The PPAs are approximately absolute, such that the discontinuous regions of OPM emission at positive and negative PPAs correspond to each other. Note that the P- and L-band profiles each extend more than ±25° and that the lower-frequency profile has an unusually narrow equivalent width relative to its higher-frequency counterpart.] Non-Random Null Distribution In our observations, B1944+17 nulls about 2/3 of the time, somewhat higher than the 60% value given by DCHR, but closer to the null percentage reported by Rankin (1986). The majority of these null pulses can readily be distinguished from the bursts; however, there is a small portion of weak pulses that are difficult to identify as either nulls or pulses. Interestingly, the distinction between nulls and pulses is easier to define at L band, as can be seen in the respective null histograms of Fig. 2. Given that the nulls and pulses cannot be fully distinguished, we can choose an intensity threshold that will be conservative and reliable either in selecting pulses or nulls, but not both. In Fig. 2, we have taken the latter option-that is, using low thresholds that will tend to slightly underestimate the null population. Then, using this conservative discriminator of nulls, we have computed the burst- and null-length histograms in Figure 3. These show that 1-pulse bursts and nulls have the highest frequency, but we see that very long bursts and nulls also occur. In the 7000-pulse 327-MHz observation, for instance, a small number of bursts of 40-50 P1 and two of 80-90 P1 were encountered alongside the more frequent long nulls ranging up to 300 P1. Even qualitatively we immediately see that the nulls in B1944+17 are distributed within the PS in a very non-random manner.
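The threshold classification described above is simple to express in code. The sketch below is an added illustration, not the analysis pipeline actually used: it flags single pulses whose on-pulse energy falls below a fixed fraction of the mean pulse energy <I> (0.28 at P band, 0.20 at L band in the text) and returns the resulting null fraction. The simulated energies and the helper name are placeholders.

```python
import numpy as np

def classify_nulls(pulse_energies, threshold_frac):
    """Flag pulses whose on-pulse energy is below threshold_frac * <I>
    and return (boolean null flags, null fraction)."""
    mean_I = pulse_energies.mean()
    is_null = pulse_energies < threshold_frac * mean_I
    return is_null, is_null.mean()

# Placeholder usage with simulated on-pulse energies (arbitrary units).
rng = np.random.default_rng(0)
energies = np.concatenate([rng.normal(0.05, 0.03, 4600),   # putative nulls
                           rng.normal(1.0, 0.4, 2400)])    # bursts
is_null, null_frac = classify_nulls(energies, threshold_frac=0.28)
```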
Recent investigations into pulsar nulling have raised two important new questions about their distributions: a) whether they are randomly distributed (e.g., Redman & Rankin 2009; Rankin & Wright 2007); and b) whether they are periodic (HR07/09). With such a large null fraction, one would expect to see few long sequences in any given observation. The tendency of B1944+17's bursts and nulls to clump into sequences of roughly 20-100 pulses immediately indicates a non-random distribution. Application of the above Runs test to our observations, namely the burst and null sequences, returns values ≲ -60, verifying this conclusion of a non-random "undermixing" (an illustrative sketch of the test is given below, after the Figure 4 caption). [Figure 3 caption: Burst- and null-length histograms corresponding to the P- and L-band observations and thresholds in Figure 2. These distributions show unsurprisingly that burst and null lengths of a single rotation period are highly favored; however, we also see that bursts can last up to approximately 90 P1 and nulls up to three times this! Thus, in the language of the Runs Test, the nulls occur non-randomly in the PS by virtue of being "undermixed" (see text). Note that the P-band observation included nulls up to 300 P1; however, the null histogram has been truncated down to 180, so as to resolve detail in the short null distribution.] Regarding null periodicity, we find only a suggestion in our observations of a very long periodicity-far too long in relation to their total length for it to be significant. Thus the bursts and nulls of B1944+17 can be regarded as falling into two categories: a) short bursts or nulls of some 1-7 P1 that show a roughly random distribution, and b) medium to long bursts or nulls (>20 P1) that can occasionally persist for several hundred pulses and are patently non-random. We will elaborate further on this distinction in §5. Modes of emission We here investigate the properties of the four modes identified by DCHR. Following their convention, we refer to the three drift modes as A-C, and the final burst mode as "D". The defining characteristics of modes A-D are the same at both P and L band, as are their frequencies of occurrence. The four modes can be readily distinguished by eye due to their unique subpulse structures and intensities, as shown in Figure 4. The transitions between modes occur on a time scale of less than one pulse-i.e., there are typically no observable "transitions" between modes. We find that within a sequence of 10^3 pulses there is a high probability of finding at least one occurrence of each mode. It is interesting that this pulsar, which displays an almost overwhelming variety of behaviors, is quite reliable in how often it does so. As seen in the colour polarization display of Fig. 4, the star's mode changes are usually punctuated by nulls, though there are some combinations of mode changes that characteristically occur adjacent to one another. [Figure 4 caption: Pulse-sequence polarization display showing several of the pulsar's PS behaviors, with reordered mode sequences. The bright and ordered subpulses of mode A begin at pulse 1 and last for 110 periods. Mode A is succeeded by mode C, separated by a null. Mode C is characterized by roughly stationary subpulses and a quasi-periodic cycle of short bursts and nulls. The next 100 pulses are mode D, the weakest and least structured of the star's modes. Note the weak evidence of subpulse structure. Mode D is succeeded by a bright and very well defined mode B, separated by a three-period null. The total power I, fractional linear L/I, PPA χ, and fractional circular polarization V/I are colour-coded in each of four columns according to their respective scales at the left of the diagram. Both the background noise level and interference level of this observation are exceptionally low, with the former effectively disappearing into the lowest-intensity white portion of the I colour scale.]
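As referenced above, the Runs (Wald-Wolfowitz) test compares the observed number of burst/null runs with the number expected for a randomly mixed sequence; strongly negative z-values indicate "undermixing". The sketch below is an added illustration rather than the authors' implementation, and `is_null` is assumed to be a boolean array of per-pulse null flags such as the thresholding step above would produce.

```python
import numpy as np

def runs_z(is_null):
    """Wald-Wolfowitz runs test for a boolean sequence. Returns the
    z-score; strongly negative values mean the nulls and bursts are
    'undermixed', i.e. clumped into long runs."""
    x = np.asarray(is_null, dtype=bool)
    n1, n2 = x.sum(), (~x).sum()
    runs = 1 + np.count_nonzero(x[1:] != x[:-1])     # observed number of runs
    mu = 2.0 * n1 * n2 / (n1 + n2) + 1.0             # expected runs if random
    var = (2.0 * n1 * n2 * (2.0 * n1 * n2 - n1 - n2) /
           ((n1 + n2) ** 2 * (n1 + n2 - 1.0)))
    return (runs - mu) / np.sqrt(var)

# Clumped example: 20 alternating blocks of 50 pulses give a strongly negative z.
demo = np.repeat([True, False] * 10, 50)
print(runs_z(demo))   # << 0, i.e. undermixed
```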
[Figure 5 caption (fragment): ... after Fig. 1, for the four modes at L and P band.] The base width within which there is measurable emission is roughly constant in all the modes at both frequencies. There is, however, significant variation in the FWHMs of the various modes at the two different bands. Most notable is the narrowing of the FWHM at P band, clearly displayed in every mode. Also, at L band the FWHMs of modes A and C are broader than those of modes B and D. Conversely, at P band the FWHMs of modes B and D are broader than those of modes A and C. Numerical values for modal properties are summarized in Table 2. [Table 2 fragment: L band n/a n/a 4.4±0.05 22 n/a 11.0 21; P band n/a n/a 20.6±0.05 119 n/a 12.1 12.] [Figure 2 caption: Null histograms for B1944+17 at P (top) and L (bottom) bands. The histogram peaks (at 1110 and 980) corresponding to the large null fractions have been truncated in order to better show the pulse-amplitude distribution. Despite the high S/N, the star's null- and pulse-energy distributions overlap, so that the nulls and pulses cannot be fully distinguished, though this difficulty is more severe at P band than at L band. Plausible, conservative thresholds (shown by dotted vertical lines) of 0.28 and 0.20 <I>, respectively, indicate that some 2/3 of the pulses are nulls or pseudonulls.] [Figure 6 caption: A 50-P1 mode-A interval at L band. The PS has been folded at a P3 of 14 P1. The fold length was chosen to demonstrate that the peripheral conal outriders, seen here mainly on the leading edge of the pulse window, fluctuate with a period equal to mode A's P3.] Modal distinctions in subpulse structure As modes A and B exhibit subpulse drifting, they can best be characterized by their P2 and P3 values, where P2 is defined as the separation of subpulses within a period, and P3 is the separation between drift bands at a fixed pulse phase. Mode C characteristically displays an organized yet stationary subpulse structure. Lastly, we classify those PSs which show no organized subpulse structure as mode D; it is worth noting that mode D is significantly weaker than the others. Table 2 gives P2 and P3 values for modes A, B, and C. The A mode is characterized by prominent intervals of remarkably precise drifting subpulses-which is paradoxical considering the star's otherwise unpredictable and discontinuous behavior. Mode A is unique in its regularity and is marked by its negatively-drifting bands with a roughly 14-P1 P3. This 14-P1 P3 feature can be seen in a longitude-resolved fluctuation (lrf) spectrum of a PS that includes all the modes, indicating its dominance (Weltevrede et al. 2006, 2007). Two bright, central subpulses are usually seen in mode A. At L band, weak subpulses on the outer edges of the profile turn on and off with a period that is comparable to mode A's P3; see Figure 6. Mode A always appears in bursts having durations of more than 15 periods; however, usually these bursts are even longer, typically some 60-100 P1-and, remarkably, these A-mode intervals are very rarely interrupted by nulls. The drifting subpulses of mode B are visibly less ordered than those of mode A; however, they are clearly structured and negatively drifting.
P3 is approximately half that of mode A at both L and P band; although due to its irregularities these values cannot be measured with the same degree of precision as for mode A. The B mode characteristically persists for less than 25 P1. The third "drift" mode, C, displays three roughly stationary subpulse drift bands (P3/P2 ≈ 0), with the components' relative intensities being variable. Mode C manifests itself in a complex variety of ways: most often the intensities of the three components are approximately equal to each another; see Fig. 4. However, one or two of the components intermittently either turns off or notably weakens relative to the other two; every combination of the three constituent components was observed. The P2 value for mode C is similar to that of mode A, taking into account the presence or absence of its three constituent features. It is possible that the nearly vertical drift bands in mode C result from a near stoppage of carousel rotation; it is also possible that this is an effect of aliasing. Because of the other similarities between modes A and C, it is likely that this "stopping" happens during what is otherwise known as mode A. The difference between modes A and C is therefore only the carousel motion, not the fundamental subpulse structure. This dynamic accounts for the varying number of subpulses observed in mode C. In mode A, as the subpulses drift across the pulse window, anywhere between 1 and 3 may be seen depending on the modulation phase-i.e., see Fig. 6. In mode C short bursts and nulls alternate quasiperiodically with each burst or null lasting some 10 P1 or so, and switching back and forth up to 10 times. The length of these segments is very comparable to the P3 of mode A. Deich et al. first referred to mode D as "chaotic"that is, displaying little perceptible order in its subpulses. Our investigation has uncovered a slightly different story for mode D. The mode-D emission does not usually span the entire pulse window, see Fig. 4. While its sparse subpulses are overshadowed by the bright and ordered ones of modes A, B and C, they appear to consistently have an underlying characteristic structure. Mode D's brightest subpulses appear approximately every 5-10 P1 on the leading edge of its substantially narrower emission window. This withdrawal of emission in the profile wings is unique to mode D. Figure 5 gives partial polarized profiles for each of the four modes at L and P band. The profiles have different total power and total linear forms in the different modes. Most evident are their different modal widths in each of the bands. Mode A has the broadest modal profiles; see Figure 5(a). Its L-band profile displays a nearly uniform distribution of linear power across the pulse window, showing only small dips that correspond to the subpulse separation. Its leading edge is marked by a distinct shoulder at both L and P band, also visible in the linear power, a feature hardly seen in any of the other modes at L band. The bright central subpulses in mode A display a linear power distribution that indicates there are in fact two distinct features that lead the central peak. They can be identified at both bands, though the second one is weaker at P band, nevertheless distinguishing itself as a unique feature. Also in both bands, the trailing subpulse features are brighter and more clearly defined than the leading ones. 
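The drift periodicities quoted above (P3 of roughly 14 P1 in mode A, about half that in mode B, and nearly stationary bands in mode C) can be brought out by folding a pulse stack modulo a trial P3, as was done for Fig. 6. The sketch below is an added, simplified illustration (integer P3 only, with synthetic data standing in for a real pulse stack), not the folding code used for the paper.

```python
import numpy as np

def fold_at_p3(pulse_stack, p3):
    """Average a pulse stack (n_pulses x n_bins) modulo an assumed integer
    P3 (in rotation periods) to bring out subpulse-drift modulation."""
    n_pulses, n_bins = pulse_stack.shape
    phase = np.arange(n_pulses) % p3
    folded = np.zeros((p3, n_bins))
    counts = np.zeros(p3)
    for k, row in zip(phase, pulse_stack):
        folded[k] += row
        counts[k] += 1
    return folded / counts[:, None]

# Synthetic drifting-subpulse stack as a stand-in for real data.
n_pulses, n_bins, p3 = 560, 128, 14
lon = np.arange(n_bins)
stack = np.array([np.cos(2 * np.pi * (lon / 32.0 - i / p3)) + 1.0
                  for i in range(n_pulses)])
profile_vs_drift_phase = fold_at_p3(stack, p3)
```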
Modal distinctions in total and linear power distributions The mode-B profiles display a slightly narrower FWHM than in mode A; see Table 2 and Fig. 2. The leading feature in mode A's profile is clearly visible in mode B at P band but is very weak at L band; see Figure 5(b). At L band, the central peak of mode B occurs ∼6° earlier than in mode A; at P band the peaks of the two modes occur in the same location. The trailing feature in mode B is poorly defined; however, this is likely due to the dense drift bands blurring the intensity distribution between the leading and trailing features, rather than a decrease of subpulse intensity. Mode C displays a FWHM nearly as broad as that of mode A; see Table 2. There is more evidence for the leading feature in the linear power distribution of mode C than in mode B. As in mode B, the central peak in mode C at L band is shifted 6° earlier relative to the peak in mode A, whereas at P band they occur at the same longitude. The trailing component is very well defined in mode C in the profiles of both bands. Mode D displays a significantly narrower profile at L band compared to the other modes. As in mode B, its leading component is very poorly defined at L band, though there is some evidence for it in the linear power distribution. At L band, the central peak of mode D is shifted approximately 4° compared to that of mode A. At L band, the trailing feature is slightly more distinct in mode D than in mode B, though it remains weak. The trailing feature at P band is well defined. Mode changes Mode changes are often punctuated by nulls, with two distinct exceptions: First, a small proportion of mode changes are characteristically not interrupted by a null. We find that mode D often precedes modes A and B and sometimes succeeds mode B continuously. Second, nulls do not necessarily result in mode changes. Mode C (discussed below) characteristically displays a pattern in which the emission turns on and off in segments of short 10-15 P1 bursts and nulls. Short nulls are associated both with this mode-C emission pattern as well as with transitions between modes A, B, C and D. Main Pulse - Interpulse Relationship We have used the conservative thresholds of 0.28 and 0.20 <I> to build partial P- and L-band PSs of the pulses and nulls. Full-period polarization profiles corresponding to the pulses (not the nulls) are shown in Figure 7 and are similar to the MP profiles in Fig. 1 except that Stokes I, L and V are replotted at 50X scale to reveal the structure of the IP. [Figure 7 caption: Full-period polarization profiles (after Fig. 1) of the P- (upper) and L-band (lower) partial PSs containing the pulses (falling above the thresholds in Fig. 2, numbering 2403 and 1803 pulses, respectively), not the nulls. Here, the total power is shown at full scale and then Stokes I, L and V are replotted at 50X scale in order to reveal the structure of the interpulse; otherwise the displays are identical to those in Fig. 1. Only 128 longitude bins were used across the full profile so as to maximize the S/N of each sample. Note the virtually complete IP linear polarization and the similarity in half-power widths between the IPs in the two bands. Note also that the centers of the PPA traverses of the IP and MP (positive OPMs) both fall at about +60° and have opposite slopes.]
Here we see clearly for the first time that the B1944+17 IP is almost fully linearly polarized (and has negligible circular), that its half-power width is very comparable at the two frequencies, and that its PPA traverse is centered at virtually the same angle as the MP positive mode, but with a negative slope. In the 327-MHz profile, the discrete PPA dots under the IP profile show that some IP samples are strong enough to define their linear polarization; thus the IP is not uniformly weak, but exhibits a range of intensities. We can also confirm that the IP-to-MP intensity ratio (S_IP/S_MP) decreases with frequency, being about 1% at P and some 0.2% at L band. HF86 reported values of about 3% at 430 MHz and 0.3% at L band. Notably, they found that most pulsars with IPs show a similar decrease as frequency increases. More perplexing was HF86's finding that the MP-to-IP spacing (∆φ_IP-MP) decreases by some 10° between P and L band, and given the drastic changes in MP form it is not fully clear how such a change could even be consistently measured. [Footnote: Thanks to Tim Hankins, we have been able to examine the interpulse-discovery observations that were reported in HF86, and we can see how their interpretation of a frequency-dependent ∆φ_IP-MP arose. At that time, it could not have been realized that the unusual shape changes of B1944+17's MP make it nearly impossible to correctly align profiles of different frequencies.] Using such partial PSs for the pulses (but now at both the restricted and full longitude resolutions), we also computed correlation functions that included the longitude ranges of both the MP and IP. These showed no significant (>3 standard deviations in the off-pulse noise) level of correlation at lags of either 0 or ±1 pulse. We also computed partial profiles for the null partial PSs, having lengths of 4635 and 3667 at P and L band, respectively. In these, we were able to find neither significant total power nor any correlated PPAs that might indicate weak linear polarization in the baseline regions. This circumstance argues very strongly that the IP either nulls, or remains very inactive, during MP nulls. EMISSION GEOMETRY OF THE MAIN PULSE AND INTERPULSE At various stages of our analyses we had indications that the baseline regions of the full-period profiles were significantly linearly polarized. In order to explore this further, we reprocessed our observations in a manner such that no baseline level was subtracted from Stokes Q and U. We then found a significant level of linear polarization in both regions between the MP and IP, corresponding to some 0.25% of the MP peak at P and 0.8% at L band. In both cases the PPAs associated with this baseline linear polarization are nearly flat, suggesting that only one Stokes parameter is well defined; we have therefore concluded that this baseline polarization cannot be well measured with our polarimeter configurations. [Footnote: The L-band system consists of dual linears, and the P-band system of dual linears with a circular hybrid between the feed and the cal injection; thus in neither system are Stokes parameters Q and U determined fully by correlation of the receiver voltages.] Again, we stress that this baseline linear power is associated with the pulses and not the nulls! HF86 also allude to a "bridge" of total power emission between the star's MP and IP (see their fig. 2d and the associated discussion). Regardless of whether the star is classified as a member of the cT or cQ class, it can be said with confidence that our sightline crosses two concentric emission cones. The weaker cone is encountered on the profile edges and exhibits similar properties and dimensions at the two frequencies. It is the stronger central emission that shows an "unresolved double" pattern above 1 GHz and a more "single" form at meter wavelengths.
Two aspects of this profile's evolution are quite unusual (e.g., R93a,b): a) that the central emission pattern narrows (shows a smaller equivalent width) at lower frequencies, and b) that the "outriding" component pair retains essentially the same dimensions between the two bands. Generally, we encounter outer cones on profile edges, and these show substantial "spreading" with wavelength; whereas inner cones are seen inside the outer ones and show little spectral variation in their dimensions. In B1944+17 these usual expectations seem reversed; how could this occur? Assembling the Relevant Evidence Careful reference to the total power profiles of Fig. 7 shows that the MP and IP are both rather broad-as is expected for a small value of α-with emission extending over most or all of 60° in each case. Moreover (HF86 notwithstanding), the MP-to-IP spacing is close to half the rotation period. Given the complex and changing form of the MP in particular, it is difficult to be more precise than this. Several aspects of the linear polarization are also important to note: First, the absolute PPAs associated with the central longitudes of the MP and IP are nearly identical-allowing for the 90° OPM "jumps"-arguing that the respective emission regions are either conjunct or in opposition along a given magnetic longitude. Second, the PPA traverses under the MP and IP have opposite senses. At both frequencies, 90° OPM-dominance "jumps" occur on both the leading and trailing edges of the MP profiles. Third (see Fig. 1), the MP PPA traverse is unusually shallow. Its sweep rate R (= ∆χ/∆φ) is only some +0.75°/° and remarkably linear across the bodies of both the P- and L-band profiles. The shallow PPA traverse indicates that the magnetic colatitude α and sightline-impact angle β are similar and both small (i.e., the sweep rate |R| = sin α / sin β). Interestingly, the negative sweep rate of the IP has an even shallower slope. Finally, the interpulse emission is of an entirely different character than that of the MP. The MP has a complex structure of regions with different dynamics and OPM activity, whereas the IP is more nearly unimodal in form (though with low persistent peaks) and fully linearly polarized. In contrast to the MP, it displays a continuous, smooth PPA traverse-i.e., it does not show any "90° jumps" as the MP does. Moreover, the IP emission seems uncorrelated with that of the MP. These MP and IP properties provide crucial information about the emission geometry as we will see below. Building A Geometrical Model While B1944+17's IP has been known for more than 20 years (HF86), its polarization properties are measured here for the first time. Therefore, all previous efforts to understand the star's basic geometry have been based on its MP properties alone. HF86 speculated about whether the IP reflected a single or two-pole configuration, but made no attempt at a quantitative model. The star's unusually shallow R value presents a major difficulty for any model.
[Figure 8 caption: ... The magnetic axis is marked with an "M" and the rotation axis with an "O". As we know little for B1944+17 about the frequency dependence of the beam dimensions, we have used their (R93a,b) nominal 1-GHz values-e.g., 8.7° for the outer beam's outer 3-dB radius (see text). The observer's sightline orbit then makes a small circle around the rotation axis such that it touches both cones and the periphery of the core beam. The beam pattern has a larger angular size at the lower frequency, so we show the relative size of the sightline orbit within the beam pattern. The L- and P-band sightline orbits are then indicated by dashed and solid black curves, respectively.] Indeed, the only published model (R93a,b) treats the profile as cT and obtains reasonable dimensions for the inner and outer cones-but only by assuming that the apparent R value was somehow too flat. The actual sweep rate of 0.75°/° implies that α will be even smaller than β. This also suggests strongly that the observer's sightline to B1944+17 largely remains inside its emission cones! A current PPA fit to our same 327-MHz observations that includes the IP (Mitra & Rankin 2010; see their fig. A8) gives highly correlated (98%) α and β values of 2.8° and 4.0°, respectively, with large errors (±15°). Clearly, somewhat different values of α and β that maintained R would also fit the PPA traverse well. The half- or equivalent widths of the MP and IP are roughly equal and occupy opposing ∼60° or 1/6 sections of the star's rotation cycle. Moreover, the similar PPA ranges and opposite slopes of the MP and IP traverses argue that our sightline cuts these regions within the same range of magnetic longitude (modulo 180°) but at different colatitudes. This symmetry is consistent both with an orthogonal rotator model (in which the MP and IP emission comes from the two respective poles) as well as a single-pole model (wherein both MP and IP are emitted within a single polar region). Notably, the extreme shallowness of R and the small values of α and β are more consistent with the latter configuration. The very small α and β values indicate that the pulsar is a nearly aligned rotator with the Earth positioned almost directly above its "nearer" polar cap of emission, similar to the single-pole IP model first proposed by Gil in 1985. According to this model, the MP is a result of our sightline first crossing the inner cone of emission obliquely, then making a tangential cut through the inner/outer cone overlap region, and finally recrossing the inner cone symmetrically. Such an "inside out" sightline traverse accounts for the stability of the outer parts of the MP profile as well as the unusual frequency dependence of the middle region. Moreover, the weak IP feature can apparently be understood as a grazing encounter of the sightline with the far "skirts" of the core beam-such that the MP and IP are centered on opposing field lines that are cut in the same directions. Finally, we can assemble the elements of a quantitative model for the emission geometry of B1944+17. Following R93a,b, we know that the outer and inner conal beams have particular dimensions at 1 GHz, here specified in terms of the outside half-power points, which for a 440-ms pulsar are 8.7° (= 5.75° P1^-1/2) and 6.5° (= 4.33° P1^-1/2), respectively. A similar model is available for the peaks of conal beams (Gil et al. 1993), such that these fall at 6.9° (= 4.6° P1^-1/2) and 5.6° (= 3.7° P1^-1/2).
No such study has determined the inner half-power dimensions of conal beams, but if we assume that they are radially symmetric, we can estimate their inner dimensions from the above data. These values are then 5.2° (= 3.45° P1^-1/2) and 4.5° (= 2.97° P1^-1/2) as above, respectively. These conal beam characteristics are illustrated in Figure 8: the outer and inner cones are hashed in blue and green, respectively, up to their half-power levels, and their peaks are indicated by heavy colored dashed lines. Note that the two beams overlap significantly (as they often do in actual profiles) and this overlap region is hashed in cyan. Finally, the core beam is shown in red hashing centered on the magnetic axis ("M") out to its 1.8° (= 2.45° P1^-1/2 / 2) half-power point. The respective L (dashed) and P (solid) sightline trajectories are then fitted into the above radiation-beam geometry as indicated in Fig. 8 by the black sightline curves centered on the rotation axis ("O"). The magnetic colatitude α is about 1.8°, the sightline impact angle β about 2.4°, making the sightline circle radius just over 4°. Of course, the angular radius of the sightline circle does not vary with frequency. Given, however, that the B1944+17 profiles provide very little information about the "conal spreading" at meter wavelengths, we have chosen to illustrate the pulsar's geometry using the well-determined 1-GHz dimensions of the beam structure. In relative terms then, the sightline at L band extends well past the radial maximum point of the inner cone, such that it exhibits a cQ structure; whereas the P-band traverse falls just short of this point and has a cT form. In both cases the sightline encounters the core, producing the IP. The multiple lines of evidence that our analyses of this pulsar provide point strongly to the unusual emission geometry discussed above. Were reliable baseline polarimetry available, the sightline traverse could be modeled in considerable detail and the actual dimensions of the several beams estimated through fitting. Unfortunately, this work remains beyond the scope of this paper. Nonetheless, we believe that the circumstances responsible for the B1944+17 interpulse are largely understood-that both the MP and IP are emitted by a single pole and that the IP very likely represents weak core emission.
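The quantitative model above rests on a handful of scaling relations: conal radii proportional to P1^-1/2 with the coefficients quoted in the text, a core half-power radius of 2.45° P1^-1/2 / 2, and the sweep-rate constraint |R| = sin α / sin β. The short sketch below is an added illustration that simply evaluates those numbers for P1 = 0.440 s and the adopted α and β; it is not the fitting machinery itself.

```python
import math

P1 = 0.440                     # rotation period in seconds
alpha, beta = 1.8, 2.4         # magnetic colatitude and impact angle (deg)

# Conal and core beam radii from the scaling relations quoted in the text.
outer_cone = 5.75 / math.sqrt(P1)     # outer half-power radius of outer cone (~8.7 deg)
inner_cone = 4.33 / math.sqrt(P1)     # outer half-power radius of inner cone (~6.5 deg)
outer_peak = 4.60 / math.sqrt(P1)     # radius of outer-cone peak             (~6.9 deg)
inner_peak = 3.70 / math.sqrt(P1)     # radius of inner-cone peak             (~5.6 deg)
core_half = 2.45 / math.sqrt(P1) / 2  # core half-power radius                (~1.8 deg)

# Sightline-circle radius and expected PPA sweep rate |R| = sin(alpha)/sin(beta).
sightline_radius = alpha + beta                                   # just over 4 deg
R = math.sin(math.radians(alpha)) / math.sin(math.radians(beta))  # ~0.75 deg/deg

print(round(outer_cone, 1), round(core_half, 1), round(R, 2))
```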
HOW CAN WE UNDERSTAND B1944+17'S NULLS? Pulsar B1944+17 has been most famous for its remarkably long null intervals, which contrast so beautifully with its intense and well organized burst sequences. The star is in the null state some two-thirds of the time, on average. Notably, this value is the same for both the main pulse and the interpulse. Since Backer's first documentation of pulsar nulling in 1970, the principal question surrounding nulls has been one of causality. Here we discuss and distinguish between nulls that are likely caused by two entirely different mechanisms: empty sightline traverses across a rotating carousel, and cessations of emission. We find that the two null mechanisms can largely be distinguished by their characteristic lengths, and we will therefore refer to them as such: the former being short nulls (≲ 7 P1), and the latter long ones (≳ 7 P1). Note that this 7-P1 boundary falls on the tail end of the random distribution of null lengths in Fig. 2, whereas the long nulls quickly become non-random. Null Analysis It is important to note that the distinction between short and long nulls is not a clear one. In this investigation, the numerical boundary was set by averaging the total power in a series of null sequences, with varying upper bounds in null length, then selecting the short-versus-long null boundary to be where the average power dropped off (see the illustrative sketch below). This procedure makes no claim of being exact. We are primarily interested in the possibility that B1944+17 exhibits two distinct types of nulls, rather than in defining their features precisely. Short Nulls Short nulls can be found in virtually any context of this star's emission-that is, they are not unique to a particular mode, nor do they only occur between mode changes. The short-null average profile was constructed by summing all the nulls (falling below the established threshold, 0.28 <I> at P band) which participated in null sequences of up to 7 P1 in length (after removing a few with obvious "burst" power by hand). The total power in the short-null average profile in Figure 9 (comprised of the 296 remaining nulls) was nonzero-i.e., it did not represent noise; rather, there is a very clear indication of emission. This short-null profile-showing clear emission features at the positions of the MP and IP-exhibits an aggregate intensity of 0.18 <I>, a level well below the threshold but far above the noise fluctuations. [Figure 9 caption: The short null profile-i.e., the sum of all 296 P-band pulses which participate in nulls less than 8 periods in length. An intensity threshold of 0.28 <I> was used to differentiate between nulls and bursts. Then, those pulses in the short null sequence which still displayed visible power (appeared to possibly be weak bursts) were removed by hand. The remaining weak MP and IP profile corresponds to an intensity threshold of 0.18 <I>, and indicates that there is a weak intensity signal (not detectable on a single-pulse basis) present in the pulses which participate in short nulls.] It is challenging yet of great importance to decipher whether this non-zero power is the result of (a) consistent low-intensity emission throughout the null sequences and/or (b) a small number of higher-intensity pulses that skew the average. To reduce the possibility that the detected profile was due to (b), we re-emphasize that the few pulses (half a dozen) for which there was visible power (i.e., which perhaps "should" have been called weak bursts) were removed by hand from the short-null partial PS. It was then, after this procedure, that significant power was detected in the short-null profile (whereas, as we will see below, the long-null profile is indistinguishable from noise). We therefore conclude that the power in the short nulls is due to (a): consistent low-intensity emission not detectable on a single-pulse basis. Short nulls are more or less randomly distributed throughout the PSs both at L and P band. Recall that B1944+17's subpulses consistently repeat behaviors (evidenced by the star's well defined modes) though not with any perceptible regularity. It is not surprising that the short nulls complementing such modes appear consistently, but not periodically. Mode changes are often, though not always, punctuated by short nulls. Modes A and D are often interrupted by short nulls of some 1-3 P1. Of the four modes, mode D is most frequently punctuated by short nulls. That the short nulls probably represent empty sightline traverses is very consistent with mode D's emission behaviors.
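The bookkeeping referred to above (group the per-pulse null flags into contiguous runs, then compare the aggregate on-pulse power of nulls in runs at or below a trial boundary with that of nulls in longer runs) is straightforward to sketch. The code below is an added illustration with placeholder names, not the procedure actually used; with real data the short-null class would be expected to retain measurable aggregate power (of order 0.18 <I> here) while the long-null class should be consistent with noise.

```python
import numpy as np

def null_runs(is_null):
    """Group a boolean null sequence into contiguous runs and return
    (start_index, length) pairs for the null runs."""
    runs, i, n = [], 0, len(is_null)
    while i < n:
        if is_null[i]:
            j = i
            while j < n and is_null[j]:
                j += 1
            runs.append((i, j - i))
            i = j
        else:
            i += 1
    return runs

def aggregate_power_by_length(energies, is_null, boundary=7):
    """Mean pulse energy (in units of <I>) for nulls in short (<= boundary P1)
    versus long (> boundary P1) null sequences."""
    mean_I = energies.mean()
    short_idx, long_idx = [], []
    for start, length in null_runs(is_null):
        (short_idx if length <= boundary else long_idx).extend(range(start, start + length))
    short_power = energies[short_idx].mean() / mean_I if short_idx else np.nan
    long_power = energies[long_idx].mean() / mean_I if long_idx else np.nan
    return short_power, long_power
```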
Mode C characteristically alternates short quasiperiodic bursts and nulls of some 10 -20 P1. These transitions between nulls and bursts appear to entail no turnoff as no possible such "partial null" was ever observed in the pulse window. Their characteristic length falls roughly on the boundary where we distinguish between short and long nulls. We have found no means of testing whether these mode-C nulls represent the longest pseudonulls or the shortest of the long null intervals. Long Nulls The rest of the story belongs to the long uninterrupted null sequences. As mentioned earlier, B1944+17's nulls last up to 300 P1 in our observations; however, note that we define long nulls to be any null sequences lasting longer than 7 rotation periods. Long nulls appear very consistently in this pulsar, and their presence alone indicates an "undermixing" of bursts and nulls. As discussed in §3 and displayed in Fig. 3, the long null distribution is highly non-random. In contrast to the short nulls, the aggregate power in the long nulls at P band is only 0.009 <I>, roughly that of noise fluctuations, demonstrating that there is truly no measurable power from the star during these nulls. Every long null interval in our observations has a noise-like profile. This is the case at both L and P band. Considering their noise-like character, length, and nonrandom distribution, the probability that long nulls represent "empty" sightline traverses is extremely small. The evidence assembled indicates that long nulls represent actual cessations of the pulsar's emission. This finding would be strengthened by measuring a decrease in spindown rate during the nulls, as in B1931+24; unfortunately, even the longest nulls in B1944+17 are far too short for this to be possible. In summary, we find that there are two distinct mechanisms behind B1944+17's nulls: first, short pseudonulls that are an artifact of carousel properties and rotation, exhibit measurable aggregate emission and tend to punctuate high intensity, well structured burst sequences; and second, long (true) nulls which are distributed non-randomly, have no detectable aggregate power, and thus plausibly represent actual cessations of the pulsar's emission engine. In view of this star's varied emission behaviors, it is not surprising that its nulling phenomena are similarly complex. Null transitions Deich et al. looked for, but did not observe, transitions from the null-to-burst or burst-to-null state, that were expected to occur infrequently just as the star's emission cone swept through the Earth's direction-that is, occasions when the star's emission switched off or on during the pulse window. In our investigation we found two such possible transitions at L band, one from the burst-to-null state and the other the opposite. In this PS there were some 100 long null intervals, therefore some 200 transitions to and from long nulls. As the MP profile has a roughly 30 • equivalent width, we have at best a 1 12 chance of observing a given transition. This implies about 17 transitions; however we were able to identify only 2 with good confidence. This seeming shortage of null-burst transitions cannot be read into deeply: a) it may be that such transitions can be securely identified only over a 10-15 • interval, b) a number of the shorter long null intervals may not be what they seem, or c) confident detection may require particular modes before and after the long null. 
And in any case, merely 2 partial nulls throws any speculation into the regime of small number statistics. This all said, both transitions occurred approximately halfway through the pulse window and were marked by a sharp decrease or increase in emission intensity. Both transitions occurred on the edges of long null intervals, from mode C in both cases. If these are valid indicators of transitions to or from cessations of emission, they occur on millisecond scales that are comparable to our longitude resolution. SUMMARY AND DISCUSSION We have thoroughly investigated the radio pulse-sequence properties of pulsar B1944+17. This star nulls some twothirds of the time, has four distinct modes of emission and an IP, and we find that its magnetic and rotation axes are closely aligned. We confirmed the four modes first identified by Deich et al. . Their properties are as follows: • Mode A is the brightest and most ordered of the star's four modes. It consistently displays three to four subpulses that drift with a P3 value of about 14 periods. • Mode B drifts at a faster rate and displays a P3 of approximately 6 P1. It usually shows three subpulses, which are characteristically less ordered than those of mode A. • Mode C resembles mode A but is stationary. It displays between one and three subpulses with similar brightness and virtually identical spacing (P2) to those of mode A, however with negligible subpulse drift-and bursts alternate quasiperiodically with nulls of 10-30 P1. Mode C may be interpreted as representing a carousel of "beamlets" organized in the same way as in mode A, but which is nearly stationary and flickering quasiperiodically. • Mode D is the weakest of the four modes. Though it lacks the ordered drift bands of the other modes, we find that it is not chaotic as it had been previously described by DCHR. Its brightest subpulses are in the leading edge of the pulse window, appear consistently at the same phase, and remain active for some 5 periods or so. • Both the total power and the FWHM as displayed in the modal average profiles show consistent behaviors within the different modes. In order to provide a sound basis for interpretation, we investigated B1944+17's emission geometry, and we drew the following conclusions: • The pulsar has both an MP and IP, which occupy opposing roughly 60 • intervals of its rotation cycle. • The MP and IP are separated by almost exactly half a rotation period, independent of radio frequency. • The nearly complete linear polarization and disorderly PS modulation of the IP stands in stark contrast to the modest linear and rich phenomenology of the MP. • The IP subpulses vary widely in intensity but show negligible correlation with subpulses in the MP. • The MP's equivalent width increases with increasing frequency, whereas its full width is nearly constant over a broad band. This appears contrary to the frequency evolution exhibited by most other pulsars. • The PPA rates (R) of the star's MP and IP are unusually small, but of opposite senses. The MP value is only +0.75 • / • . • Emission-beam modeling constrained by the MP and IP PPA-traverse information argues that α and β are some 1.8 and 2.4 • , respectively-such that the sightline orbit has an angular radius of just over 4 • and remains inside the conal emission beam (See Fig. 8). • The small α and β values indicate an unusual singlepole interpulse geometry-that is, here the IP emission occurs along field lines opposite to the MP and is produced by weak core emission. 
• The rarity of B1944+17's geometrical configuration seems to account for some of its perplexing modulation and polarization phenomena. The prominent nulls of pulsar B1944+17 have been the principal interest driving this investigation. Here, we summarize the patterns of its nulling behavior, discuss the nulllength distribution, and evaluate which possible mechanisms are responsible for the nulls. • B1944+17 appears to null nearly 70% of the time. That is, some two-thirds of the star's "pulses" have intensities falling below a plausible "null threshold" and are putative nulls. In practical terms then, there is a two-thirds chance that any given "pulse" will be a "null" rather than a burst. • In the above context, it is remarkable to find that B1944+17 exhibits bursts lasting up to 100 or so P1 and nulls with durations of up to some 300 periods. • In Runs Test terms, the star's long nulls and bursts are non-randomly "undermixed". • One-period nulls appear with the highest frequency. Nulls with lengths up to approximately 7 P1 or so occur with a roughly random distribution; whereas nulls longer than this are obviously (and increasingly) non-random. • The short nulls-that is, those with durations less than roughly 8 P1-have significant aggregate power in the form of a weak average MP and IP profile; see Fig. 9. These short nulls are largely pseudonulls-that is, "empty" sightline traverses through the rotating carousel-beam system. • Nulls longer than the above, on the other hand, exhibit negligible aggregate power and thus have the character of noise. These long nulls then represent actual cessations of the pulsar's emission. B1944+17's behavior is therefore similar to the intermittent pulsar B1931+24 . On this basis, we can draw some interesting conclusions about the overall emission properties of pulsar B1944+17-• Within certain A-or B-mode PSs, the emission from the pulsar's inner and outer cones exhibits similar modulation frequencies and thus folds synchronously. This strongly suggests that the inner and outer-cone emission is produced by the same set of particles. • Similarly, the MP and IP emission generally bursts and nulls together, but is otherwise uncorrelated. This suggests either that the core emission is produced at low altitude within the polar flux tube by sets of particles that produce the conal emission at higher altitude, or, perhaps that the conal emission is refracted inward as suggested by Petrova & Lyubaski (2000). • If we are correct about the star's emission geometry, then the star's subpulse modulation reflects a complex combination of inner and outer-cone contributions. Our analyses found no basis for determining the pulsar's carousel circulation time; however, some speculations are appropriate. First, identification of a circulation time may be difficult or impossible in a pulsar with intermittent emission. Second, rough estimations of the star's circulation time from Ruderman & Sutherland (1975) or by extrapolaring from B0943+10 (DR01) both suggest that it could be shorti.e., 10-15 P1. If there were then 10 or more "beamlets" in the carousel, the primary "drift" modulation frequency could be aliased into the second order. Such aliasing together with discrete changes in "beamlet" number could give rise to "modes" such as are observed in B1944+17. For instance, were the star's carousel to circulate in some 12 P1 with a configuration of 13 "beamlets", drift modulation similar to the A mode would be produced. 
Similarly, 14 "beamlets" would modulate the PSs with a P3 like that of the B mode, and 12 would produce a nearly stationary modulation very like that of mode C. We found it both fascinating and challenging to study a star such as B1944+17 which exhibits so many distinct phenomena. Clearly, its complex modes and nulls still have much to teach us. Different polarization measurements may be able to better define its sightline geometry, and interferometry could well reveal a "pedestal" of continuous emission. Finally, even longer observations are needed to further explore the star's carousel-beam configuration and fully assess the frequency of partial nulls. ACKNOWLEDGMENTS We are pleased to acknowledge Vishal Gajjar, Tim Hankins, Joeri van Leeuwen and Geoffrey Wright for their critical readings of the manuscript and Jeffrey Herfindal for assistance with aspects of the analysis. We also sincerely thank both Tim Hankins for providing us with the original interpulse discovery observations, and Joel Weisberg for the ionospheric Faraday rotation corrections. One of us (IMK) sincerely thanks the Barry M. Goldwater Scholarship and Excellence in Education program and the UVM College of Arts and Sciences for the APLE Summer Fellowship, which together permitted her to complete this work. The other (JMR) thanks the Anton Pannekoek Astronomical Institute of the University of Amsterdam for their generous hospitality and the Netherlands National Science Foundation and ASTRON for her Visitor Grants. Portions of this work were carried out with support from US National Science Foundation Grants AST 99-87654 and 08-07691. Arecibo Observatory is operated by Cornell University under contract to the US NSF. This work used the NASA ADS system.
Remembered or forgotten stimuli: a functional magnetic resonance imaging study on the effects of emotion Correspondence: Erol Ozcelik, Cankaya University, Faculty of Arts and Sciences, Department of Psychology, Ankara Turkey E-mail: ozcelik@cankaya.edu.tr Received: March 20, 2020; Revised: July 14, 2020; Accepted: September 28, 2020 ABSTRACT Objective: The first aim of this study is to examine why emotional events enhance memory for preceding stimuli. The second goal is to identify brain regions associated with remembering and forgetting by finding brain activation differences during encoding of remembered and forgotten stimuli. The third goal is to examine which brain areas are activated when studying emotional pictures compared to neutral ones. Method: In each trial, a picture of an object followed by an emotional or neutral picture from the Turkish culture were presented to 15 volunteers. The effect of the succeeding pictures on the remembering of preceding stimuli was examined. The participants studied the stimuli in the magnetic resonance scanner and, meanwhile, brain images were taken. The memory performances of the participants were measured with the recognition test administered one week later. Results: Behavioral results suggest that emotion has no effect on memory for preceding stimuli. Functional magnetic resonance imaging results indicate that remembered stimuli compared to forgotten ones caused more activation in left inferior frontal gyrus and left superior medial gyrus. Emotional pictures create more activation in the mid-temporal gyrus and supramarginal gyrus compared to neutral images. Conclusion: Brain structures in which activations are observed in remembered stimuli compared to forgotten ones (left inferior frontal gyrus and left superior medial gyrus) are responsible for the semantic elaboration and associative memory formation. Thus, it can be concluded that object pictures are remembered because they are processed more deeply. Besides, activations are observed in the areas known to be related to the processing of emotional face expressions when emotional and neutral pictures are compared. INTRODUCTION An emotional event, such as a terrible traffic accident, is remembered much better than a neutral event (1). From an evolutionary perspective, remembering information around emotional events is essential for survival and for passing on genes to the next generation (2). In one of the key researches in the field, it has been revealed that presenting emotional pictures 4 seconds after neutral stimuli increase the remembering of neutral stimuli compared to presenting neutral pictures (3). Of interest to researchers here is that emotional events automatically affects the remembering of pre-presented neutral stimuli that are not presented. However, no studies have been carried out to explain why this effect occurs. The brain, once described as a black box, began to be directly observed using advanced techniques such as functional magnetic resonance imaging (fMRI) applied in the field of neuroscience. In this context, this study aims to examine why emotional events increase the remembering of neutral stimuli, which are just presented before them, using the fMRI technique. This advanced imaging method can reveal the brain areas responsible for the enhancement of observed memory performance. 
Since the relationship between brain structures related to this process and other cognitive processes have been described in the literature, arguments can be developed about the cognitive processes that cause an increase in memory performance. Valence and Arousal One of the interesting topics in the literature is that emotional stimuli are remembered more than neutral ones. In emotional memory studies, emotional stimuli are generally examined in two dimensions as valence and arousal (4). Valence determines how positive or negative the stimulus is emotionally, and arousal determines the degree of calmness or excitement in the person (4). Valence and arousal independently increase the remembering of stimuli and trigger different activation in the brain (5). At this point, the valence and arousal properties of emotional stimuli are one of the important issues that need to be investigated. Remembering Emotional Events Research and practice in both cognitive and clinical psychology show that emotional events are better remembered than ordinary events. Similar results were obtained when different stimuli such as words, sentences, and pictures were used (6,7). Stimulants with positive or negative valence are also remembered better than neutral ones (8). As a result of researches carried out on various cognitive processes of emotions using brain imaging techniques, it has been found that medial temporal lobe (MTL) regions, prefrontal cortex, and various brain structures, especially the amygdala, are involved in these processes (4). Researches show that the amygdala occupies a central place in the emotional memory system and plays a role in the activation of other parts of the brain associated with emotional memory (modulator effect) (9). It is observed that the emotional memory performance of patients with bilateral amygdala damage (Urbach-Wiethe syndrome) is decreased (10). In addition, lesion studies conducted with patients with damaged MTL emphasize that reciprocal connections between the amygdala and MTL regions play a critical role in emotional memory processes. Another factor in the better remembering of emotional stimuli is the consolidation process, which makes memory traces stronger over time and become more resistant to interference. While there was no difference in memory tests applied immediately in Urbach-Wiethe patients with amygdala damage compared to normal participants, a loss in memory performance was observed when the memory test was administered late (for example, 1 week after the study) (11). In addition to this study, considering the lesion studies showing that the hippocampus is effective in the consolidation process (12), it can be argued that the amygdala affects the consolidation processes by modulating the hippocampus (13). Dm Effect Research in the literature showing that emotional stimuli are remembered better than neutral ones has increased the interest in the causes of this effect (14). There are many reasons why emotional events are better remembered. Among the researches on this subject, those examining the Dm effect (difference due to memory effect or subsequent memory effect) stand out (15). The Dm effect is defined as the differences in neural activation between remembered and forgotten stimuli in the post-experiment process (16). Eventrelated neuroimaging techniques, especially fMRI and event-related potentials, enable detailed studies of the Dm effect. 
In this case, by looking at the differences in neural activation of remembered and forgotten stimuli during encoding, the reasons for better remembering of emotional stimuli can be investigated using brain imaging techniques. Although there is not enough research on this subject in the literature, studies on the Dm effect using words or pictures via various brain imaging techniques, especially with event-related fMRI, draw attention. It has been found that activation in the frontal and MTL predicts the Dm effect in some studies using emotional and neutral stimuli (17). The Effect of Emotional Events on Remembering the Preceding Stimuli In addition to better remembering of emotional events, one of the leading research in the field (3) found that emotional events increase the remembering of preceding stimuli. The point that draws the attention of researchers here; emotion automatically affects the remembering of preceding neutral stimuli that are not presented. In this study, pictures with positive or negative emotions were presented 4 or 9 seconds following the neutral stimuli. Participants assessed the intensity of emotional pictures and remembering of neutral stimuli by scoring. One week later, the volunteers were presented with neutral stimuli and asked which ones they were working on. Unexpectedly, it was found that the memory performance associated with neutral stimuli shown 4 seconds before the emotional pictures were affected by the emotional intensity of the pictures, while the memory performance associated with the neutral stimuli shown 9 seconds before was not affected by emotional events. In the second experiment, unconventional neutral pictures were presented instead of emotional pictures, and the effect found in the previous experiment was not observed. With the second experiment, it was shown that better remembering of stimuli is due to the emotional nature of the events, not the distinctive characteristics of the events. Similar to the findings of Anderson et al. (3), it was found in a very recent study that emotional stimuli increase the remembering of stimuli related to task goals (18). In an experiment during which each picture was displayed for 100 milliseconds, one of the 12 pictures presented was rotated 90 degrees to the right or left (19). In this experiment where stimuli were presented very quickly, the task of the participants was to find out which direction the critical picture was rotated. It was observed that when a negative emotional picture was presented with 2 stimuli before the critical pictures, the direction of the rotation was better detected than a neutral picture. These results were interpreted as emotional events strengthened the consolidation process of critical pictures. These two studies show that emotional events increase the remembering of preceding stimuli compared to neutral events, but the reason for this effect is unknown. The reason emotional events increase the remembering of preceding stimuli may also be due to mechanisms other than consolidation processes. Finn and Roediger (20) argued that the cause stemmed from the reconsolidation processes that occur during the return of memory traces. However, Dunsmoor et al. (21) claimed that information that could be associated with emotional events was better remembered. Mather et al. (22) argued that emotional events increased the priority of information around and thus elevated memory performance in a model where stimuli competed for limited mental resources. 
Similarly, it was observed that the more important the stimuli were perceived, the more it was remembered. (23) Schmidt and Schmidt (24) suggested that the responsible mechanism was the attention directed to emotional information. In light of the findings in the literature, the hypotheses of this study are as follows: Hypothesis 1. The increase in activation in the amygdala will modulate the hippocampus and increase the remembering of the emotional events preceding stimuli (13). Hypothesis 2. More activation will occur in MTL and amygdala in the brain compared to neutral ones when functioning on emotional pictures (4). Hypothesis 3. More activation will be observed in the frontal and medial lobes during the encoding of remembered stimuli compared to those forgotten (17). This study is aimed at revealing the cause of the aforementioned effect with the cognitive paradigms using the fMRI technique. The fMRI technique has the potential to explain the sources of the impact of emotional events on remembering, as it allows us to examine the processes that take place in the brain while performing a task. The observed increase in memory performance may be due to emotional events affecting the consolidation processes. Emotional stimuli can affect how post-stimuli are processed. Remembering the post-presented neutral stimuli may vary depending on participants' moods, their state of distraction source, or the clear strategies that can be applied. However, emotional events unlikely to increase the remembering of preceding stimuli, that is, not being displayed at that moment. Participants do not develop a specific strategy, because they do not know whether an emotional or neutral event will be presented following the neutral stimuli. The mood cannot affect the processing of neutral stimuli because emotional events are presented following the stimuli to be remembered. Presenting emotional events in post-neutral stimuli is expected to reduce memory performance, as demonstrated in various experiments (e.g., 25) in which emotional events are highly used for sources of distraction. To date, there has been no study testing such possibilities. In this context, the main purpose of our study is to reveal why emotional events increase the remembering of preceding stimuli. Although behavioral experiments indicate that emotional events increase the remembering of preceding neutral events, no research has been conducted to show the reason for the observed increase in the memory performance. With behavioral experiments, the effect of independent variables on memory performance can be examined, but information about the processes taking place in the brain cannot be obtained. However, brain structures associated with cognitive processes can be revealed with the fMRI technique. The objectives of our study based on these needs are as follows: 1. Finding out why emotional events increase the remembering of preceding stimuli and the brain areas associated with this increase in performance, 2. To examine in which areas of the brain activation occur when studying emotional pictures compared to neutral ones, 3. To identify the brain areas associated with remembering and forgetting information by making a difference in how remembered and forgotten stimuli are processed during encoding. Sample Sample size for planned repeated measures analysis of variance (ANOVA), was calculated using G * Power software based on one-way, 5% error, 80% power, and 0.35 effect size. 
The expected effect size is smaller than that obtained in the literature (0.51) (3). According to the power analysis results, it was determined that the sample size should be at least 15 people. 15 undergraduate students studying at Atilim University Faculty of Engineering, who took human-computer interaction course, had a right-hand preference, did not have any psychiatric disorders, were not using psychiatric or neurological medications, had normal or corrected normal acuity, and did not have any substance addiction, voluntarily participated into fMRI study. These students were given additional points for the course they took for their participation. Students who did not want to participate in the experiment were given assignments that they could complete in 2-3 hours in order to provide equal opportunities. The study was approved by the Atilım University Human Research Ethics Committee. Experiment Design: 2-factor (the type of second stimuli; emotional, neutral) repeated measurement experiment design was used. Validity Study: The second stimuli were chosen from Turkish culture instead of the International Affective Picture System (26) library, which reflects American culture. While the emotional pictures included scenes such as an earthquake, martyrs funeral, and terrorist incident; the neutral paintings included scenes such as a village coffee house, a school library, and road construction work. Stimuli were selected from the internet since no emotionally charged picture database reflects Turkish culture with norm data as far as we know in the literature. With the validity study, 248 pictures of the same size were presented to the participants in random order and in the middle of the screen without a time limit, and the intensity of excitement felt for each stimulus was asked to evaluate. While looking at the pictures, they were asked to press keys 1 to 9 on the keyboard, depending on how calm (1 key on the keyboard) or excited (9 key of the keyboard) was felt. As soon as the participant's assessment was received, the next stimulus was presented without a time interval. Different participants were recruited for validity and fMRI studies. A total of 48 pictures were used in the experiment, 24 negative with the highest arousal and 24 neutral with the least arousal. Measures and Process A signed consent form and MR information form were obtained from all participants before starting the experiment. Then Edinburg Handedness Inventory was applied. This experiment was designed according to the method of the study conducted by Anderson et al. (3). Accordingly, the first stimulus and then the second stimulus was presented in each trial. The first stimulus was selected from 48 neutral object pictures (such as plane picture, ant picture, apple picture, hand picture, table picture) obtained from the Snodgrass and Vanderwart database, which are frequently used in psychology experiments (27). These 48 black and white object pictures that individuals will work on were divided into two, resulting in two lists of 24 pictures each. The counterbalancing technique was used to assign these lists to experimental conditions. Thanks to this control method, each list was used equally in each experimental case. The presentation order of the pictures in each list was randomly determined for each participant. According to the findings obtained from the validity study, 24 negative with the highest arousal and 24 neutral with the lowest arousal constituted the second stimuli of the study. 
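As a concrete illustration of the stimulus-selection step described above, the following Python sketch shows how the 24 highest-arousal and 24 lowest-arousal pictures could be picked from the validity-study ratings. This is not the authors' code; the file name and column names are hypothetical placeholders.

```python
# Illustrative sketch (not the authors' code): selecting the 24 highest- and
# 24 lowest-arousal pictures from validity-study ratings. Assumes a
# hypothetical long-format file "validity_ratings.csv" with columns
# picture_id, rater_id, arousal (1 = calm ... 9 = excited).
import pandas as pd

ratings = pd.read_csv("validity_ratings.csv")

# Mean arousal per picture across all raters.
mean_arousal = ratings.groupby("picture_id")["arousal"].mean()

# 24 most arousing pictures -> negative/emotional set,
# 24 least arousing pictures -> neutral set.
emotional_set = mean_arousal.nlargest(24).index.tolist()
neutral_set = mean_arousal.nsmallest(24).index.tolist()

print("Emotional stimuli:", emotional_set)
print("Neutral stimuli:", neutral_set)
```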
Since variables such as human and animal presence are known to affect the remembering of pictures, the lists were created to be equal in terms of these two variables. Positive pictures were not used in this experiment to shorten the duration of the experiment, as Anderson et al. (3) showed that arousal, not the valence of the pictures, affects memory performance. Stimuli were presented via a computer and projection device (NEC NP125) outside the MR room using the E-prime experimental package program (28). While on the MR device, participants saw the stimuli projected on the curtain with a mirror attached to the head coil. After the structural shooting, functional shots were carried out in 4 blocks of 12 trials each, only during the operation of stimuli. In each trial, participants were presented with a fixation mark (2 seconds) followed by the neutral-valued primary stimulus (4 seconds). Participants were asked to evaluate whether each object was living or nonliving to process the stimuli in deep. In both lists created, half of the objects belonged to living beings and half to non-living beings. Volunteers were reminded of their duties by displaying the text "Live: Y N" right above the first stimuli (29). Participants were asked to respond to this task within 4 seconds. This text was displayed on the screen for 4 seconds, even if it was answered before this time. Two keys of the MR compatible response pad device (Current Designs, Philadelphia, United States) were used to collect the responses of individuals. The second stimulus, emotional or neutral, was then presented for 4 seconds. Participants were not assigned a task related to the second stimulus and were asked to only look at the stimuli. Following the object picture and emotional/neutral Turkish culture picture, mathematical operations (such as 5+6+7=18) were given to add 3 single-digit numbers for 15 seconds in each trial, and the accuracy of these operations was asked to evaluate. The purpose of this task was to prevent emotional stimuli from affecting other trials (3). Half of the operations shown on the screen were correct while the rest were incorrect. As soon as the volunteers' answers were received, a new mathematical operation was presented. In studies examining the effect of emotion on memory through consolidation processes, one of the most common periods used between study and memory testing is 1 week (eg, 3,30,31). Therefore, memory performance in front of the computer was evaluated by the recognition test 1 week after all object pictures were studied. In this test, 48 studied object pictures and 48 unworked (new) object pictures taken from the same database were presented in random order. After each stimulus was shown on the screen for 2 seconds, the participants were asked whether they recognized the pictures and they were asked to press the relevant key. After pressing the key, the next stimulus was instantly presented. The remembering of the emotional and neutral pictures was also evaluated by the recognition test, and 48 old (24 emotional and 24 neutral pictures) and 48 new (24 emotional and 24 neutral) pictures were presented in a random order, similar to the object pictures. MR Captures: MR images were taken with a 3-Tesla MR device (Magnetom, Trio TIM system, Siemens) at the National Magnetic Resonance Imaging Center. 
Anatomical images were acquired with a T1-weighted MP-RAGE (magnetization-prepared rapid gradient-echo) sequence.

Behavioral Data Analysis: Statistical evaluations of the behavioral data were made with the SPSS 25 software package. The Kolmogorov-Smirnov normality test was used to check whether the data were normally distributed. Since some variables did not display a normal distribution (p<0.05), the Wilcoxon test for paired samples, which does not require the normality assumption, was applied in the analyses. The significance level was set at p<0.05 in all statistical evaluations.

MR Data Analysis: Functional images were analyzed with the AFNI 18.3.03 program (32). Individual analyses were performed for each participant with the afni_proc.py Python script. The first two 3D functional images acquired at the beginning of each block were excluded from the analysis to avoid the noise present while the MR signal was still stabilizing. The 3D functional images were then pre-processed: despiking, slice timing correction, motion correction, coregistration of functional and structural images, spatial normalization, spatial smoothing, masking and scaling were performed, in that order. To eliminate noise caused by possible spikes in the functional data, the 3dDespike program, one of AFNI's recommended pre-processing steps, was used. Slice timing correction was performed relative to the first acquired slice. For the correction of head movements and the coregistration of functional and structural images, the 3D volume with the minimum outlier fraction was used as the reference. Spatial smoothing was performed with a Gaussian kernel with a full width at half maximum (FWHM) of 4.5 mm. Anatomical and functional data were normalized to Talairach space according to the Colin N27 template. With masking, automatic masks were created and non-brain areas were removed. To eliminate signal differences between participants and acquisitions, the time series recorded in each voxel was converted by scaling into a percent signal change relative to its mean. A single-gamma hemodynamic response function was used to model the blood oxygen level-dependent (BOLD) responses. The beta coefficients of each predictor variable were estimated by general linear model (GLM) regression. To remove effects caused by head movements, a total of 24 motion parameters, 6 per block, were included in the regression analysis. In the group analysis, the beta weights obtained from the GLM were examined by random-effects analysis of variance (ANOVA) in order to generalize to the population. To control for errors arising from multiple comparisons, cluster-size thresholds corresponding to a corrected significance level of p<0.05 at an uncorrected voxel-wise threshold of p<0.005 were determined by Monte Carlo simulations with the 3dClustSim program (with the ACF option), and only results exceeding these thresholds were reported.

Validity Findings: The arousal values of the selected negative images (Mean=7.14, SD=0.41) were higher than those of the neutral images (Mean=2.16, SD=0.20) according to the Mann-Whitney U test results (Z=-5.94, p<0.001). To examine inter-rater reliability, the intra-class correlation coefficient was computed and found to be at a good level (ICC=0.82, p<0.001) according to the criteria of Koo and Li (33). These results indicate that the pictures selected from Turkish culture are valid and reliable.
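To make the behavioural analysis strategy described above concrete, here is a minimal SciPy-based sketch (the study itself used SPSS): a simple normality check on the paired differences followed by a paired-samples Wilcoxon test. The score arrays are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of the behavioural analysis strategy described above
# (normality check, then a paired-samples Wilcoxon test), using SciPy
# instead of SPSS. The arrays below are hypothetical placeholders,
# one corrected-recognition value per participant and condition.
import numpy as np
from scipy import stats

emotional = np.array([0.55, 0.62, 0.48, 0.71, 0.60, 0.52, 0.66,
                      0.58, 0.49, 0.63, 0.57, 0.61, 0.54, 0.68, 0.59])
neutral = np.array([0.57, 0.60, 0.50, 0.69, 0.62, 0.51, 0.64,
                    0.59, 0.47, 0.65, 0.56, 0.60, 0.55, 0.66, 0.58])

# Simple Kolmogorov-Smirnov check of the paired differences against a
# normal distribution with the sample mean and standard deviation
# (an approximation, since the parameters are estimated from the data).
diff = emotional - neutral
ks_stat, ks_p = stats.kstest(diff, "norm", args=(diff.mean(), diff.std(ddof=1)))
print(f"KS normality check: D = {ks_stat:.3f}, p = {ks_p:.3f}")

# Non-parametric paired comparison, used when normality is rejected.
w_stat, w_p = stats.wilcoxon(emotional, neutral)
print(f"Wilcoxon signed-rank test: W = {w_stat:.1f}, p = {w_p:.3f}")
```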
Demographic and Neuropsychological Findings

Participants were between 21 and 29 years old, with a mean age of 23.07 years and a standard deviation of 1.94. Eight of the volunteers were women and seven were men. According to the Edinburgh Handedness Inventory, all participants preferred to use their right hand.

Behavioral Findings

The paired-samples Wilcoxon test was applied separately for each dependent variable (hit rate, false alarm rate, corrected recognition). In all of these analyses, the type of the second stimulus (emotional, neutral) was used as the independent variable, and the false alarm rate was subtracted from the hit rate of each participant to eliminate the effect of guessing on memory performance (34). When the corrected recognition values were analyzed, no significant difference was found between remembering the object pictures preceding emotional pictures and remembering those preceding neutral pictures (Z=-2.80, p=0.78). Similar statistically non-significant results were also found for hits (Z=-1.14, p=0.26) and false alarms (Z=-0.47, p=0.64) (Table 1). As seen in Table 2, according to the corrected recognition results obtained from the memory tests performed one week after the study session to measure memory for the second stimuli, emotional pictures were remembered at a higher rate than neutral ones (Z=-2.53, p=0.01). The effect of stimulus type (emotional, neutral) was also significant for hits (Z=-3.35, p=0.001) and false alarms (Z=-2.62, p=0.01). During the study session, no effect of stimulus type (emotional, neutral) was observed on the accuracy of the mathematical operations presented in each trial after the first and second stimuli (Z=-0.21, p=0.83) or on the corresponding reaction times (Z=-0.97, p=0.33). Similarly, the effect of stimulus type on other dependent variables such as the accuracy of the living/nonliving judgment task (Z=-0.53, p=0.60) and its reaction time (Z=-0.68, p=0.50) was not significant. The means and standard deviations of these variables are given in Table 3.

fMRI Findings

As seen in Figure 1, the remembered stimuli caused more activation in the left inferior frontal gyrus (IFG) and left superior medial gyrus than the forgotten ones. Emotional pictures created more activation in the mid-temporal gyrus and supramarginal gyrus compared to the neutral ones (Figure 2). The centers of mass and voxel sizes of the activation areas in Talairach coordinates are given in Table 4.

Figure 2. Activation areas associated with emotion (Emotional > Neutral) in the brain. The activations observed in (a) the right mid-temporal gyrus and (b) the right supramarginal gyrus are shown in axial, sagittal, and coronal planes. Similar activations also exist on the left side of the same structures. Activation areas are overlaid on the Colin N27 anatomical image. The right mid-temporal gyrus is a brain structure located in the middle part of the temporal lobe, between the superior temporal sulcus and the inferior temporal sulcus, and plays a role in the recognition of facial expressions as part of the ventral pathway. The supramarginal gyrus, also known as Brodmann area 40 (BA 40), is part of the parietal lobe and of the mirror neuron system and is involved in the processing of human gestures.

DISCUSSION

In this study, event-related fMRI, a widely used neuroimaging technique that makes it possible to observe the brain areas involved in cognitive processes, was employed. With this technique, it was aimed to identify the brain areas associated with the effect reported by Anderson et al. (3), namely that stimuli preceding emotional events are better remembered. However, the behavioral findings showed that neutral stimuli preceding emotional events were not remembered better. Hence, Hypothesis 1 was not supported.
There are thought to be two reasons for this. First, the study was carried out in an fMRI scanner in a laboratory environment, which is quite unlike the participants' daily lives. Second, as will be explained in detail below, the emotional (negative) pictures used in the study did not affect the participants sufficiently. Accordingly, the activation differences during the processing of emotional and neutral stimuli were examined in the fMRI findings, and it was found that emotional stimuli, unlike the neutral ones, caused activation in four different areas: the mid-temporal gyrus (MTG), the inferior temporal gyrus, the fusiform gyrus and the supramarginal gyrus. However, since no activation was found in the main brain areas (amygdala and MTL regions) where emotional stimuli were expected to produce activation according to Hypothesis 2, the Dm effect was examined, and it was observed that remembered stimuli caused more activation in the left IFG and left superior medial gyrus. Comparing memory for the emotional pictures with that for the neutral pictures, the participants recognized the emotional pictures at a higher rate than the neutral pictures, in accordance with the literature (e.g., 6). Looking at the fMRI findings, emotional pictures caused more activation than neutral ones in four brain areas: the MTG, the inferior temporal gyrus, the fusiform gyrus and the supramarginal gyrus. Although the MTG is a part of the visual ventral pathway, there is not enough information about its function (35). In the literature, there are studies showing that the MTG is activated during the processing of negative facial expressions (such as angry or sad faces) (36) and negative pictures (37), and that it is involved in the object recognition process (38). Considering that negative and neutral images were used in this study, object recognition is expected to occur for both emotional and neutral pictures. The greater MTG activation for negative pictures may therefore be due to the processing of the negative content rather than to the object recognition process. This interpretation is also supported by the activations observed in the inferior temporal gyrus, fusiform gyrus, and supramarginal gyrus during the processing of negative stimuli (39,40). Although the inferior temporal gyrus is a part of the ventral pathway together with the MTG, it plays a role especially in object recognition and facial expression processing (41). The fusiform gyrus is known to be involved in color recognition and facial expression processing (40). Finally, the supramarginal gyrus is a part of the mirror neuron system and is a brain area involved in the processing of human gestures (42). Looking at what these areas have in common, we can conclude that all of them are involved in the encoding of negative facial expressions. The emotional emphasis in the pictures used in the study is usually carried by human facial expressions (such as people crying or expressing despair).
Considering all these, we can conclude that MTG, inferior temporal gyrus, fusiform gyrus and supramarginal gyrus are activated during the recognition process of the negative facial expressions in accordance with the literature. When evaluating this result, it is likely that the participants paid attention to the facial expressions in the pictures, as most of the secondary stimuli used in the experiment contained human faces. In summary, emotional pictures were found to generate more activation in the MTG and supramarginal gyrus than neutral ones, and as these brain structures were responsible for the processing facial expressions, the participants were thought to pay attention to how people felt in the emotional pictures with predominantly human face. For example, it should be looked at the people's facial expressions to see if they're suffering or sad. In this case, the association of the activated brain areas with the processing both the negative pictures and facial expressions can be explained by considering the emotional emphasis of the pictures used. Although our study found activation in brain areas associated with emotional facial expressions, no activation was observed in the expected amygdala and MTL associated with emotional stimuli. Therefore, the findings do not support Hypothesis 2. Among the reasons why the expected results were not achieved may be that the pictures presented in the study did not affect the participants sufficiently. Considering Turkey's agenda, a new one is added frequently to the events such as terrorism, natural disasters, crime, violence against women and children and even in many television series and Turkish film such events are conveyed as if they were part of daily life (eg., [43][44][45][46]. For example, one third of the total time contain violence in television broadcasting five channels in 80 films shown in Turkey (46). It is possible that these facts exist for the Turkish people, who may be exposed to news and images such as death and violence, whose emotional intensity is probably more than the pictures used in the media for information and entertainment purposes every day in our study, became commonplace over time, and therefore the pictures presented in our study did not affect the participants as expected (47). As a result, the amygdala and MTL regions may not have been activated, because the pictures presented in the experiment did not cause sufficient emotional arousal in the participants. Accordingly, it is a normal result that the object pictures presented preceding emotional stimuli are not remembered better. Another reason why emotional stimuli may not increase the remembering of preceding object pictures may be that the participants did not pay enough attention to these emotional pictures (25). When the behavioral data are examined, it can be seen that the accuracy of emotional stimuli is 0.70 (Table 2). This memory performance obtained in the recognition test one week later supports that the participants paid attention to emotional pictures. Another reason for not achieving the expected results may be that the arithmetic process given after emotional stimuli consumes limited mental resources. According to the model of Mather et al. (22), stimuli compete for limited mental resources, emotional events in such a competition take precedence and enable better remembering of information around them. 
Participants who want to perform the arithmetic process correctly and quickly after emotional stimuli may have given their limited mental resources to this task, and therefore, there may not be enough mental resources left for emotional stimuli to increase memory. When the activation differences between the encoding of the remembered and forgotten object images were examined, more activation was found in the left IFG and left superior medial gyrus in remembered stimuli compared to those forgotten. These results support Hypothesis 3. At this point, there are studies in the literature showing that the activations in the frontal and medial lobes can predict the Dm effect (48). The IFG, which is a part of the prefrontal cortex whose activation is expected especially for the Dm effect, is involved in semantic elaboration and associative memory formation processes that support the formation of memories (4,49,50). Considering the relationship between processes such as forming elaboration and associative memories with deep processing, it can be concluded that remembering stems from deep processing (29). Given the studies showing that emotional stimuli are involved in deeper encoding processes than the neutral ones (51), semantic encoding of emotional stimuli may provide better remembering of these information compared to neutral ones. To support this possibility, in a study (52) that took the semantic elaboration level as a variable found that the semantic-based encoded stimuli were better remembered than perceptionbased stimuli. In addition, the pictures presented to the participants are also an important factor that evokes them. IFG is also involved in the formation of associative memories. In this study, the stimuli presented were selected from the Turkish culture pictures. Therefore, participants may find pictures that can easily be related to their past experiences and daily lives in the pictures and thus they can better remember some pictures. However, the superior medial gyrus activation supports the IFG's semantic elaboration activation. It is known that the superior medial gyrus located in Brodmann 8th area (BA 8) plays a role in attention and semantic elaboration processes (53). Studies on the Dm effect have also shown the activation of BA 8 in remembered stimuli (54). The observation of BA 8 activation in remembered stimuli is consistent with the finding in the literature that remembered stimuli are better remembered by their involvement in the semantic elaboration process. In summary, more activation occurred in the left IFG and left superior medial gyrus areas of the brain than those forgotten during the processing of remembered stimuli. Since the relationships of these brain structures with the processes of semantic elaboration and associative memory formation are known, the more semantic elaboration and personal memories that participants create when encoding stimuli, the better they will remember the information. Looking to the factors limiting the study, the fact that the pictures used belonging to Turkish culture may lead to associative learning, since it is more likely to coincide with the lives of the participants. The selected pictures can create different emotions among the participants. Therefore, emotional picture libraries adapted to Turkish culture can be developed in future studies and these stimulus sets can be used in memory experiments. 
In addition, only negative pictures were used in the study, based on studies showing that participants are affected by the arousal of emotional stimuli rather than their valence (positive or negative). However, some studies in the literature have found different areas of activation for positive stimuli. Future studies may examine the differences that arise when positive pictures are presented. In addition, the emotional emphasis in the stimuli used in this study was on facial expressions. Future studies may investigate changes in the activation areas by keeping the other variables in the pictures (valence, arousal and the number of pictures with a human face) constant and shifting the emotional emphasis. In this way, it can be seen whether the participants were affected by the facial expressions in the pictures or by the emotional arousal of the pictures. In summary, our research did not show that emotional stimuli increase the remembering of preceding neutral stimuli. However, activations in the MTG, inferior temporal gyrus, fusiform gyrus and supramarginal gyrus were found during the processing of negative pictures compared to neutral stimuli. These areas are related to the processing of emotional facial expressions. These activations can be explained by the fact that, in the pictures used, the emotional emphasis was on the facial expressions. In addition, when the Dm effect was examined, activation was detected in the left IFG and the left superior medial gyrus. These activations are associated with semantic elaboration processes. These findings can be explained by the conclusion that the object pictures were remembered because of their deeper processing. Although there are many studies on emotion and memory in our country, studies investigating how cultural variables affect emotion and memory can provide significant contributions to the literature.

Informed Consent: Written consent has been obtained from the participants.
1468810
s2orc/train
v2
2016-03-01T03:19:46.873Z
2015-11-25T00:00:00.000Z
Overview of Modelling and Advanced Control Strategies for Wind Turbine Systems

The motivation for this paper comes from a real need to have an overview of the challenges of modelling and control for very demanding systems, such as wind turbine systems, which require reliability, availability, maintainability, and safety over power conversion efficiency. These issues have begun to stimulate research and development in the wider control community, particularly for those installations that need a high degree of “sustainability”. Note that this represents a key point for offshore wind turbines, since they are characterised by expensive and/or safety critical maintenance work. In this case, a clear conflict exists between ensuring a high degree of availability and reducing maintenance times, which affect the final energy cost. On the other hand, wind turbines have highly nonlinear dynamics, with a stochastic and uncontrollable driving force as input in the form of wind speed, thus representing an interesting challenge also from the modelling point of view. Suitable control methods can provide a sustainable optimisation of the energy conversion efficiency over wider than normally expected working conditions. Moreover, a proper mathematical description of the wind turbine system should be able to capture the complete behaviour of the process under monitoring, thus providing an important impact on the control design itself. In this way, the control scheme could guarantee prescribed performance, whilst also giving a degree of “tolerance” to possible deviations of characteristic properties or system parameters from standard conditions, if properly included in the wind turbine model itself. The most important developments in advanced controllers for wind turbines are also briefly referenced, and open problems in the areas of modelling of wind turbines are finally outlined.

Introduction

Wind energy can be considered as a fast-developing multidisciplinary field consisting of several branches of engineering sciences. The National Renewable Energy Laboratory (NREL) estimated a growth rate of the installed wind energy capacity of about 30% from 2001 to 2006 [1], and an even faster rate up to 2014, as represented in Figure 1 [2]. After 2009, more than 50% of new wind power capacity was installed outside of the original markets of Europe and the U.S., mainly driven by the market growth in China, which now has 101,424 MW of wind power installed [2]. Several other countries had reached quite high levels of stationary wind power production by 2015, with rates from 9% to 21%, such as Denmark, Portugal, Spain, France, Ireland, Germany, and Sweden [2]. Since 2009, 83 countries around the world have been exploiting wind energy on a commercial basis, as wind power is considered a renewable, sustainable and green solution for energy harvesting. Note, however, that even though the U.S. currently obtains less than 2% of its required electrical energy from wind, the most recent NREL report states that the U.S. will increase this share up to 30% by the year 2030 [2]. Note also that, despite the fast growth of the installed wind turbine capacity in recent years, multidisciplinary engineering and science challenges still exist [3]. Moreover, wind turbine installations must guarantee both power capture and economic advantages, thus motivating the dramatic growth of wind turbines represented in Figure 2 [2].
Industrial wind turbines have large rotors and flexible load-carrying structures that operate in uncertain and noisy environments, thus motivating challenging cases for advanced control solutions [4]. Advanced controllers can achieve the required goal of decreasing the wind energy cost by increasing the capture efficiency; at the same time, they should reduce the structural loads, thus increasing the lifetimes of the components and turbine structures [4,5].
This review paper also aims at sketching the main challenges and the most recent research topics in this area. Although wind turbines can be developed in both vertical-axis and horizontal-axis configurations, as shown in Figure 3, this paper is focussed on horizontal-axis wind turbines, which represent the most common solution in today's large-scale installations.

Figure 3. Example of (a) vertical-axis and (b) horizontal-axis wind turbines.

Horizontal-axis wind turbines have the advantage that the rotor is placed atop a tall tower, where it benefits from larger wind speeds than near the ground. Moreover, they can include pitchable blades in order to improve the power capture, the structural performance, and the overall system stability [6,7]. On the other hand, vertical-axis wind turbines are more common for smaller installations. Note finally that proper wind turbine models are usually oriented to the design of suitable control strategies, which are more effective for large-rotor wind turbines. Therefore, the most recent research focuses on wind turbines with capacities of more than 10 MW.

Another important issue derives from the increasing complexity of wind turbines, which gives rise to stricter requirements in terms of safety, reliability and availability [4]. In fact, due to the increased system complexity and redundancy, large wind turbines are prone to unexpected malfunctions or alterations of the nominal working conditions. Many of these anomalies, even if not critical, often lead to turbine shutdowns, again for safety reasons. Especially in offshore wind turbines, this may result in a substantially reduced availability, because rough weather conditions may prevent the prompt replacement of the damaged system parts. The need for reliability and availability that guarantees continuous energy production requires so-called "sustainable" control solutions [4,8]. These schemes should be able to keep the turbine in operation in the presence of anomalous situations, perhaps with reduced performance, while managing the maintenance operations. Apart from increasing availability and reducing turbine downtimes, sustainable control schemes might also obviate the need for more hardware redundancy, if virtual sensors could replace redundant hardware sensors [9]. The schemes currently employed in wind turbines typically operate at the level of supervisory control, where commonly used strategies include sensor comparison, model comparison and thresholding tests [1,9,10]. These strategies enable safe turbine operations, which involve shutdowns in case of critical situations, but they are not able to actively counteract anomalous working conditions. Therefore, the goal of this work is also to investigate these so-called sustainable control strategies, which allow obtaining a system behaviour that is close to the nominal situation in the presence of unpermitted deviations of any characteristic properties or system parameters from standard conditions (i.e., a fault) [8,11].
Moreover, these schemes should provide the reconstruction of the equivalent unknown input that represents the effect of a fault, thus achieving the so-called fault diagnosis task [9,10]. It is worth noting that the present paper and, e.g., the work [12] share some common issues.
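As a toy illustration of the model-comparison and thresholding tests mentioned above, the following Python sketch flags a possible sensor fault when the residual between a measured signal and a model-based estimate exceeds a fixed threshold. The signals, the injected fault and the threshold are all hypothetical placeholders, not taken from [9,10].

```python
# Toy sketch of a model-comparison/thresholding test: a residual between a
# measured signal and a model-based estimate is checked against a fixed
# threshold to flag a possible sensor fault. All values are placeholders.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 60.0, 0.1)

omega_model = 1.2 + 0.05 * np.sin(0.2 * t)           # model-predicted rotor speed [rad/s]
omega_meas = omega_model + rng.normal(0.0, 0.01, t.size)
omega_meas[t > 40.0] += 0.2                          # injected sensor offset fault

residual = np.abs(omega_meas - omega_model)
threshold = 0.05                                     # placeholder detection threshold

fault_flags = residual > threshold
if fault_flags.any():
    print(f"Fault flagged from t = {t[fault_flags][0]:.1f} s onwards")
else:
    print("No fault detected")
```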
However, this paper also presents some advanced aspects regarding modelling topics [13], as well as the very recent task of sustainable control for wind turbines [14,15], which is fundamental for large rotors and offshore installations.

The rest of this paper is organised as follows. Section 2 describes the basic configurations and operations of wind turbines. Section 3 explains the layout of the wind turbine main control loops, including the wind characteristics and the sensors and actuators commonly used in advanced control. Section 4 describes the basic structure of the typical wind turbine control, which is then followed by a brief discussion of advanced control opportunities in Section 5. Section 5.1 outlines the main sustainable control strategies recently proposed for wind turbines. Concluding remarks are finally summarised in Section 6.

Wind Turbine Modelling Issues

Prior to applying any new control strategy on a real wind turbine, the efficacy of the control scheme has to be tested in a detailed aero-elastic simulation model. Several simulation packages exist that are commonly used in academia and industry for wind turbine load simulation. This paper recalls one of the most used simulation packages, that is, the Fatigue, Aerodynamics, Structures, and Turbulence (FAST) code [16] provided by NREL, since it represents a reference simulation environment for the development of high-fidelity wind turbine prototypes that are taken as reference test-cases for many practical studies [4,17]. FAST provides a high-fidelity wind turbine model with 24 degrees of freedom, which is appropriate for testing the developed control algorithms but not for control design. For the latter purpose, a reduced-order dynamic wind turbine model, which captures only the dynamic effects directly influenced by the control, is recalled in this section and can be used for model-based control design. It almost corresponds to the model presented in [4].

The main components of a horizontal-axis wind turbine are its tower, nacelle, and rotor, as shown in Figure 4. The generator is placed in the nacelle, and it is driven by the high-speed shaft, connected by a gear box to the low-speed shaft of the rotor. The rotor includes the airfoil-shaped blades that are used to capture the wind energy.
This paper recalls one of the most used simulation package, that is the Fatigue, Aerodynamics, Structures, and Turbulence (FAST) code [16] provided by NREL, since it represents a reference simulation environment for the development of high-fidelity wind turbine prototypes that are taken as a reference test-cases for many practical studies [4,17]. FAST provides a high-fidelity wind turbine model with 24 degrees of freedom, which is appropriate for testing the developed control algorithms but not for control design. For the latter purpose, a reduced-order dynamic wind turbine model, which captures only dynamic effects directly influenced by the control, is recalled in this section and it can be used for model-based control design. It almost corresponds to the model presented in [4]. The main components of a horizontal-axis wind turbine are its tower, nacelle, and rotor, as shown in Figure 4. The generator is placed in the nacelle, and it is driven by the high-speed shaft, connected by a gear box to the low-speed shaft of the rotor. The rotor includes the airfoil-shaped blades that are used to capture the wind energy. Figure 5 sketches the complete wind turbine model consisting of several submodels for the mechanical structure, the aerodynamics, as well as the dynamics of the pitch system and the generator/converter system. The generator/converter dynamics are usually described as a first order delay system, as described e.g., in [4], as it represents a realistic assumption for control-oriented modelling. However, when the delay time constant is very small, an ideal converter can be assumed, 4 Figure 5 sketches the complete wind turbine model consisting of several submodels for the mechanical structure, the aerodynamics, as well as the dynamics of the pitch system and the generator/converter system. The generator/converter dynamics are usually described as a first order delay system, as described e.g., in [4], as it represents a realistic assumption for control-oriented modelling. However, when the delay time constant is very small, an ideal converter can be assumed, Energies 2015, 8, such that the reference generator torque signal is equal to the actual generator torque. In this situation, the generator torque can be considered as a system input [4]. Energies 2015, 8, 1-x such that the reference generator torque signal is equal to the actual generator torque. In this situation, the generator torque can be considered as a system input [4]. Figure 5 reports also the wind turbine inputs and outputs. In particular, v is wind speed; F T and T a correspond to the rotor thrust force and rotor torque, respectively; ω r is the rotor angular velocity; x the state vector; T g the generator torque; and T g,d the demanded generator torque. β is the pitch angle, whilst β d its demanded value. Wind Turbine Tower and Blade Models As an example, a mechanical wind turbine model with four degrees of freedom is considered, since these degrees of freedom are the most strongly affected by the wind turbine control. In particular, they represent the fore-aft tower bending, the flap-wise blade bending, the rotor rotation, and the generator rotation [18]. Both the tower and blade bending are not modelled by means of bending beam models, but only the translational displacement of the tower top and the blade tip are considered, where the bending stiffness parameters are transformed into equivalent translational stiffness parameters, as depicted in Figure 6. [19]. 
With reference to Figure 6, the force F_T generates the tower displacement y_T, which is modelled by a mechanical model of mass m_T, with spring and damper parameters k_T and d_T, respectively. For the tower, the equivalent translational stiffness parameter is derived by means of a direct stiffness method common in structural mechanics calculations [18,19]. In the same way, the blade displacement y_B generated by the force F_T is described again as a mass m_B with a spring and damper model, whose parameters are k_B and d_B, respectively. Since the blades move with the tower, the blade tip displacement y_B is considered in the moving tower coordinate system, and the tower motion must be taken into account in the derivation of the kinetic energy of the blade. The force F_T acts both on the tower and on the N blades.
Only one collective blade degree of freedom is considered. Note that the N blade degrees of freedom would have to be considered individually if control strategies for load reduction involving individual blade pitch control were designed. The assumption that the same external force F_T acts on both the tower and the blade degrees of freedom (with N blades) is a simplification. It is reasonable, however, because the rotor thrust force, which is caused by the aerodynamic lift forces acting on the blade elements, acts on the tower top, thus causing a distributed load on each blade. This distributed load generates a bending of the blade, which could be modelled as a bending beam. A beam subjected not to a distributed load but to a concentrated load at the upper point must have a higher bending stiffness in order for the same displacement to result at the upper point. However, the reduced-order wind turbine model considers only the blade tip displacement, which requires the assumption of a translational stiffness. To obtain an adequate translational stiffness constant, the bending stiffness of the bending beam must thus be larger than in the case of a distributed load.

On the other hand, the drivetrain consisting of rotor, shaft and generator is modelled as a two-mass inertia system, including shaft torsion, where the two inertias are connected with a torsional spring with spring constant k_S and a torsional damper with damping constant d_S, as illustrated in Figure 7 [19].
With reference to Figure 7, the angular velocities ω_r and ω_g are the time derivatives of the rotation angles θ_r and θ_g. In this case, the rotor torque T_a is generated by the lift forces on the individual blade elements, whilst T_g represents the generator torque. The ideal gearbox effect can be included in the generator model simply by multiplying the generator inertia J_g by the square of the gearbox ratio n_g.

The motion equations are derived by means of Lagrangian dynamics, which first requires one to define the generalised coordinates and the generalised external forces. In this way, the energy terms of the system are derived, as well as the motion equations. The vector of generalised coordinates is given by q = [y_T, y_B, θ_r, θ_g]^T, whilst the vector of external forces is defined as f = [F_T, F_T, T_a, −T_g]^T. The generalised force F_T represents the rotor thrust force, which can be computed from the wind speed at the blade and from the aerodynamic map of the thrust coefficient. On the other hand, the generalised torque T_a is given by the aerodynamic rotor torque, which can be calculated from the wind speed and from the aerodynamic map of the torque coefficient described in Section 2.4.

By considering the tower dynamics, the complete blade tip displacement is given by y_T + y_B, and the kinetic energy takes the form given in Equation (1). In the same way, the potential energy is given by Equation (2), with n_g the gearbox ratio. The dampings in the system produce generalised friction forces, which can be written as derivatives of a quadratic form, i.e., the dissipation function [20]. The Lagrangian equations of the second order, including the dissipation term [20], then yield the motion equations, where the Lagrangian function L denotes the difference between the kinetic and the potential energy.
As the kinetic energy in Equation (1) does not depend on the generalised coordinates and the potential energy in Equation (2) does not depend on the generalised velocities, the motion equations are obtained in the form of Equation (5). The system of Equation (5) can be rewritten in the matrix form of Equation (6), i.e., M q̈ + D q̇ + K q = f, where the mass matrix M, the damping matrix D and the stiffness matrix K have the form reported in Equation (7). The second-order system of differential equations of Equation (6) can be transformed into a first-order state-space model by introducing the state vector x = [q, q̇]^T. To this aim, Equation (6) is solved with respect to the second time derivative of the coordinate vector q. The equivalent state-space model of the mechanical submodel is thus obtained in the form of Equation (8), where the state vector collects the generalised coordinates and their time derivatives, whilst the system matrices follow directly from M, D and K.
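As an illustration of this transformation, the following Python sketch builds the state-space matrices from given M, D and K (assumed to be available from Equation (7)); the selection matrix S, which maps the physical inputs u_m = [F_T, T_a, T_g] onto the generalised force vector f = [F_T, F_T, T_a, −T_g], is the only additional ingredient, and the numerical entries of M, D and K are not reproduced here.

```python
import numpy as np

def mechanical_state_space(M, D, K, S):
    """Convert the second-order model M q'' + D q' + K q = S u_m into the
    first-order form x' = A x + B u_m, with state x = [q, q']^T and
    q = [y_T, y_B, theta_r, theta_g]."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K,        -Minv @ D]])
    B = np.vstack([np.zeros((n, S.shape[1])), Minv @ S])
    return A, B

# Selection matrix mapping the physical inputs u_m = [F_T, T_a, T_g] onto the
# generalised force vector f = [F_T, F_T, T_a, -T_g] defined in the text.
S = np.array([[1.0, 0.0,  0.0],
              [1.0, 0.0,  0.0],
              [0.0, 1.0,  0.0],
              [0.0, 0.0, -1.0]])
```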
Pitch Model

In pitch-regulated wind turbines, the pitch angle of the blades is controlled only when working above the rated wind speed (the "full-load" condition, as described in the following) to reduce the aerodynamic rotor torque, thus maintaining the turbine at the desired rotor speed. Moreover, the pitching of the blades to the feather position (i.e., 90°) is used as the main braking system to bring the turbine to standstill in critical situations. Two different pitch technologies are usually implemented, using hydraulic and electromechanical actuation systems. For hydraulic pitch systems, the dynamics are described with a second-order model [4], which is able to include oscillatory behaviour. On the other hand, electromechanical pitch systems are usually described by a first-order model, represented by Equation (10), where β and β_d are the physical and the demanded pitch angle, respectively, and the parameter τ denotes the delay time constant.

Generator/Converter Dynamic Model

A description of the generator/converter dynamics can be included in the complete wind turbine system model. Note that simulation-oriented models do not include it, since the generator/converter dynamics are relatively fast. However, when advanced control designs have to be investigated, an explicit generator/converter model might be required. In this situation, a simple first-order delay model can be sufficient, as described e.g., in [4] and represented by Equation (11), where T_g,d represents the demanded generator torque, whilst τ_g is the delay time constant.
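The following minimal sketch simulates both actuator models under the usual first-order lag interpretation of Equations (10) and (11); the time constants, the sampling time and the demanded values are hypothetical placeholders, not parameters taken from [4].

```python
def first_order_step(x, x_demanded, tau, dt):
    """One forward-Euler step of the first-order lag x' = (x_demanded - x)/tau,
    used here for both the electromechanical pitch actuator (Equation (10))
    and the generator/converter torque (Equation (11))."""
    return x + dt * (x_demanded - x) / tau

# Hypothetical time constants and sampling time (not taken from the paper).
tau_pitch, tau_gen, dt = 0.1, 0.02, 0.01

beta, T_g = 0.0, 0.0
beta_d, T_g_d = 5.0, 4.0e4            # demanded pitch angle [deg] and torque [Nm]
for _ in range(200):                  # simulate 2 s
    beta = first_order_step(beta, beta_d, tau_pitch, dt)
    T_g = first_order_step(T_g, T_g_d, tau_gen, dt)
print(round(beta, 2), round(T_g, 1))  # both approach their demanded values
```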
Aerodynamic Model

The aerodynamic submodel consists of the expressions for the thrust force F_T acting on the rotor and the aerodynamic rotor torque T_a. They are determined by the reference force F_st and by the aerodynamic rotor thrust and torque coefficients C_T and C_Q [18], as reported in Equation (12). The reference force F_st is defined from the impact pressure ½ ρ v² and the rotor swept area π R² (with rotor radius R), where ρ denotes the air density, i.e., F_st = ½ ρ π R² v². It is worth noting that simulation-oriented benchmarks use the static wind speed v. However, more accurate scenarios should exploit the effective wind speed v_e = v − (ẏ_T + ẏ_B), i.e., the static wind speed corrected by the tower and blade motion effects. However, the aerodynamic maps used for the calculation of the rotor thrust and torque are usually represented as static two-dimensional tables, which already take into account the dynamic contributions of both the tower and the blade motions. As highlighted by the expressions of Equation (12), the rotor thrust and torque coefficients (C_T, C_Q) depend on the tip-speed ratio λ = ω_r R / v and the pitch angle β.

Therefore, the rotor thrust F_T and the torque T_a assume the expressions of Equation (14), which highlight that they are nonlinear functions depending on the wind speed v, the rotor speed ω_r, and the pitch angle β. These functions are usually expressed as two-dimensional maps, which must be known for the whole range of variation of both the pitch angle and the tip-speed ratio. These maps are usually a static approximation of more detailed aerodynamic computations, which can be obtained using, for example, the Blade Element Momentum (BEM) method. In this case, the aerodynamic lift and drag forces at each blade section are calculated and integrated in order to obtain the rotor thrust and torque [18]. More accurate maps can be obtained by exploiting the calculations implemented in the AeroDyn module of the FAST code, where the maps are extracted from several simulation runs [21]. It is worth noting that, for simulation purposes, the tabulated versions of the aerodynamic maps C_Q and C_T are sufficient. On the other hand, for control design, the derivatives of the rotor torque (and thrust) are needed, thus requiring a description of the aerodynamic maps as analytical functions. Therefore, these maps can be approximated using combinations of polynomial and exponential functions, whose powers and coefficients are estimated via e.g., modelling [22] or identification [23,24] approaches.
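As a sketch of how such tabulated maps are typically used in simulation, the following Python fragment interpolates placeholder C_T and C_Q tables and evaluates the rotor thrust and torque following the structure of Equations (12)–(14); the grids, table values, rotor radius and air density are illustrative assumptions, and the normalisation should be checked against the actual Equation (12).

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder grids and tables: in practice C_T and C_Q are extracted from
# aero-elastic codes such as FAST/AeroDyn as 2-D tables over (lambda, beta).
lam_grid = np.linspace(1.0, 15.0, 29)          # tip-speed ratio grid
beta_grid = np.linspace(0.0, 25.0, 26)         # pitch angle grid [deg]
C_T_tab = np.random.rand(29, 26) * 0.8         # placeholder thrust coefficients
C_Q_tab = np.random.rand(29, 26) * 0.1         # placeholder torque coefficients

C_T = RegularGridInterpolator((lam_grid, beta_grid), C_T_tab)
C_Q = RegularGridInterpolator((lam_grid, beta_grid), C_Q_tab)

def aero_loads(v, omega_r, beta, R=63.0, rho=1.225):
    """Rotor thrust F_T and torque T_a obtained from the reference force
    F_st = 0.5*rho*pi*R^2*v^2 and the interpolated coefficients (one common
    convention for Equations (12)-(14)); R and rho are hypothetical values."""
    lam = omega_r * R / v                      # tip-speed ratio
    F_st = 0.5 * rho * np.pi * R**2 * v**2     # reference force
    F_T = F_st * C_T([[lam, beta]])[0]         # rotor thrust
    T_a = F_st * R * C_Q([[lam, beta]])[0]     # aerodynamic rotor torque
    return F_T, T_a
```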
Wind Turbine Overall Model

By replacing the expressions of Equation (14) for the rotor thrust and torque into the mechanical model of Equation (8), and adding the models of Equations (10) and (11) for the pitch and the generator/converter dynamics, a nonlinear state-space model is obtained (Equation (15)), with a state vector that now also includes the pitch angle and the generator torque. Since the rotor thrust force and the rotor torque have been used as inputs of the vector u_m in the mechanical submodel of Equation (8), a new input vector is defined for the complete state-space model of Equation (15), i.e., u = [β_d, T_g]^T, whose components are the demanded pitch angle and the generator torque, respectively. The wind speed is normally considered as a disturbance input. The linear part of the state-space model of Equation (15) is defined by suitable system matrices, whilst the system vector g(x, v) of Equation (15) depends nonlinearly on the state and input vectors, as detailed in Equation (18); there, the rotor thrust and torque expressions are given by Equation (14), whilst the mass and damping matrices are defined in Equation (7).

It is worth noting that in a real wind turbine the centrifugal forces acting on the rotating rotor blades lead to a stiffening of the blades. As a consequence, the bending behaviour of the rotor blades depends on the rotor speed itself. By considering again the translational spring-mass system of the blade-tip displacement, this second-order effect can be included in the model of Equation (15) by introducing a translational blade stiffness parameter dependent on the rotor speed, i.e., k_B(ω_r) = α m_B r_B ω_r², where r_B denotes the distance from the blade root to the blade centre of mass and α is a tuning parameter. In this way, by including the centrifugal stiffening correction, the nonlinear system vector g(x, v) of Equation (18) is modified accordingly. The inclusion of the centrifugal term is inspired by the FAST code, in order to obtain a high-fidelity wind turbine simulation model. For example, the translational blade bending model could be required when overspeed scenarios have to be taken into account. However, for the usual operating regimes of a wind turbine, the corrections induced by the centrifugal blade stiffening have only minor effects on the final results. Therefore, the centrifugal correction has been recalled here for the sake of completeness, but it has limited interest in real cases.

Measurement Errors

Wind turbine high-fidelity simulators, which were described e.g., in [4,25,26], consider white noise added to all measurements. This relies on the assumption that noisy sensor signals should represent more realistic scenarios. However, this is not the case, as a realistic simulation would require an accurate knowledge of each sensor and its measurement reliability. To the best of the author's knowledge, and from his experience with wind turbine systems for fault diagnosis and fault-tolerant control [27,28], all main measurements acquired from the wind turbine process (rotor and generator speed, pitch angle, generator torque) are virtually noise-free or affected by very weak noise. On the other hand, with reference to real wind turbines, one characteristic frequency usually affects the generator speed signal, due to the periodic rotor excitation of the drivetrain. When the generator speed is used as a controlled variable, a notch filter is thus applied to smooth the generator speed signal. Such a notch filter is usually applied in industrial wind turbine control, as described e.g., in [29,30]. On the other hand, the rotor speed measurement is normally assumed to be a continuous signal. However, in many wind turbines the rotor speed signal is discretised, due to the limited number of metal pieces on the main shaft that are scanned by a magnetic sensor, although better sensing technologies, which could yield nearly continuous signals, are already available, such as the optical scanning of densely spaced barcodes.
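A minimal example of the notch filtering mentioned above is sketched below with SciPy; the sampling rate, the frequency to be removed and the quality factor are hypothetical, and the zero-phase filtfilt call is only suitable for offline analysis (a causal filter would be used inside the control loop).

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 100.0    # sampling frequency of the generator speed signal [Hz]
f_dt = 0.6    # hypothetical drivetrain excitation frequency to remove [Hz]
Q = 5.0       # notch quality factor (bandwidth = f_dt / Q)

b, a = iirnotch(w0=f_dt, Q=Q, fs=fs)

t = np.arange(0.0, 60.0, 1.0 / fs)
omega_g = 160.0 + 0.5 * np.sin(2 * np.pi * f_dt * t) + 0.05 * np.random.randn(t.size)
omega_g_filtered = filtfilt(b, a, omega_g)   # zero-phase filtering (offline only)
```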
Basic Wind Turbine Control Issues

Wind turbine control methods depend on the turbine configuration. A horizontal-axis wind turbine can be "upwind", if the rotor is on the upwind side of the tower, or "downwind". This configuration affects the choice of the controller and the turbine dynamics, and thus the structural design. Wind turbines may also be variable pitch or fixed pitch, meaning that the blades may or may not be able to rotate along their longitudinal axes. The fixed-pitch strategy is less common in large wind turbines, due to the reduced ability to control loads and adapt the aerodynamic torque. On the other hand, variable-pitch turbines allow their blades to rotate along the pitch axis, thus modifying the aerodynamic characteristics. Moreover, wind turbines can be variable speed or fixed speed. Variable-speed turbines work closer to their maximum aerodynamic efficiency for a higher percentage of the time, but require electrical power converters and inverters for feeding the generated electricity into the grid at the proper frequency.

The effectiveness of the wind turbine power capture is described by Figure 8, which shows an example power curve for a variable-speed wind turbine of several kW of rated power. When the wind speed is low (usually below 6 m/s), the available wind power is lower than the turbine system losses, and the turbine is not working. This operational region is sometimes known as Region 1. When the wind speed is high, i.e., in Region 3 (above 11.7 m/s), the power is limited to avoid exceeding safe electrical and mechanical load limits.

The main difference between fixed-speed and variable-speed wind turbines appears for mid-range wind speeds, i.e., in the Region 2 depicted in Figure 8, which includes wind speeds between 6 and 11.7 m/s. Except for one design operating point (about 10 m/s), a variable-speed turbine captures more power than a fixed-speed turbine. The reason for this discrepancy is that variable-speed turbines can operate at maximum aerodynamic efficiency over a wider range of wind speeds than fixed-speed turbines. This difference can be up to about 150 kW in Region 2 [3]. Typical wind speed conditions allow a variable-speed turbine to capture about 2.3% more energy per year than a constant-speed turbine, which represents a significant difference in the wind industry.

Figure 8 does not show the "high wind cut-out", i.e., the wind speed above which the turbine is powered down and stopped to avoid excessive operating loads. High wind cut-out typically occurs at wind speeds higher than 20–30 m/s for large turbines. Even an ideal wind turbine is not able to capture all the power available in the wind. This limitation is described by the actuator disc theory, which shows that the theoretical maximum aerodynamic efficiency (the so-called Betz limit) is approximately 60% of the available wind power [31]. In fact, the wind must have some kinetic energy remaining after passing through the rotor disc. Otherwise, the wind would be stopped, and no more wind would be able to pass through the rotor to provide energy to the turbine.
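The actuator-disc bound can be made concrete with a few lines of Python; the rotor radius is a hypothetical value, and the 16/27 factor is the standard Betz coefficient corresponding to the roughly 60% figure quoted above.

```python
import numpy as np

def wind_power(v, R=63.0, rho=1.225):
    """Power available in the wind passing through the rotor disc (R, rho are
    hypothetical values for a large turbine)."""
    return 0.5 * rho * np.pi * R**2 * v**3

v = 10.0                                 # wind speed [m/s]
P_available = wind_power(v)
P_betz = (16.0 / 27.0) * P_available     # actuator-disc (Betz) upper bound, ~59.3%
print(f"{P_available/1e6:.2f} MW available, {P_betz/1e6:.2f} MW extractable at most")
```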
It is worth noting that the relation in Equation (14) assumes that the wind speed is uniform across the rotor plane. However, as indicated by the "instantaneous wind field" in Figure 9, the wind input can vary substantially in space and time as it approaches the rotor plane. The deviations of the wind speed from the expected nominal wind speed across the rotor plane are considered as disturbances for the control design. It is virtually impossible to obtain a good measurement of the wind speed encountering the blades, because of the spatial and temporal variability and also because the rotor interacts with and changes the wind input. Not only does turbulent wind cause the wind to be different for the different blades, but the wind speed input is also different at different positions along each blade. This issue represents an important aspect in the design of sustainable control solutions for large rotor wind turbines, as shown e.g., in [4,32,33].

Wind turbines can implement several levels of control, which are called "supervisory control", "operational control", and "subsystem control" [4,33]. The top-level supervisory control determines when the turbine starts and stops in response to changes in the wind speed, and also monitors the health of the turbine. The operational control determines how the turbine achieves its control objectives in Regions 2 and 3. The subsystem controllers cause the generator, power electronics, yaw drive, pitch drive, and other actuators to perform as desired. The operational control loops and the controllers are shown in Figure 9, and they exploit the submodels described in Section 2. In particular, the main control objectives, which are recalled in Section 3.1, will be exploited for illustrating the pitch and torque controllers in Section 4. These aspects represent key issues for the design of advanced and sustainable control solutions, as shown in Sections 5 and 5.1.

Main Control Objectives

In Region 2, a variable-speed wind turbine should maximise the power coefficient, and in particular the C_Q map included in the expression of Equation (12). Moreover, as remarked in Section 2.4, this power coefficient is a function of the turbine tip-speed ratio λ, defined as the ratio of the linear (tangential) speed of the blade tip and the wind speed; both v and ω_r are always time-varying. The relationship between C_Q and the tip-speed ratio λ is a nonlinear function depending on the particular turbine. As already remarked, C_Q also depends on the blade pitch angle β in a nonlinear way, and these relationships have the same basic shape for most modern wind turbines. An example of a C_Q surface is shown in Figure 10 for a generic wind turbine.
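As a simple illustration of how the optimal operating point is extracted from such a map, the following sketch searches a toy C_Q(λ, β) surface for its maximum; the analytic surface is purely illustrative and does not correspond to any specific turbine.

```python
import numpy as np

# Toy C_Q map over (lambda, beta); in practice it is tabulated from
# aero-elastic simulations as discussed in Section 2.4.
lam_grid = np.linspace(1.0, 15.0, 141)
beta_grid = np.linspace(0.0, 25.0, 51)
LAM, BETA = np.meshgrid(lam_grid, beta_grid, indexing="ij")
C_Q_tab = 0.09 * np.exp(-((LAM - 8.0) / 3.0) ** 2) * np.exp(-BETA / 5.0)

i, j = np.unravel_index(np.argmax(C_Q_tab), C_Q_tab.shape)
lam_opt, beta_opt, C_Q_max = lam_grid[i], beta_grid[j], C_Q_tab[i, j]
print(lam_opt, beta_opt, C_Q_max)   # for this toy surface: about 8.0, 0.0, 0.09
```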
Figure 10 highlights that a turbine operates at its highest aerodynamic efficiency point, C_max, only for a specific pitch angle and tip-speed ratio. The pitch angle is quite easy to control, and can be suitably maintained at the optimal efficiency point. However, the tip-speed ratio depends on the wind speed v and is therefore always varying. Therefore, in Region 2 the control objective amounts to tracking the wind speed by varying the turbine speed. Section 4 will explain how this control objective can be achieved by using a simple scheme. On the other hand, the control in Region 3 is typically achieved using a separate pitch control loop, as sketched in Figure 5 of Section 2. In Region 3, the control objective is to limit the turbine power in order to maintain both the electrical and the mechanical loads at safe levels. This power limitation can also be obtained by yawing the turbine out of the wind, in order to reduce the aerodynamic torque.

Note that the power P is related to the rotor speed ω_r and the aerodynamic torque T_a by the simple relation P = T_a ω_r. If the power and rotor speed are held constant, the aerodynamic torque must also be constant, even if the wind speed varies. The maximal power that can be safely produced by a wind turbine is defined as the rated power. In this case, in Region 3 the pitch controller regulates the rotor speed ω_r to the turbine "rated speed" so that the turbine operates at its rated power. It is worth noting that the wind turbine blades may be controlled to all turn collectively or to each turn independently or individually. As outlined in Section 2.2, suitable pitch systems can be used to change the aerodynamic torque from the wind input, and are often fast enough to be of interest to the control community. Variable-pitch systems can limit their power either by pitching to "stall" or to "feather". A discussion of the features of the collective or individual pitch control strategies is beyond the scope of this paper, but more information can be found e.g., in [6,18].

Basic Feedback Control Solutions for Wind Turbines

This section considers the basic control strategies that are typically used for the torque control and the pitch control blocks in Figure 5 of Section 2. As depicted in Figure 5, both control loops typically use only rotor speed feedback. The other sensors and measurements acquired from the wind turbine can be used for advanced control purposes, as outlined in Section 5.1.
It is worth noting that this paper considers the baseline control schemes already addressed e.g., in [4], as they represent the starting point for the development of sustainable control methodologies, as summarised e.g., in [15,34,35]. On the other hand, other advanced control strategies are recalled in Section 5. Therefore, by considering the baseline controllers addressed in [4], Figure 8 shows that the nominal operating condition is maintained to satisfy different demands below and above a certain wind speed. In this case, by following the classical control approach, the control task is divided into the design of multiple separate compensators, as highlighted in Figure 11.

Figure 11. Switching control structure.

Therefore, according to Figure 11, a switch reconfigures the control system to the current operating objectives between Regions 2 and 3. The design of the complete wind turbine controller can be divided into four basic control design steps, as listed below:
1. Controller operating in partial load condition: it refers to the design of the generator torque controller. This controller operates in the partial load region (Region 2), and should maximise the energy production while minimising mechanical stress and actuator usage;
2. Controller operating in full load condition: it concerns the speed controller and the power controller. These controllers operate in the full load region (Region 3), and should track the rated generator speed while limiting the output power;
3. Bumpless transfer: it describes the design of the mechanism that eliminates bumps on the control signals when switching between the controllers in the partial load and full load regions;
4. Structural stress damper: it regards the design of the structural and drivetrain stress dampers. The purpose of this module is to dampen drivetrain oscillations and reduce the structural stress that could affect the wind turbine tower.

The first two items represent the main control loops, whilst the other two tasks concern advanced control issues, which can enhance both the control and the system performances. In this way, the strategy of the complete controller of Figure 11 is to use two different controllers for the partial and the full load regions. When the wind speed is below the rated value, the control system should maintain the pitch angle at its optimal value and control the generator torque in order to achieve the optimal tip-speed ratio (switches set to Region 2). Above the rated wind speed, the output power is kept constant by pitching the rotor blades, while using a power controller that manipulates the generator torque around a constant value to remove steady-state errors on the output power. This behaviour is obtained by setting the two switches in Figure 11 to Region 3. In both regions a drivetrain stress damper is exploited to actively dampen drivetrain oscillations. Together, the two sets of controllers are able to solve the control task of tracking the ideal power curve of Figure 8.
In order to switch smoothly between the two sets of controllers, a bumpless transfer mechanism is implemented. It is worth noting that, in order to manage the transition between Region 2 and Region 3, an additional control region called Region 2.5 is considered [12]. The primary goal of the Region 2.5 control strategy is to connect the Region 2 and Region 3 controllers linearly. Unfortunately, this linear connection does not result in smooth transitions, and the discontinuous slopes in the torque control curve can contribute to excessive loading on the turbine. Therefore, this issue motivates the bumpless transfer recalled in Section 4.4. On the other hand, different nonlinear controllers were proposed by the same author e.g., in [34].

Partial Load Controller

At low wind speeds, i.e., in partial load operation, variable-speed control is implemented to track the optimal point on the C_Q-surface for maximising the generated power. The speed of the generator is controlled by regulating the torque on the generator itself through the generator torque controller. In partial load operation the wind turbine is operated at β = 0°, since the maximum power coefficient is obtained at this pitch angle. This means that the highest efficiency is achieved for λ = λ_opt, where λ_opt is the tip-speed ratio maximising the C_Q-value for β = 0° and ω_r,opt is the corresponding optimum rotor speed. In order to obtain the optimal tip-speed ratio, a method is used that applies a generator torque prescribed as a function of the generator speed [36]. The advantage of this approach is that only the measurement of the rotor speed or of the generator speed is required. When utilising this approach, the controller structure for partial load operation is illustrated in Figure 12.
Figure 12. Generator torque controller for operation in the partial load region (Region 2).

The principle of the standard control law is to solve the definition of the tip-speed ratio for the wind speed and to replace it into the expression for the aerodynamic torque in Equation (14). Hence, a relation can be obtained that expresses the required generator torque based on the maximum power coefficient and the optimal tip-speed ratio; this expression is then inserted into Equation (14) describing the aerodynamic torque. Since the wind turbine includes a transmission system, the gear ratio and the friction components of the drivetrain have to be considered when determining the generator torque corresponding to a certain aerodynamic torque. In order to describe the generator torque only as a function of the generator speed, the system has to be assumed in steady state, where ω̇_r(t) = ω̇_g(t) = 0 and ω_g(t) = n_g ω_r(t). In this way, by considering the drivetrain equations in Equation (5), the resulting torque control law is obtained. It is worth noting that the term depending on ω_g represents the compensation of the frictions in the drivetrain, whose effects are not usually included in traditional control designs, proposed e.g., in [12,36], thus motivating the need for the advanced modelling aspects addressed in this paper.
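A hedged sketch of the resulting Region 2 torque law is given below, assuming the torque-coefficient convention introduced in Section 2.4 and an ideal gearbox in steady state; all numerical parameters, as well as the optional viscous-friction compensation term, are illustrative placeholders rather than values from [4] or [36].

```python
import numpy as np

def partial_load_torque(omega_g, R=63.0, rho=1.225, C_Q_max=0.05,
                        lam_opt=8.0, n_g=97.0, b_dt=0.0):
    """Region-2 generator torque demand T_g,d = K * omega_g^2 (plus an optional
    drivetrain-friction compensation term), obtained by eliminating the wind
    speed through lambda_opt as described in the text. All numerical values
    are hypothetical; K follows the torque-coefficient convention used above."""
    K = 0.5 * rho * np.pi * R**5 * C_Q_max / (lam_opt**2 * n_g**3)
    return K * omega_g**2 + b_dt * omega_g   # b_dt: viscous friction, if compensated

# Example: torque demand at a generator speed of 100 rad/s.
print(round(partial_load_torque(100.0), 1))
```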
Full Load Operation Controller

In full load operation, the desired operation of the wind turbine is to keep the rotor speed and the generated power at constant values for high wind speeds. The main idea is to use the pitch system to control the efficiency of the aerodynamics, while applying the rated generator torque. However, in order to improve the tracking of the power reference and cancel steady-state errors on the output power, a power controller is also introduced. Therefore, this section recalls the design of both the speed and the power controllers, whose structure is shown in Figure 13.

The wind speed is considered the disturbance input to the system. However, higher-frequency components, such as the resonant frequency of the drivetrain, are also apparent on the measured generator speed. Therefore, the measured generator speed is band-stop filtered before it is fed to the controller, to remove the drivetrain eigenfrequency from the measurement. This solution is also found in other wind turbine control schemes to mitigate the effects of structural oscillations and loads by injecting suitable signals into the control loops [37].

In the following, the design of the speed controller and of the power controller is summarised. With reference to the speed controller of Figure 13, it is implemented as a standard PI controller that is able to track the speed reference and cancel possible steady-state errors on the generator speed. It is worth noting that the standard PI regulator represents the baseline controller exploited for the basic control of general wind turbines, as described in [4]. Moreover, it has been shown that its simple structure can be easily integrated with more advanced control strategies, in order to achieve more complex control performances, as shown e.g., in [34]. The speed controller transfer function D_s(s) is a standard PI law parameterised by the proportional gain K_ps and the reset rate of the integrator T_is. It can be shown that pitching the blades has a larger influence on the aerodynamic torque at higher wind speeds. For this reason, the gain K_ps of the speed controller should be large near the rated wind speed but smaller at higher wind speeds [7]. The optimal gain of the speed controller associated with a certain wind speed can make the system unstable at higher wind speeds, due to the increasing gain of the system. Therefore, the speed controller is configured with one set of parameters in the region corresponding to stationary wind speeds in the interval 12–15 m/s, while a smaller gain is utilised for the region covering wind speeds of 15–25 m/s. Although the system has different gains in these two working regions, it is possible to design the controllers so that similar transient responses of the controlled system are obtained.
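A minimal discrete-time version of such a gain-scheduled PI speed controller is sketched below; the gains, the limits and the interpretation of T_is as an integral (reset) time are assumptions made only for illustration.

```python
class PISpeedController:
    """Discrete-time PI speed controller for full-load (Region 3) operation,
    with two gain sets selected according to the wind-speed range, as described
    in the text. Gains, limits and the reset-time interpretation of T_is are
    hypothetical placeholders."""

    def __init__(self, dt):
        self.dt = dt
        self.integral = 0.0
        # (K_ps, T_is) near rated wind (12-15 m/s) and at high wind (15-25 m/s):
        self.gain_sets = {"near_rated": (0.02, 5.0), "high_wind": (0.01, 5.0)}

    def step(self, omega_g_meas, omega_g_ref, region):
        K_ps, T_is = self.gain_sets[region]
        error = omega_g_meas - omega_g_ref     # overspeed -> positive error -> more pitch
        self.integral += error * self.dt
        beta_d = K_ps * (error + self.integral / T_is)
        return min(90.0, max(0.0, beta_d))     # pitch demand saturated to [0, 90] deg

ctrl = PISpeedController(dt=0.01)
beta_demand = ctrl.step(omega_g_meas=125.0, omega_g_ref=122.9, region="near_rated")
```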
On the other hand, with reference to the power controller of Figure 13, it is implemented again as a standard PI regulator in order to remove possible steady-state errors on the output power. This suggests using slow integral control for the power controller, as this will eventually cancel steady-state errors on the output power without interfering with the speed controller. However, it may be beneficial to make the power controller faster to improve the accuracy in the tracking of the rated power. The power controller is thus realised as a PI controller, whose transfer function D_p(s) is parameterised by the proportional gain K_pp and the reset rate of the integrator T_ip. Exploiting the measured output power directly can be a problem, since the measurement is very noisy. This means that the measurement noise has to be taken into account in the design, and implies that the proportional gain has to be sufficiently small. The proportional gain is usually chosen using a trial-and-error approach, while the reset rate is selected large enough to avoid overshoot in the step response.

Structural and Drivetrain Stress Damper

Active stress damping solutions are deployed in large horizontal-axis wind turbines to mitigate fatigue damage due to drivetrain and structural oscillations and vibrations. The idea is to add proper components to the wind turbine control signals to compensate for the oscillations in the drivetrain and the tower vibrations. These signals should have frequencies equal to the eigenfrequencies of the drivetrain and the wind turbine structure, which can be found by filtering the measurements of the generator speed and the generated power. When the outputs from these filters are added to the generator torque and the pitch command, the phase of the filters must be zero at the resonant frequency to achieve the desired damping effects. These oscillation and vibration dampers are thus implemented to add compensating signals, as shown in Figure 13. Second-order filter structures for the stress and structural damping have been proposed and can be applied to dampen the eigenfrequencies of both the drivetrain and the tower structure [37]. In general, the filter time constant introduces a zero in the filter that can be used to compensate for time lags in the system. To determine the gain of the filter, the root loci are plotted for the transfer functions from the wind turbine inputs to its outputs. More details on the design of these filters, which are beyond the scope of this paper, can be found in [37,38]. Note that, due to the higher loads at higher wind speeds, it is favourable if the filter gains depend on the point of operation. A simple way of fulfilling this property is to apply different gains in the partial and full load configurations of the wind turbine controller. Therefore, Section 4.4 outlines the bumpless transfer issue, which must ensure that no bumps appear on the control signals when switching between the two different working regions.
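Before turning to the bumpless transfer, the following sketch shows one possible stand-in for such a damping filter: a second-order band-pass centred on a hypothetical drivetrain eigenfrequency, whose scaled output would be added to the generator torque demand. The filter structures and gains of [37,38] are not reproduced; this is only meant to convey the idea.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 100.0    # controller sampling frequency [Hz]
f_dt = 1.7    # hypothetical drivetrain eigenfrequency to be damped [Hz]

# An order-1 Butterworth band-pass around f_dt yields a second-order filter
# whose phase is approximately zero at the centre frequency.
b, a = butter(N=1, Wn=[0.8 * f_dt, 1.2 * f_dt], btype="bandpass", fs=fs)
gain_damper = 500.0                     # hypothetical damper gain [Nm*s/rad]

t = np.arange(0.0, 30.0, 1.0 / fs)
omega_g = 160.0 + 0.4 * np.sin(2 * np.pi * f_dt * t)   # oscillating speed signal
T_damp = gain_damper * lfilter(b, a, omega_g - omega_g.mean())
# T_damp would be added to the generator torque demand to damp the oscillation.
```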
Bumpless Transfer

The purpose of this section is to recall how the bumpless transfer mechanism is designed, i.e., how and when to activate the switch illustrated in Figure 13. The considered transition is the one that brings the control system from partial load operation to full load operation, and vice versa. When the control system switches from partial load to full load operation, it is important that this transition does not affect the control signals, i.e., the generator torque and the pitch angle. This procedure is known as bumpless transfer, and it is important because the two controllers may not be consistent in the magnitude of the control signal at the time the transition happens. If a switch between two controllers is performed without bumpless transfer, a bump in the control signal may trigger oscillations between the two controllers, making the system unstable.

The transition from partial to full load operation must happen as the wind speed becomes sufficiently large. For stationary wind speeds this usually happens at about 12 m/s. However, it is not convenient to use the wind speed as the switching condition, since the large inertia of the rotor causes the generator speed and the output power to follow a rise in the wind speed significantly later. Moreover, the wind speed is in practice not accurately known. Therefore, it is more appropriate to exploit the generator speed as the switching condition. In particular, the switching from partial to full load condition is performed when the generator speed ω_g(t) is greater than the nominal generator speed. On the other hand, the switching from full to partial load condition is applied if both the pitch angle β(t) is lower than its optimal value and the generator speed ω_g(t) is significantly lower than its nominal value. Notice that a hysteresis is usually introduced to ensure a minimum time between transitions. Due to the switching condition on β(t), and because the output of the speed controller is saturated so that it does not move below 0, the transition already fulfils the bumpless transfer condition for this control signal. On the other hand, for the generator torque signal a bumpless transfer is ensured by adjusting the integral state of the controller, such that the generator torque does not change abruptly. The compensation torque is calculated using the expression of Equation (28): T_g,comp(k) = T_g,1(k) − T_g,2(k) + T_g,comp(k−1), where T_g,1(k) and T_g,2(k) are the torque outputs at the sample k at which the switching occurs from controller 1 to controller 2, and T_g,comp(k) represents the compensation torque ensuring a bumpless transfer. Note finally that the torque compensation is not important when operating above the rated wind speed, because the power controller has integral action. When operating below the rated wind speed, the compensation torque is discharged to zero, as it would otherwise result in the optimal tip-speed ratio not being followed.
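The switching condition and the compensation torque of Equation (28) can be summarised in the following sketch; the hysteresis is implemented here as a simple speed margin, whereas the text above mentions a minimum time between transitions, and all thresholds are hypothetical.

```python
def compensation_torque(T_g_1, T_g_2, T_comp_prev):
    """Equation (28): T_g,comp(k) = T_g,1(k) - T_g,2(k) + T_g,comp(k-1),
    evaluated at the sample k where the switch from controller 1 to 2 occurs."""
    return T_g_1 - T_g_2 + T_comp_prev

def select_region(omega_g, beta, region_prev, omega_g_nom, beta_opt, margin=0.05):
    """Switching logic described in the text: go to full load when omega_g exceeds
    its nominal value; return to partial load when beta is below its optimal value
    and omega_g is significantly below nominal (here: by a hypothetical 5% margin)."""
    if region_prev == "partial" and omega_g > omega_g_nom:
        return "full"
    if (region_prev == "full" and beta < beta_opt
            and omega_g < (1.0 - margin) * omega_g_nom):
        return "partial"
    return region_prev
```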
As turbines get larger and blades get longer, turbine manufacturers may build turbines that allow different pitch angles at different radial positions along the blades, relative to the standard blade twist angle. In this case, separate actuators and controllers may be necessary, opening up even more control opportunities [14,46,47]. Research has also been focussing on actuators other than the pitch motor to modify the aerodynamics of the turbine blade. For example, micro-tabs and small valves that allow pressurised air to flow out of the blade may alter the air flow across the blade, thus modifying the lift and drag coefficients and providing further control solutions [48]. Note finally that the need for advanced control solutions for these safety-critical and very demanding systems is also motivated by the requirements for reliability, availability, and maintainability, in addition to the required power conversion efficiency. These issues have therefore begun to stimulate research and development of so-called sustainable control (i.e., fault-tolerant control) [9], as outlined in Section 5.1, in particular for wind turbine applications [4]. Sustainable Control Issues In general, wind turbines in the megawatt size are expensive, and therefore their availability and reliability must be high in order to maximise the energy production. This issue is particularly important for offshore installations, where Operation and Maintenance (O & M) services have to be minimised, since they represent one of the main factors of the energy cost. The capital cost, together with the wind turbine foundation and installation, determines the basic term in the cost of the produced energy, which constitutes the energy "fixed cost". O & M represents a "variable cost" that can increase the energy cost by up to about 30%. At the same time, industrial systems have become more complex and expensive, with less tolerance for performance degradation, productivity decrease and safety hazards. This also leads to ever increasing requirements on the reliability and safety of control systems subjected to process abnormalities and component faults. As a result, the Fault Detection and Diagnosis (FDD) or Fault Detection and Isolation (FDI) tasks are extremely important, as is the achievement of fault-tolerant features to minimise possible performance degradation and avoid dangerous situations. The advent of computerised control, communication networks and information technology makes it possible to develop novel real-time monitoring and fault-tolerant design techniques for industrial processes, but it also brings challenges [9]. Several works have recently been proposed on wind turbine FDI/FDD, see e.g., [27,28,49,50]. On the other hand, the FTC problem has recently been considered with reference to offshore wind turbine benchmarks, e.g., in [4,15], which motivated several issues described in this work. In general, FTC methods are classified into two types, i.e., the Passive Fault Tolerant Control (PFTC) scheme and the Active Fault Tolerant Control (AFTC) scheme [51]. In PFTC, controllers are fixed and are designed to be robust against a class of presumed faults. In contrast, AFTC reacts to system component failures actively by reconfiguring control actions so that the stability and acceptable performance of the entire system can be maintained. In particular for wind turbines, FTC designs were compared in [4,15].
These processes are nonlinear dynamic systems, whose aerodynamics are nonlinear and unsteady, whilst their rotors are subject to complicated turbulent wind inflow fields driving fatigue loading. The so-called wind turbine sustainable control therefore represents a complex and challenging task [14,15]. The purpose of this final section is to outline the basic solutions to sustainable control design, which are capable of handling faults affecting the controlled wind turbine. They start from the basic control solutions outlined in Section 4. For example, changed dynamics of the pitch system due to a fault cannot be accommodated by signal correction [4]. Therefore, such a fault should be considered in the controller design to guarantee stability and a satisfactory performance. Among the possible causes of changed dynamics of the pitch system is a change in the air content of the hydraulic system oil. This fault is considered because it is the most likely to occur, and because the reference controller becomes unstable when the hydraulic oil has a high air content. Another issue arises when the generator speed measurement is unavailable and the controller has to rely on the measurement of the rotor speed, which is contaminated with much more noise than the generator speed measurement. This makes it necessary to reconfigure the controller to obtain a reasonable performance of the control system [4,15]. Section 5.2 briefly recalls the main issues of active and passive fault-tolerant control systems and how they have been applied to wind turbine systems. Active and Passive Fault-Tolerant Control Systems In order to outline and compare the controllers developed using active and passive fault-tolerant design approaches, they should be derived using the same procedures in the fault-free case, as shown in Section 4. In this way, any differences in their performance or design complexity would be caused only by the fault tolerance approach, rather than by the underlying controller solutions. Furthermore, the controllers should manage the parameter-varying nature of the wind turbine along its nominal operating trajectory, caused by the aerodynamic nonlinearities. In order to comply with these requirements, the controllers are usually designed using, for example, Linear Parameter-Varying (LPV) modelling or fuzzy descriptions [4,15]. The two basic sustainable control solutions have different structures, as shown in Figure 14. Note that only the active fault-tolerant controller (AFTC) relies on a fault diagnosis algorithm (FDD); this represents the main difference between the two control schemes. An active fault-tolerant controller relies on a fault diagnosis system, which provides information about the faults f to the controller [51]. In the considered case, the fault diagnosis system FDD provides the estimate of the unknown input (fault) affecting the system under control. The knowledge of the fault f allows the AFTC to reconfigure the current state of the system. Moreover, the FDD is able to improve the controller performance even in fault-free conditions, since it can compensate, e.g., modelling errors, uncertainty and disturbances. The PFTC scheme, on the other hand, does not rely on a fault diagnosis algorithm, but is designed to be robust towards any possible faults.
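The AFTC principle just outlined, in which an FDD block estimates the fault f and the controller uses that estimate to reconfigure, can be illustrated with a simplified sketch. The first-order plant, the additive actuator fault and the observer gains below are hypothetical simplifications for illustration only; they are not the wind turbine benchmark design of [4,15].

```python
# Schematic illustration of the AFTC idea: an FDD block estimates an additive actuator
# fault and the nominal control action is corrected by that estimate.
# Plant, fault and gains are hypothetical.

a, b = -0.5, 1.0            # toy first-order plant: x' = a*x + b*(u + f)
dt, k_ctrl, k_obs = 0.01, 2.0, 5.0

def fdd_estimate(x_meas, x_hat, f_hat, u):
    """Simple disturbance-observer style estimator of the additive fault f."""
    x_hat_dot = a * x_hat + b * (u + f_hat) + k_obs * (x_meas - x_hat)
    f_hat_dot = k_obs * b * (x_meas - x_hat)   # adaptation law for the fault estimate
    return x_hat + dt * x_hat_dot, f_hat + dt * f_hat_dot

x, x_hat, f_hat = 0.0, 0.0, 0.0
x_ref = 1.0
for k in range(2000):
    fault = 0.8 if k * dt > 10.0 else 0.0      # actuator offset fault appears at t = 10 s
    u_nom = k_ctrl * (x_ref - x)               # nominal controller (fault-free design)
    u = u_nom - f_hat                          # AFTC reconfiguration: cancel the estimated fault
    x += dt * (a * x + b * (u + fault))        # plant update
    x_hat, f_hat = fdd_estimate(x, x_hat, f_hat, u)

print(f"final state = {x:.3f}, estimated fault = {f_hat:.3f}")
```

A passive scheme would instead keep the nominal law unchanged and rely on robustness margins large enough to tolerate the fault, which is the distinction developed in the next section.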
This is accomplished by designing a controller that is optimised for the fault-free situation, while satisfying some graceful degradation requirements in the faulty cases. Moreover, with respect to robust control design, the PFTC strategy provides reliable controllers that guarantee the same performance with no risk of false FDI or reconfiguration [51]. Figure 14. Active fault tolerance and passive fault tolerance schemes. In general, the methods used in the fault-tolerant controller designs should rely on output feedback, since only part of the state vector is measured. Additionally, they should take the measurement noise into account. Moreover, the design methods should be suited for nonlinear systems or linear systems with varying parameters. The latest proposed solutions for the derivation of both the active and the passive fault-tolerant controllers rely on LPV and fuzzy descriptions, to which the fault-tolerance properties are added, since these frameworks are able to provide stability and guaranteed performance with respect to parameter variations, uncertainty and disturbance. Additionally, LPV and fuzzy controller design methods have recently been proposed and evaluated for wind turbine applications [4,15]. To add fault-tolerance to the basic controller formulations outlined in Section 4, different approaches can be exploited. For example, the AFTC scheme can use the parameters of suitable model structures estimated by the FDD module for scheduling the controllers [4,15]. On the other hand, different approaches can be used to obtain fault-tolerance in the PFTC methods. For this purpose, the design methods described in [15,34] can be modified to cope with parametric uncertainties, as addressed e.g., in [13,14,52,53]. Alternatively, other methods such as [54], which preserves the nominal performance, could be used. Generally, these approaches rely on solving optimisation problems in which a controller is computed so as to maximise the disturbance attenuation, using Linear Matrix Inequality (LMI) tools [14,55]. Conclusions This paper analysed the most important modelling and control issues of wind turbines from system and control engineering points of view. A walk around the wind turbine control loops discussed the goals of the most common solutions and overviewed the typical actuation and sensing available on commercial turbines. The work also intended to provide an updated and broader perspective by covering not only the modelling and control of individual wind turbines, but also outlining a number of areas for further research and anticipating new issues that can open up new paradigms for advanced control approaches. In summary, wind energy is a fast growing industry, and this growth has led to a large demand for better modelling and control of wind turbines.
Uncertainty, disturbance and other deviations from the normal working conditions of wind turbines make the control task challenging, thus motivating the need for advanced modelling and for a number of so-called sustainable control approaches that should be explored to reduce the cost of wind energy. Enabling this clean, renewable energy source to reliably meet a substantial share of the world's electricity needs would be a major step toward meeting the world's future energy requirements. The wind resource available worldwide is large, and much of the world's future electrical energy needs could be provided by wind energy alone if the technological barriers are overcome. The application of sustainable control to wind energy systems is still in its infancy, and there are many fundamental and applied issues that can be addressed by the systems and control community to significantly improve the efficiency, operation, and lifetimes of wind turbines. Conflicts of Interest: The authors declare no conflict of interest.
237946270
s2orc/train
v2
2021-09-28T13:29:52.154Z
2021-09-28T00:00:00.000Z
Effect of Working After Retirement on the Mental Health of Older People: Evidence From China There is little empirical research on the effect of working after retirement on the mental health of the older adults in China. To fill this gap in the literature, this study examines the effects of working after retirement on the mental health of the older adults using data from the China Family Panel Studies. We employed the methods of ordinary least squares, ordered logit, and propensity score matching–difference in differences (PSM–DID). Results show that working after retirement is negatively related to mental health of the older adults in China. The deterioration effect of post-retirement work mainly impacts those aged over 60 years, women, and those with lower education background, urban household registration, higher pension, and higher social status. Working after retirement is negatively related to mental health through the mediating effects of deteriorating interpersonal relationships and lower positive attitude. It is necessary to consider mental health effects and their population differences to evaluate the impact and improve the quality of policies of active aging. INTRODUCTION As the millennial generation of people born after the 1990s has been entering the labor market, the baby-boomers born in the 1960s have been reaching retirement age and withdrawing from the labor market. As a result, the scale of the working-age population is gradually decreasing, the burden of pension increasing, and the problem of aging becoming increasingly serious. According to data from China's National Bureau of Statistics, by the end of 2020, 264 million Chinese were aged 60 years and above, accounting for 18.7% of the total population-an increase of 5.38% from 2010. Furthermore, there were 190.64 million people aged 65 years and above, an increase of 4.63% from 2010. According to the United Nations (1), the proportion of the population aged 60 and above in China will reach 30-40% by 2050, when China will have the largest number of older people among all countries and the fastest aging level. In this context, encouraging the older adults to extend their working life and improving their labor participation rate have become an inevitable choice for China, through active aging measures and delayed retirement policy. Then whether and how continuing to work after retirement affects the older adults' mental health is the questions we are interested in. The answers are related not only to the welfare of the older adults, but also to the development of human resources for this subpopulation and the implementation of a strategy of active aging, healthy China, and a better quality of life. Some studies have found that working in old age has a positive impact on mental health (2)(3)(4). These studies have shown that the underlying mechanism is that work is a symbol of personal identity and status, and that leaving the labor market means losing identity or lowered status, which may reduce the level of mental health. Moreover, the social support theory holds that opportunities for social participation and the social support level of the older adults are likely to decline after retirement, leading to adverse effects on their health. On the one hand, extending working life and thereby increasing social participation and social support may lead to better mental health of the older adults after retirement (3,5,6). 
On the other hand, continuing to work in later years enables many older people to continue to work as they did in middle age, which helps to maintain their sense of meaning and goals in life, and thus, improves the level of their mental health (7,8) and lowers their mortality rate, especially for those who engage in paid work (9,10). However, some studies have found that work may cause the mental health of the older adults to deteriorate, and retirement helps to improve their physical and mental health. The main reasons are as follows. First, the older adults have more flexible use of time after retirement, and they are free to engage in activities besides work, such as exercise and volunteer service, which could improve their health status (11)(12)(13). Second, retirement relieves the pressure of work, and living a comfortable life is conducive to the physical and psychological health of the older adults (14,15). In other words, old age is a time for people to live as they please. The older adults cannot continue to work in the same position after retirement but they may be forced to find other work, which would worsen their physical and psychological health. Although there are many studies on the relationship between older adults' work and mental health, there are no consensus, and few have examined this topic in China. China's situation is unique, because retirement in China has some degree of mandatory characteristics. In-depth research is required on how re-employment after retirement affects the mental health of the older adults. A few studies, such as Cheng et al. (16), which is based on data from Ningbo City, have found that re-employment after retirement has a positive impact on the mental health of the older adults. However, Huang and Yu (17) found that the mental health level of retired people did not change significantly because of continuing to work. There are no consistent conclusions. Furthermore, these studies either are based on regional samples or do not conduct in-depth analysis of the impact mechanism and do not consider the endogeneity problem. To fill these research gaps, this study empirically tests the mental health effect of working after retirement on the older adults and conducts an in-depth analysis of its impact mechanism. On the basis of the foregoing, we put forward the three competing hypotheses as follows: Hypothesis 1a: Working after retirement has no significant association with depressive symptoms in older adults. Hypothesis 1b: Working after retirement will worsen the depressive symptoms of older adults. Hypothesis 1c: Working after retirement will benefit the depressive symptoms of older adults. Some studies have found that the impact of working after retirement on psychological well-being was heterogeneous in terms of individual characteristics (18,19). Moreover, analysis of the effects on different groups is necessary. Therefore, the following hypothesis can be proposed: Hypothesis 2: The impact of working after retirement on the depressive symptoms of older adults is heterogeneous in different groups. Further, we would like to explore the mechanism behind the relationship between reemployment and the mental health of older adults. According to activity theory and studies on retirement, work may help the elderly increase income, increase the opportunities for interpersonal communication (3,5,20), and maintain a positive attitude, which is conducive to one's mental health. 
Thus, we hypothesize the following: Hypothesis 3: Working after retirement will affect the depressive symptoms of older adults by affecting their financial income, interpersonal relationships, and self-rated confidence in the future. Data The data used in this study are from the CFPS in 2018. The CFPS is a large-scale survey conducted by the Social Survey Center of Peking University. A multi-stage probability proportional to size strategy with implicit stratification was performed in the sampling process that comprises three stages: county level as the primary sampling unit, a community or village for the second-stage sampling unit, and household as the final sampling unit (21). The survey covers 25 provinces, municipalities, and autonomous regions, representing 95% of the population on the Chinese mainland. In 2018, the survey included 29,478 adults. The data contain rich information about the retirement, work, and mental state of older adults, which is suitable for the analysis in this study. We carry out a series of processing to the original data. First, the sample is limited to urban respondents who have gone through retirement procedures or have received pensions. Second, observation values of those aged <45 years and over 80 years are excluded from the sample construction. The reasons are as follows. First, the legal minimum retirement age in China is 50 years for women, and the policy stipulates that a person may retire 5 years before the minimum legal age; thus, samples aged under 45 years should be excluded. Second, the proportion of people aged over 80 years who work is very small, and the level of change is miniscule. Finally, observations whose main variables are missing are deleted. The number of deleted observations is 46. The final study sample consists of 3,940 observations. Dependent Variable The dependent variable is depressive symptoms. The CFPS questionnaire includes the Center for Epidemiologic Studies Depression Scale (CES-D), which is a commonly used international scale for measuring depression (22). According to existing research, this study uses the CES-D to measure depressive symptoms. According to the CES-D scale in the 2018 CFPS data, negative emotions are classified based on answers to the following questions: "I feel depressed, " "It feels very hard to do anything, " "I don't sleep well, " "I feel lonely, " "I feel sad, " and "I can't continue my life." The answers correspond to four options: almost never (<1 day a week), sometimes (1-2 days), often (3-4 days), and most of the time (5-7 days), which are assigned values of 1, 2, 3, and 4, respectively. "I feel happy" and "I live a happy life" reflect positive emotions, which are assigned in reverse. The total score of the CES-D ranges between 8 and 32. The higher the score, the worse the mental health, and vice versa. Independent Variable The core independent variable is "working after retirement, " that is, whether the person continues to work or finds new work after retirement. If the respondents have gone through the retirement procedures but still work, the variable is set as 1, whereas if the respondents have retired and quit the labor market, it is set as 0. Covariates All regression specifications are adjusted for several covariates that may confound the estimates of the effect of work after retirement on mental health. This study selects the individual characteristics, family characteristics and socioeconomic characteristics of the respondents as control variables. 
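The construction of the dependent variable described above can be summarised in a short sketch: the six negative CES-D items are summed with the two reverse-coded positive items, each scored 1 to 4, giving a total between 8 and 32. The column names below are hypothetical placeholders rather than the actual CFPS variable codes.

```python
# Minimal sketch of the CES-D scoring described above: six negative items plus two
# positive items ("I feel happy", "I live a happy life") that are reverse-coded, each
# on a 1-4 scale, summed to a total between 8 and 32. Column names are hypothetical.

import pandas as pd

NEGATIVE_ITEMS = ["depressed", "everything_effort", "sleep_restless",
                  "lonely", "sad", "cannot_go_on"]
POSITIVE_ITEMS = ["happy", "enjoy_life"]

def cesd_score(df: pd.DataFrame) -> pd.Series:
    """Return the CES-D total (8-32); higher values mean worse mental health."""
    neg = df[NEGATIVE_ITEMS].sum(axis=1)
    pos = (5 - df[POSITIVE_ITEMS]).sum(axis=1)   # reverse-code: 1<->4, 2<->3
    return neg + pos

# Example with two hypothetical respondents
sample = pd.DataFrame({
    "depressed": [1, 3], "everything_effort": [1, 4], "sleep_restless": [2, 3],
    "lonely": [1, 2], "sad": [1, 3], "cannot_go_on": [1, 2],
    "happy": [4, 1], "enjoy_life": [4, 2],
})
print(cesd_score(sample))   # low score = better mental health
```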
The variables for individual characteristics include age, gender, household registration (hukou), years of education, marital status, and health status. The variables for family characteristics include family care and total number of families. The variables for socio-economic characteristics include receiving pension status, the logarithm of family per capita income, and social status. Mediating Variables If re-employment after retirement has a significant impact on the mental health of the older adults, then we need to clarify the reasons. First, work may help the older adults to increase their income. Second, according to activity theory, work may increase opportunities for interpersonal communication and help to maintain a positive attitude (20), thereby improving the health of the older adults. Based on previous related research, this study examines the mechanism of re-employment after retirement on mental health through the three aspects of financial income, interpersonal communication, and positive attitude. Financial income is measured by income selfevaluation, interpersonal relationships are measured by self-rated interpersonal relationships, and positive attitude is measured by whether the respondents have confidence in the future. Selfevaluation, self-rated interpersonal relationships, and confidence in the future all correspond to answers in the questionnaire. The respondents provide a self-evaluation using a maximum of five points and a minimum of one point. Basic Model To empirically examine the effect of working after retirement on the mental health of the older adults, the econometric model is set as follows: where CESD i is the dependent variable of focus, depressive symptoms. reemploy i is the independent variable; X i is a series of control variables, namely, personal demographic characteristics, family characteristics, and socio-economic characteristics; and ε i is a random error term. Depressive symptoms can be regarded as a continuous variable. In this study, OLS is employed for the first step in the investigation. Considering that the explained variables are ordinal variables, the ordered logit regression model is also employed. Thus, we can augment the robustness of the estimation results. Treatment of Endogenous Problems: PSM-DID Method The following endogeneity problems may bias our main results. First, there may be sample selection concerns, that is, the observations for those working after retirement and not working after retirement have heterogeneous initial conditions, as work is not randomly assigned to the retired older adults. Second, there may be unobservable omitted variables that affect both the mental health of the older adults and whether the residents work after retirement. Following Van den Broeck and Maertens (23), we combine the differences-in-differences estimation with propensity score matching (PSM-DID) to overcome these problems and check the robustness of the main results. Therefore, based on the two periods of balanced panel data constructed using CFPS data in 2016 and 2018, the study employs the PSM-DID method to re-estimate the impact of working after retirement on mental health. According to the basic principle of the PSM-DID method, the basic model is where D is the dummy variable (1 for the treatment group, 0 for the control group), T is the treatment group, C is the control group, Y 0 is the depression score of the baseline group, Y 1 is the depression score of the control group, and X represents the covariates. 
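A hedged sketch of the baseline estimation is given below: the depression score is regressed on the re-employment dummy and the control sets listed above, first by OLS and then by an ordered logit. The file name, variable names and exact specification are illustrative assumptions, not the authors' code.

```python
# Hedged sketch of the baseline estimation described above: CES-D regressed on the
# re-employment dummy plus individual, family and socio-economic controls, by OLS
# and by ordered logit. Names and the specification are illustrative placeholders.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("cfps_2018_retired.csv")   # hypothetical prepared file

controls = ("age + male + urban_hukou + edu_years + married + health + "
            "family_care + family_size + pension + log_income_pc + social_status")

# (1) OLS treating the depression score as continuous
ols_fit = smf.ols(f"cesd ~ reemploy + {controls}", data=df).fit(cov_type="HC1")
print(ols_fit.summary().tables[1])

# (2) Ordered logit treating the score as an ordinal outcome
exog_cols = ["reemploy"] + controls.replace(" ", "").split("+")
ologit = OrderedModel(df["cesd"].astype(int), df[exog_cols], distr="logit")
print(ologit.fit(method="bfgs", disp=False).summary())
```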
The key point of PSM-DID is to replace the depression score of cross-sectional data with that of panel data based on the propensity score matching (PSM). This method is similar to the quasi-experimental method, which provides an estimate with less selection bias by creating similar features between the treatment and control groups. Variables Definitions Primary variables CESD Depressive symptoms, which is measured by the depression score obtained by the CES-D scale in the CFPS, with a value range of 8-32. The higher the score, the more obvious the depressive symptoms. Self-rated social status, with a value range of 1-5; the higher the value, the higher the social status. IS Self-rated income status, with a value range of 1-5. The higher the value, the higher the income status. IR Self-rated interpersonal relationships, with a value range of 1-5; the higher the value, the better the interpersonal relationships. PA Self-rated confidence in the future, with a value range of 1-5; the greater the value, the higher the confidence in the future Mediating Mechanism Analysis Model Refer to the mediating effect test method of Wen and Ye (24), Equations (3) and (4) were constructed based on Formula (1). Here, mediator i is the mediating variable. According to the mediating effect test method of Wen and Ye (24), the first step is to test the influence of working after retirement on depressive symptoms, that is, the coefficient α 2 of Formula (1). If α 2 is significant, it indicates a mediating effect, and otherwise, a masking effect; The second step is to test γ 1 and δ 1 in turn. If all of them are significant, the mediating effect is significant; if at least one is not significant, we continue to use bootstrap and other methods for the test. The third step is to observe whether δ 2 is significant. If it is not, then the direct effect is not significant, indicating that there is only an intermediary effect but no direct effect; if δ 2 is significant, there is a partial mediating effect. A masking effect is indicated if γ 1 and δ 1 have different signs. A partial mediating effect is indicated by the magnitude of the effect |γ 1 · δ 1 /α 2 |. Table 2 presents the descriptive statistics of the main variables of the sample. On the whole, the depression scores of the whole sample, retired without work, and working after retirement are all observed at low levels; overall, the mental health status is good. In the entire sample, those who were re-employed after retirement accounted for 38% of the total sample, 76% of the total sample were aged 60 or over, and 45% of the sample were male, and 61% of the total sample was urban residence. The mean years of education in the entire sample was 7.3 years. Forty-four percent of older people had junior secondary education or less. The proportion of those who were married was approximately 85%. Seventy-five percent of older people provide home care for their children and 54% had an above average pension income. Descriptive Analysis of Variables Comparing the retired re-employed group with the retired not re-employed group, we found that older people who withdrew from the labor market after retirement had significantly lower mean depression scores. In addition, the proportion of men who retired and re-employed is higher those who retired and did not re-employ (50.7%); the proportion of agricultural hukou (household registration) is higher (67%); and the education level is lower (about 6.4 years on average). 
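The stepwise mediation test described above can be sketched as three auxiliary regressions, from which the mediated share |γ1·δ1/α2| is computed for each candidate mediator. Variable and file names are again illustrative placeholders.

```python
# Sketch of the stepwise mediation test described above: (1) total effect of
# re-employment on CES-D (alpha_2); (2) effect of re-employment on the mediator
# (gamma_1); (3) CES-D on re-employment and the mediator (delta_1, delta_2).
# The mediated share is |gamma_1 * delta_1 / alpha_2|. Names are illustrative.

import pandas as pd
import statsmodels.formula.api as smf

def mediation_share(df: pd.DataFrame, mediator: str, controls: str) -> dict:
    m1 = smf.ols(f"cesd ~ reemploy + {controls}", data=df).fit()
    m2 = smf.ols(f"{mediator} ~ reemploy + {controls}", data=df).fit()
    m3 = smf.ols(f"cesd ~ reemploy + {mediator} + {controls}", data=df).fit()
    alpha2 = m1.params["reemploy"]
    gamma1 = m2.params["reemploy"]
    delta1 = m3.params[mediator]
    delta2 = m3.params["reemploy"]
    return {"alpha2": alpha2, "gamma1": gamma1, "delta1": delta1,
            "delta2": delta2, "mediated_share": abs(gamma1 * delta1 / alpha2)}

df = pd.read_csv("cfps_2018_retired.csv")        # hypothetical prepared file
controls = "age + male + urban_hukou + edu_years + married + health + pension"
for med in ["income_status", "interpersonal", "confidence_future"]:
    print(med, mediation_share(df, med, controls))
```

Opposite signs of gamma_1 and delta_1 would point to a masking effect, while a significant delta_2 alongside a significant indirect path indicates partial mediation, matching the decision rules summarised above.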
There exists a higher proportion of people with lower than average pension. However, those who are re-employed after retirement have better health, higher per capita family income, and higher social status. They are younger (average age of 62 years), represent a higher proportion of married people, and seldom provide family care for their children. Generally, those working after retirement have better social, financial, and physical status than those who are retired and not working; however, their mental health is poorer than the latter. Table 3 reports the estimation results after adding different control variables. The OLS and ordered logit regression of models 1 and 2 do not include any control variables, models 3 and 4 add variables for individual characteristics, and models 5 and 6 add variables for family characteristics and socio-economic characteristics. As shown in Table 3, working after retirement has a negative association with the older adults' mental health. After controlling individual characteristics, family characteristics, and socio-economic characteristics, model 5 shows that re-employment after retirement significantly increases the depression score of the older adults by 0.382. These results are consistent with the results from the descriptive statistics shown in Table 2. Although the estimations indicate that working after retirement adversely impacts older adults' mental health, the estimated effect might still be biased because the working after retirement is likely to be selected and endogenous with mental health. These findings should be viewed alongside further robustness checks described below. Apart from working after retirement, several sociodemographic variables affect mental health of the older adult as well. Men are likely to have better mental health status than women. And as expected, mental health of older adults with higher education are better than those with lower education. Older adults with normal marriage are less depressed than the divorced or widowed. Older adults with urban Hukou have better mental health status than those in agricultural Hukou. Those who are healthier, have higher pension and higher social status tend to have better mental health status. Robustness Analysis: PSM-DID Estimation Results Considering there may be endogeneity problems, such as sample selection and omitted confounding variables between the retired and mental health, this study employs PSM-DID for robustness checks. In this subsection, we construct two-period panel data from the CFPS for the years 2016 and 2018. The retired who newly joined the workforce in the 2018 wave-that is, the retired who did not work in the 2016 wave but worked in the 2018 wave-are taken as the treatment group. The retired who worked neither in 2016 nor in 2018 are taken as the control group. After corresponding processing of the data, we have 177 observations of the treatment group and 1,497 observations of the control group. Then, we estimate the propensity score of work of the retired through a binary logit regression model, including the individual characteristics, family characteristics, and socioeconomic characteristics, and we match samples according to the propensity score. To ensure the validity of the PSM-DID method, we conduct additional analyses. Figure 1 shows the propensity distribution of the treated and control groups before and after matching. 
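The PSM-DID construction just described (a logit propensity score for returning to work, nearest-neighbour matching, and a difference-in-differences of depression scores on the matched sample) might be coded along the following lines. Column names, the matching rule (1-NN with replacement) and the file layout are assumptions for illustration; the authors' exact matching settings may differ.

```python
# Hedged sketch of the PSM-DID step: a logit model gives each retiree's propensity to
# return to work between the 2016 and 2018 waves; treated units are matched to
# nearest-neighbour controls on that score; the DID estimate is the change in CES-D of
# the treated minus the change of their matched controls. Names are illustrative.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("cfps_panel_2016_2018.csv")      # hypothetical: one row per person
covs = "age + male + urban_hukou + edu_years + married + health + pension + social_status"

# 1. Propensity score: probability of returning to work, from baseline covariates
ps_model = smf.logit(f"treated ~ {covs}", data=panel).fit(disp=False)
panel["pscore"] = ps_model.predict(panel)

treated = panel[panel["treated"] == 1].copy()
controls = panel[panel["treated"] == 0].copy()

# 2. Nearest-neighbour matching (with replacement) on the propensity score
idx = np.abs(controls["pscore"].values[None, :] -
             treated["pscore"].values[:, None]).argmin(axis=1)
matched = controls.iloc[idx]

# 3. Difference-in-differences on the matched sample
did = ((treated["cesd_2018"].values - treated["cesd_2016"].values).mean()
       - (matched["cesd_2018"].values - matched["cesd_2016"].values).mean())
print(f"PSM-DID estimate of the effect on the depression score: {did:.3f}")
```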
The results demonstrate a noteworthy extension of the common support between the treated and the control groups, implying that the overall distributions of the conditional probability to return to work are similar between the two groups. Furthermore, we check whether the data are balanced. Table 4 presents the results of covariates balance testing for PSM before and after matching. Although there are significant differences in some variables between the unmatched treatment and the control groups, the differences of all variables are no longer significant after matching, implying that the matching effect is great. Table 5 shows the results estimated by applying the PSM-DID method. Table 5 shows that in 2016, there was no significant difference in mental health status between the treatment group (the older adults working after retirement) and the control group (the retired and not working); meanwhile, the mental health status of the retired older adults in the treatment group was better than that in the control group, with the average depression score about 0.174 lower than that for the control group. However, in 2018, after the retired in the treatment group returned to work, their depression scores were significantly higher than those in the control group by an average of about 0.306. Subtracting the difference between the treatment group and the control group in 2016 and 2018, we find that the average treatment effect of the treatment group is 0.479, which is significant at the 10% level. Thus, the results support Hypothesis 1b and reject Hypotheses 1a and 1c. This indicates that working after retirement causes the mental health of the older adults to deteriorate, which echoes the baseline result estimated by OLS and ordered logit and confirms the robustness of the results to a certain degree. Effects by Sub Groups Analysis of the effect on different groups would provide a reference for more accurate policy intervention. The sample is stratified by age, gender, education background, hukou, pension level, and social status. The results are reported in Table 6. The estimation results show that the estimated coefficient of age in the younger group (45-60 years) is 0.232, which is not significant, while it is 0.399 in the older group (60-80 years) and significant at the 5% level. This means that re-employment after retirement has no significant effect on the mental health of the younger group, but significantly worsens the mental health status of the older group. From the perspective of gender, women mainly experience adverse effects on mental health from working after retirement. The estimated coefficient of the female group is 0.592, significant at the 1% level, while the coefficient of the male group is not significant. From the perspective of education level, the detrimental effects of working after retirement on the mental health of the older adults mainly affects the group with lower education background. We divide the sample into a lower education group and a higher education group split by average education years and conduct linear model estimation on each. The estimated coefficient for the lower education group is 0.605, significant at the 1% level, but the influence coefficient of the higher education group is not significant. Surprisingly, we find that the detrimental effects of re-employment after retirement on the mental health of the older adults mainly affects the urban group, the higher pension group, and the higher social status group. 
Overall, the results are consistent with Hypothesis 2, that is, the impact of working after retirement on mental health is heterogeneous in terms of individual characteristics. Mediating Mechanism Analysis We confirm that working after retirement significantly affects the mental health status of retired older adults. However, by what channels does this effect take place? Work can help the older adults to increase their income. According to activity theory, work can also increase opportunities of interpersonal communication and help to maintain a positive attitude, so as to improve the health of the older adults. Therefore, this study examines the mediating mechanism of working after retirement on mental health through the three aspects of financial income, interpersonal relationships, and positive mental attitude. Table 7 reports the overall regression results. Column 1 shows the result without any intermediary variable; columns 2, 4, and 6 show the regression results of the independent variable to each of the three intermediary variables; and columns 3, 5, and 7 add the regression results of economic income status, interpersonal relationships, and positive mental attitude, respectively. The results show that working after retirement significantly improves the level of financial income, while it worsens interpersonal relationships and has a negative impact on positive attitude, albeit not significant. The final test results show that the three factors have different effects on mental health. The results are consistent with Hypothesis 3. Income status has a 0.6% masking effect, while interpersonal relationships and positive attitude explain more than 5% of the mediating effect, especially positive mental attitude. After adding this variable, the influence of postretirement work on mental health is significantly reduced. Working After Retirement and Mental Health Are Negatively Correlated Overall, both the basic results and the results obtained from the robustness test show that working after retirement increases the depression score of older adults and aggravates their mental health, supporting Hypothesis 1b. This could be significantly related to the context of China. First, the driving force of working after retirement is more of a "push" factor than a "pull" factor in China. Factors affecting retired older adults' willingness to work can be divided into four categories: financial needs, work needs, giving full play to their strengths, and spiritual sustenance (25). Financial needs can be regarded as push factors, while the other three factors can be considered as pull factors. However, in China, financial factors are the main drivers of the re-employment of retired older adults (20,26). It is not easy to improve the mental health of older adults based on a push factor as the motivation to work. Second, China has had a mandatory retirement policy for a long time. This greatly affects the employment prospects of older people. For Chinese older people, it is widely believed that retirement is a time when older people should enjoy life. Older people returning to employment after retirement may be perceived as a result of ungrateful children or family misfortune, which puts enormous psychological pressure on older people. 
Moreover, the long-term implementation of the mandatory retirement policy has also caused most elderly individuals to face a variety of discrimination with regard to demotions and reductions in wages and benefits from their employers (27), which are also detrimental to the mental health of the elderly. Moreover, those individuals who are re-employed after retirement face a lot of work pressure, which negatively affects their mental health. According to the disengaging theory, older adults should be released from social activities in their later years. Maddox (28) and Moen (29) further pointed out that quitting work could relieve stress and thus be beneficial to physical and mental health. However, when older adults re-enter the labor market, there are work responsibilities which are not conducive to their mental health. Heterogeneous Impact of Working After Retirement on Mental Health Although the relationship between working after retirement and depression among the older adults was regulated by group differences, the empirical results supports hypothesis 2. In different age groups, working after retirement has no significant effect on the mental health of the younger group, but it significantly worsens the mental health status of the older group. Carstensen et al. (30) explained this from the perspective of social emotional choice theory. They considered that with age, the older adults perceive future time as limited, and they gradually become more willing to focus on their inner circle of social network relationships, such that they pay more attention to intimate relationships. Under the cultural background of China, retirees are expected to be cared for by their grandchildren and enjoy the happiness associated with being with their family. This concept is deeply rooted in the hearts of the Chinese people, especially among older women. Thus, older Chinese individuals pay more attention to leisure time outside of work, such that the benefit of leisure is higher than that of work (20). There is an increasing conflict between the desire to continue this tradition and the reality of having to work after retirement. Therefore, the deteriorating effect of mental health is more obvious for the cohort of older adults. Thus, we should pay more attention to the crowding out of mental health welfare of older adults when implementing a policy for increasing retirement age. Such a difference also existed among between genders and education levels. Women experienced adverse effects on their mental health from working after retirement since older women are more expected to enjoy the happiness associated with family. Moreover, women are conflicted in their role in terms of providing family care and working due to the tradition thinking of "men work outside their home, women work inside their home." Van Houtven et al. (31) found that female workers who continued to work after retirement reduced their working hours and wages because they were required to care for their families; however, this had little impact on men. In the Chinese cultural background, men dominate the work outside their home, while women dominate work at home; China's current three-child policy implies that retired women assume more family care. These conditions exacerbate the conflict between family responsibilities and work faced by women who work after retirement; this worsens their mental health. 
From the perspective of education, the detrimental effects of working after retirement on the mental health of older adults affects the group with a lower education background. According to Yang et al. (32), among low education groups, income compensation drives the return to work, that is, they re-enter the labor market because of financial needs, which is not conducive to mental health. Finally, the detrimental effects on the mental health of working after retirement of older adults mainly affect the urban, higher pension, and higher social status groups. This seems to contradict the analysis of the push-pull theory introduced earlier in this section. However, with careful consideration, we think that the level of pension is generally positively related to the level of social status, and people with higher social status usually have higher pensions after retirement. Generally, people with higher social status have stronger abilities than those with a lower social status and have greater labor supply elasticity, such that they are more likely to work after retirement because of the presence of pull forces. However, in reality, their authority after re-employment is often greatly reduced, leading to a large psychological expectation gap, which is not conducive to their mental health. Mediating Mechanism Results show that financial income, interpersonal relationships, and positive mental attitude have different effects on mental health. However, the positive effects of work contributed by income cannot offset the negative effects contributed by the latter two. Specifically, re-employment after retirement has a significant positive boost to older people's financial income, which is positively related to older people's mental health. This is consistent with the findings of mainstream research, where in general, higher income earners generate more positive emotions and lower income earners suffer more negative emotions (33). In this study, though work is generally regarded as an important form of social participation in the elderly, it is negatively related to interpersonal communication and positive attitudesalthough not significant-and through them, it significantly and negatively affects mental health. The findings are contrary to the conclusions derived by Kim and Moen (3) and Forbes et al. (6). This can be explained as follows. Because of their perceived limited lifespan, older people are willing to invest more time in intimate relationships (30), especially in China, a country that has long been influenced by Confucian culture. Moreover, China's current retirement age system and cultural norms are not conducive to the active employment of older adults due to which older people undergo further discrimination in employment. Thus, the interpersonal relationships of the retired who continue to work have not been significantly improved; on the contrary, there may be a trend of deterioration, and it has a negative impact on their positive attitudes. Therefore, when the elderly evaluate the quality of interpersonal relationships and positive attitudes, it is seen that post retirement work has no advantage. CONCLUSION Using data from the CFPS, this study examined the impact of re-employment on the mental health of older adults who retired in China. We found that re-employment significantly increased older people's depression scores and worsened their mental health. 
In addition, the impact of re-employment on older people's mental health may depend on their socio-economic background, which mainly affects older people in the sample who are over 60 years of age, female by gender, and those with lower educational background, urban residents, higher pensions and higher social status. We also found that income status had a 0.6% masking effect on mental health in the interaction between re-employment and mental health. Interpersonal relationships and positive attitudes explained more than 5% of the mediating effect, especially positive attitudes. The effect of re-employment on mental health was significantly reduced by the inclusion of this variable. The theoretical contribution of this study is that it enriches the existing research on the impact of extending working life and mental health by considering the Chinese cultural background. We not only compare the differences between the mental health of the older adults who work after retirement and those who do not, but also discuss the heterogeneity and mediating effects so as to further clarify the influence mechanism between working after retirement and the mental health of older adults people. Although re-employment can address aging problems from a macro perspective and improve the financial income of the older adults at a micro level, we should also pay attention to its possible adverse effects on their mental health. The findings in this paper are specific to the Chinese context. The impact of re-employment on mental health may be influenced by economic conditions and cultural background. So, whether our results are applicable to other countries deserves further study. A limitation of this study is that multiple comparisons were not conducted. Although we can conclude that re-employment in China somewhat leads to deterioration in mental health among older cohorts, and a number of robustness tests, including the PSM-DID strategy, OLS models and OLogit models, support our findings, these estimates are still subject to some limitations. Additional research using other data sources and methods would help to further strengthen our findings. In spite of the limitations, this study has important implications for active aging in China and other developing countries. Against the current background of aging populations, several countries have implemented or are implementing delayed retirement programs, and this has been the inevitable choice for China too. The social policy of raising the retirement age will help society to develop a better concept of promoting the employment of older people. However, when implementing a plan to increase retirement age, the following three aspects should be considered, based on the findings of this study. First, there should be a gender-specific retirement policy or flexible retirement policy. A policy to delay retirement age needs to pay attention to workfamily balance. Especially in East Asian countries, the family division of labor is still male dominated for outside work and female dominated for work in the home. Women undertake more family care work, and the cost of delayed retirement for women is higher than that for men. Second, retirement policy should aim to improve the replacement rate of personal pension and to increase the income level from older adults employment. Most people who work after retirement do so because they need to earn income. 
Increasing their income level could improve the welfare of the employed older adults to some extent and thus, alleviate the adverse effects of delayed retirement. Third, it is necessary to improve the re-employment environment of the retired older adults and to enhance their social status of continuing employment. China should strengthen legislation and implement anti-age discrimination measures, ensure that older adults workers enjoy equal opportunities in human resource management, build older adults friendly workplaces, create a good, friendly working atmosphere for the older adults, and achieve a positive aging experience for the older adults. Finally, China should actively advocate policies for living longer, working longer, and lifelong learning to change the traditional concept of stopping work after retirement. This would gradually enhance the society's identification of working until later life at a conceptual level and stimulate the enthusiasm of the older adults to participate in productive work. AUTHOR CONTRIBUTIONS LX and L-lT conceived this research. LX, L-lT, S-qZ, and SZ was responsible for the methodology and conducted software analyses. Y-dY and L-lT conducted necessary validations. Y-yW and L-lT conducted a formal analysis and managed the investigation. L-lT and Y-dY gathered resources, curated all data, wrote/prepared the original draft, and were responsible for project administration. LX and L-lT reviewed and edited the manuscript, were responsible for visualization, supervised the project. H-lY and Z-yL acquired funding. All authors contributed to the article and approved the submitted version.
150350660
s2orc/train
v2
2019-05-12T14:23:02.715Z
2018-07-04T00:00:00.000Z
Determination of the Perceptions of Sports Managers About Sport Concept: A Metaphor Analysis Study This article has Abstract The aim of this research is to determine the perceptions of sport managers in Turkey on the concept of sport by means of metaphors. 74 sport managers participated in the research. Phenomenology, one of the qualitative research methods, was used in the research. Content analysis method was used for the data analysis. Evaluation of the data showed that sports managers produced a total of 50 metaphors. These metaphors produced were collected in 6 different categories. As a result, it was determined that sport managers expressed the concept of sport by means of metaphors in a very rich and diverse perspective. Therefore, the metaphors determined in this study may lead the sport managers and candidates responsible for the administration of sports services and activities in terms of offering a different perspective on the practice of sport management. Introduction Metaphors are one of the powerful and intelligent ways of transmitting findings. With a strong metaphor, a single expression can mean multiple meanings. Moreover, developing and using metaphors is fun for both the analyzer and the reader (Patton, 2014). The metaphor is generally considered to be a word of art to decorate the discourse, but its importance is much more than that. Metaphor generally refers to a way of thinking and a way of seeing the world, which enable us to understand the world. Research in various fields showed that a metaphor has a formative impact on our ability to express ourselves on a daily basis as much as we are on our way of thinking, our language and our science (Morgan, 1998). The metaphor is a perfect technique for teaching unknown things, a proven means of remembering learned information and keeping them in mind. Through metaphor, people attach new information to their old knowledge by sticking it to the already existed schema in their minds. Metaphors, thus, establish strong links between people's past learning and personal experiences and newly learned concepts. Sport management is a discipline that is present in a wide variety of fields today and in various fields (Çiftç i & Mirzeoğlu, 2014). In this respect, as a sub-branch of the administration, it involves a lot of common points both in management and in other scientific branches. However, sport management, in practice, has to take into account the characteristics of the sporting field. Therefore, while the sport management benefits from the concepts, principles and methods of the general administration, it has to create a unique structure within the framework of the relationship between the manager and athlete, the manager and sports organizations, and the society and sports organizations by considering the characteristics of the sports field (İmamoğlu & Ekenci, 2014). Obviously, in the formation of this structure, sports managers are important in terms of being the people that provide the necessary information to manage sports institutions and organizations in an effective and productive manner and that develop methods and implement these methods in the direction of this information, taking into account the characteristics of sports. entrepreneurship, adaptation, flexibility, determination) which has been expected from a sports manager, but also professional awareness which is complement of these features. 
With a simple expression, professional expression occurs with deeveloping a cognitive, sensory and behavioural response to the questions such as "Why do I do this job?" or "What does this job serve to?". In this context, when it has been thought that sport phenomenon with different dimensions takes part in the center of profession of sports managers, "What are the imagery of sports managers?", "What do they associate to essence of their professions?" constitute the starting point of this research. As a result of the literature review on metaphors, researches on sports have been found. (Segrave, 2000;Cudd, 2007;Özsoy, 2011;Karasahinoglu & Ilhan, 2015;Kesic & Muhic, 2013;Koç et al., 2015;Yilmaz et al., 2017). This study differs from others in that it tries to measure the metaphorical perceptions of sport managers. Therefore, in this study, the fact that the sports managers responsible for the administration of sports services try to explain the concept of sport by analogy is important in terms of presenting a different point of view in future studies in the field of sport management. Purpose of Research The purpose of this research is to determine the perceptions of sport managers in Turkey about the sports concept through metaphors. The Problem of the Research Which metaphors do sport managers use to explain their perceptions for the sports concept? Under which categories are these determined metaphors collected in terms of characteristics in common? Model of Research In this research carried out to determine the perceptions of sport managers in Turkey about the sports concept through metaphors, qualitative research approach was used in order to provide in-depth and detailed information (Yıldırım & Şimşek, 2011). In the framework of the study's purpose, the preferred study design is the phenomenology design. Phenomenology design focuses on phenomena that we are aware of but do not have an in-depth and detailed understanding (Creswell, 2013;Yıldırım & Şimşek, 2014). In this study, the perceptions of sport managers about sport concept were determined through metaphors. Study Group This research was carried out with the sport managers responsible for the administration of sports services in Turkey. A total of 74 sports managers participated in the study. In this study, the study group was selected according to the purposeful sampling method. In the selection of the study group, criterion sampling was used from purposeful sampling methods (Büyüköztürk et al., 2009), which allow for in-depth research by selecting rich situations in terms of information depending on the purpose of the study. Criteria are as follows:  Managers at various levels in the Ministry of Youth and Sports,  Manager's voluntary participation in study. Data Collection Tool The research data were collected with a metaphor form composed of semi-structured questions prepared by the researcher. Semi-structured questions are the most preferred data collection tools in metaphor research (Inbar, 1996;Saban, 2009). In this context, each participant was asked to write a metaphor describing the sport and explain it. In order to determine the mental imagery of sport managers for sport concept in the form, each manager was asked to complete the sentence of "Sports is like ... because ..." and it was determined that the participants stated only one metaphor and explained these metaphors. Data Analysis To begin the analysis of the data, the response papers of participants were first numbered 1 to 74. 
In this study, content analysis, one of the data evaluation methods used in social research, was employed. Content analysis is the process of defining, coding and categorizing data (Patton, 2014). The evaluation and interpretation of the metaphors indicated by the sports managers was carried out through content analysis in 9 stages in total, including (1) examination of the forms, (2) elimination of unsuitable forms, (3) recompilation of the forms, (4) numbering of the forms, (5) examination of the metaphors and (6) development of categories, followed by the steps described below for validity, reliability and interpretation. In the process of eliminating unsuitable forms, the metaphors and their explanations in each form were examined one by one. No incomplete form was identified, and it was determined that the sports managers had carefully filled in the forms as requested, so no form was eliminated. In the course of recompiling the forms, since there were no forms that failed to meet the criteria, the metaphors in the forms were listed and tabulated. In the process of numbering the forms, the metaphors were numbered from P1 to P74. In the course of category development, the metaphors stated for the concept of sport were examined in terms of their common characteristics. At this stage, there was no problem in the categorical distributions, since none of the metaphors specified by the sports managers was expressed in a way that would place it in several categories at the same time. Validity and Reliability In the process of achieving validity and reliability, attention was paid "to report in detail the collected data and to explain how the researcher obtained the results" (Yıldırım & Şimşek, 2014) with respect to the validity of the research results. For this purpose, the analysis process and how the resulting codes were related to the categories were presented to the reader directly with participant expressions. To ensure the reliability of the research, the data were analyzed by two field experts to determine whether the conceptual categories reached as a result of the data analysis represented the themes obtained; the codes and the categories represented by the codes were compared (Yılmaz & Güven, 2015). The reliability of the data analysis was then calculated using the formula [consensus / (consensus + dissensus) x 100] (Miles & Huberman, 1994). A total of 50 metaphors were produced in the research, and 7 metaphors (revival of the body, music composition, machine, shaking, cat, medicine, technology) on which there was dissensus were identified. The average reliability between the coders was found to be 86% [43 / (43 + 7) x 100 = 86%]. This result shows that the desired level of reliability was achieved. In the findings section, the opinions of the sports managers are highlighted by specifying the participant number, for example (P33). Findings and Comments This section introduces the metaphors developed by the sport managers regarding the sport concept, the evaluation of these metaphors under the relevant categories and examples of explanations. When Table 1 is examined, it is seen that the sport managers produced a total of 50 different metaphors for the concept of "Sport" and presented 74 opinions. Water (6), Breathing (5), Sleep (4), Health (3), Therapy (3) and Life Style (3) were the most frequently repeated metaphors. In order to explain the sport concept, the managers drew analogies to living (Child, Doctor, Cat, etc.)
and non-living (Song, Mobile Phone, Machine, etc.) expressions. Taking into account the explanations of the metaphors developed by the sport managers about sport, their classification into 6 categories in terms of common characteristics is presented in Table 2. According to Table 2, the metaphors developed by the sport managers for the sport concept are grouped under 6 categories: in terms of being a basic need (23; 31.08%), in terms of psychological comfort (20; 27.02%), in terms of improving the quality of life (11; 14.86%), in terms of being a passion (9; 12.16%), in terms of being in the nature of the individual (6; 8.10%) and in terms of providing physical benefit (5; 6.75%). Citations from examples of explanations by the sport managers: Sleep; Sport is as important to me as sleep is to people. (P13) Water; The body's reaction when it is dehydrated is the same as its reaction when it lacks sport (fatigue, unhappiness, etc.). (P19) Breast milk; I think that the psychological and physiological development of individuals who do not play sports is incomplete. (P7) Sun; Living without the sun is the same as living without sport. (P10) Organ; It would not be normal if sport were absent from our lives. (P11) As shown in Table 3, a total of 23 metaphors are indicated in the category "in terms of being a basic need" for the sport concept. The sports managers emphasize that sport is a basic need, in addition to the needs that are important in human life such as air, water, sleep, breathing and breast milk. In this category, sport was also explained with non-living metaphors. In terms of psychological comfort In Table 4, a total of 20 metaphors regarding the sport concept were indicated in this category. As understood from the explanations given for each metaphor, the psychological effects of sport on the individual were emphasized, considering explanations of the sports managers such as "it relieves us spiritually, it enables us to have fun and get rid of stress, it makes us relaxed, it heals us". Citations from examples of explanations by the sport managers: Life style; When I play sports, I become motivated, I am happy. Sport is the best. (P49) Health; My soul and psychology are relieved by the health, comfort and fitness provided by sport. (P52) Renewal; It enables people to be renewed in both body and personality. (P47) Dance; Sport is the integration of body and spirit; it keeps up with the rhythm of the heart. (P51) As shown in Table 5, a total of 11 metaphors for the sport concept were stated in the category "in terms of improving quality of life". As understood from the metaphors and their explanations, the sport managers emphasized the positive effects of sport on health and daily life and the way in which individuals could thereby increase their quality of life. In addition, sport in this category was explained by living metaphors as well as non-living metaphors. Citations from examples of explanations by the sport managers: Mobile phone; The mobile phone takes up a lot of space in our daily lives and we feel its absence, so that is what sport is for me. (P56) Coffee; It is both addictive and so sweet and fun. (P59) Breathing; The only thing that keeps me in the present moment is sport.
Love; I do sports every moment, feeling it deeply. (P62) As shown in Table 6, there are a total of 9 metaphors in the category "in terms of being a passion" related to the concept of sport. The sports managers emphasized that sport has the same effect on the human body as coffee and drugs, which become addictive after constant use, and that it therefore becomes a passion for some individuals. In terms of being in the nature of the individual The total of human movements (1), Mind and body harmony (1), Body revival (1), Tree (1), Child (1), Music composition (1). Citations from examples of explanations by the sport managers: The total of human movements; Movement is in the nature of living things and all living things are created to move. (P64) As shown in Table 7, there are a total of 6 metaphors in the category "in terms of being in the nature of the individual". Sport was explained by living (child, tree) and non-living (mind and body harmony, music composition, etc.) metaphors. 6. In terms of providing physical benefit Wine (1), Fish (1), Beauty (1), Machine (1), Lion (1). Citations from examples of explanations by the sport managers: Wine; Years pass, but you are still healthy and beautiful. (P70) Fish; We become agile, flexible and free like fish. (P71) Lion; The strongest animal is the lion. (P74) As shown in Table 8, a total of 5 metaphors were stated in the category "in terms of providing physical benefit" for the sport concept. As understood from the explanations given for each metaphor, such as "we become agile and flexible like a fish" or "the strongest animal is the lion", the participants emphasized the bodily benefits of sport for people. Discussion and Result This research aimed to determine the perceptions of sport managers in Turkey of the concept of sport by means of metaphors, and the managers' thoughts on the concept of sport were interpreted according to the results obtained. The sport managers produced a total of 50 metaphors related to the sport concept, collected in 6 different categories. The sports managers produced the most metaphors in the categories "in terms of being a basic need" (23 metaphors) and "in terms of psychological comfort" (20 metaphors); these categories were followed by "in terms of improving the quality of life", "in terms of being a passion", "in terms of being in the nature of the individual" and "in terms of providing physical benefit". In the category "in terms of being a basic need", where the greatest number of metaphors was produced, participants made explanations of sport such as: it is like breast milk, because the psychological and physiological development of individuals who do not play sports is incomplete; it is like water, because the body's reaction when it is dehydrated is the same as its reaction when it lacks sport. Similarly, in a qualitative study conducted by Tekkursun et al. (2016), parents associated sport with all dimensions of their children's development. In parallel with our study, Küçük and Koç (2004) stated that sport is no longer merely a set of activities for the physical and psychological strengthening of people but means much more, and emphasized that sport is in fact a fundamental need.
It is stated in the literature that sport has a basic function in making people physically, motorically and psychologically stronger, having a positive impact on health, developing the personality traits of the individual and affecting people socially (Fişek, 1985; Karaküçük, 2008; Yetim, 2015). When evaluating the categories "in terms of psychological comfort", "in terms of improving the quality of life" and "in terms of providing physical benefit", participants expressed that they relax psychologically when doing sports, that they develop physically, that they feel happy afterwards and that their quality of life increases in this way. The literature resembles our findings. According to a qualitative study conducted by Esenturk et al. (2016), physical education teachers associated sport with psychosocial development. Yetim (2015) stated that sport has an important place in enabling individuals to maintain their lives in a healthy way and to ensure their physical and spiritual development, while Fişek (1998) specified that people can secure their mental and physical health through sport. According to İlhan (2010), playing sports and doing physical activity consciously, in a balanced way and regularly creates an environment far removed from the stresses of daily life and also supports preventive medicine by providing a healthy way of life. Besides these functions of sport, it is obvious that sport has a positive influence on the individual's social and personal character development. Similarly, Bek (2008) emphasized that regular sport and physical activity are important for increasing people's quality of life throughout life, while Zorba (2012) reported that sport can protect the organism from the adverse effects of physical and mental stress. In the other categories, participants stated that sport is in fact inherent in people and is a passion. Taking metaphors such as coffee and drugs specified by the participants as examples, they emphasize that sport has the same effect on the human body as coffee and drugs, which become addictive after constant use, and that it therefore becomes a passion for some individuals (Bamber et al., 2003; Hausenblas & Downs, 2002). Ideally, each professional figure in the sport phenomenon achieves the professional awareness needed to be satisfied with and integrated into their profession (İlhan, 2012). By revealing a wide metaphoric range about sport, the findings offer some clues, and some interesting resemblances, about the profile of the research group. Individuals who are sports managers in Turkey may have graduated from different fields of undergraduate education. On the other hand, it is thought that individuals who graduate from the Sports Management Departments of universities increasingly acquire the qualities of this professional area and draw such analogies by establishing relationships with the different sport-related courses they have taken. According to Kırımoglu et al. (2012), undergraduate education has an effect on professional efficiency. As a result, it was determined that the sport managers expressed the sport concept through metaphors with rich and different perspectives. In this context, sports managers see sport first of all as a basic need. This can be interpreted to mean that managers, who play a decisive role in sport services, have the potential to respond to social needs through their work.
In addition, it can be said that they are aware of the importance of sport in terms of its effect in increasing people's quality of life and of its functional dimension, which touches human beings through what they do and identifies them with a colourful life. The metaphors identified in this study may therefore guide the sport managers and candidate managers responsible for the management of sport services and activities by providing a different perspective on the practice of sport management.
49362770
s2orc/train
v2
2018-06-23T16:20:37.572Z
2016-11-14T00:00:00.000Z
Benchmark levels for the consumptive water footprint of crop production for different environmental conditions: a case study for winter wheat in China Abstract. Meeting growing food demands while simultaneously shrinking the water footprint (WF) of agricultural production is one of the greatest societal challenges. Benchmarks for the WF of crop production can serve as a reference and be helpful in setting WF reduction targets. The consumptive WF of crops, the consumption of rainwater stored in the soil (green WF), and the consumption of irrigation water (blue WF) over the crop growing period varies spatially and temporally depending on environmental factors like climate and soil. The study explores which environmental factors should be distinguished when determining benchmark levels for the consumptive WF of crops. Hereto we determine benchmark levels for the consumptive WF of winter wheat production in China for all separate years in the period 1961–2008, for rain-fed vs. irrigated croplands, for wet vs. dry years, for warm vs. cold years, for four different soil classes, and for two different climate zones. We simulate consumptive WFs of winter wheat production with the crop water productivity model AquaCrop at a 5 by 5 arcmin resolution, accounting for water stress only. The results show that (i) benchmark levels determined for individual years for the country as a whole remain within a range of ±20 % around long-term mean levels over 1961–2008, (ii) the WF benchmarks for irrigated winter wheat are 8–10 % larger than those for rain-fed winter wheat, (iii) WF benchmarks for wet years are 1–3 % smaller than for dry years, (iv) WF benchmarks for warm years are 7–8 % smaller than for cold years, (v) WF benchmarks differ by about 10–12 % across different soil texture classes, and (vi) WF benchmarks for the humid zone are 26–31 % smaller than for the arid zone, which has relatively higher reference evapotranspiration in general and lower yields in rain-fed fields. We conclude that when determining benchmark levels for the consumptive WF of a crop, it is useful to primarily distinguish between different climate zones. If actual consumptive WFs of winter wheat throughout China were reduced to the benchmark levels set by the best 25 % of Chinese winter wheat production (1224 m3 t−1 for arid areas and 841 m3 t−1 for humid areas), the water saving in an average year would be 53 % of the current water consumption at winter wheat fields in China. The majority of the yield increase and associated improvement in water productivity can be achieved in southern China. Introduction Half of the large river basins in the world face severe blue water scarcity for at least one month a year (Hoekstra et al., 2012). Agriculture is the largest consumer of water in the world and therefore responsible for a large part of the water scarcity in the world. Still, global food demand continues to increase, due to growing populations and changing diets. Meeting growing food demands and simultaneously reducing the water footprint (WF) of agricultural production is therefore one of the greatest societal challenges of our time (Foley et al., 2011;Hoekstra and Wiedmann, 2014). In crop pro-duction, individual farmers generally aim to maximize their economic return through raising their productivity per unit of input such as capital, labour, land, and fertilizer. When water is scarce, raising production per unit of water (i.e. 
increasing water productivity in terms of t m −3 or reducing the WF in m 3 t −1 ) is a key challenge in order to save water and achieve sustainable water use at catchment level. Even when water is not scarce, it makes sense to have a reasonable level of water productivity, i.e. a good amount of "crop per drop". Farmers, however, generally lack incentives for saving water, since they pay little for their water use compared to other input factors, even under conditions of high water scarcity. In order to provide producers with an incentive to reduce the WF of their products to reasonable levels, Hoekstra (2013 has proposed to develop WF benchmarks, which can be used by governments, farmers and customers (crop traders and retailers) for setting WF reduction targets. Setting WF benchmarks for different products, particularly water-intensive products like crops, is fundamental for wise water allocation and fair sharing of water resources among different sectors and users (Hoekstra, 2013). WF benchmarks of crop production could be global, but would preferably be context-specific, given the fact that the WF of growing a crop varies as a function of environmental factors such as climate and soil (Mekonnen and Hoekstra, 2011;Siebert and Döll, 2010;Tuninetti et al., 2015). The WF of a crop is determined by both environmental conditions (e.g. climate, soil texture, CO 2 concentration in the air) that cannot be controlled by humans and managerial factors (e.g. application of fertilizers and pesticides, irrigation technology and strategy, mulching practice) (Zwart et al., 2010;Mekonnen and Hoekstra, 2011;Brauman et al., 2013). Benchmarks for the WF of growing a crop can, for example, be set by looking at what WF level is not exceeded by the best 20-25 % of the total production in an area. Alternatively, benchmarks can be determined by estimating the WF associated with the best available technology and management practice (Hoekstra, 2013. Mekonnen and Hoekstra (2014) followed the first approach and developed global benchmarks for both the consumptive (green plus blue) WF and the degradative (grey) WF for a large number of crops, based on estimated WF values for 1996-2005 at a spatial resolution of 5 by 5 arcmin. Chukalla et al. (2015) followed the second approach and explored reduction potentials of consumptive WFs for a few crops by applying different types of alternative irrigation techniques and strategies and different types of alternative mulching practices. They found that the highest reduction (∼ 29 %) in the consumptive WF of a crop could be achieved when applying drip or subsurface drip irrigation in combination with deficit irrigation and synthetic mulching. Research in developing benchmark levels for the consumptive WF of crop production is still in its infancy. An important question that has been insufficiently addressed is which environmental factors should play a role when devel-oping WF benchmarks. It is nice to have one global benchmark for the consumptive WF per crop, as a global reference, like the ones developed by Mekonnen and Hoekstra (2014), but it remains unclear whether it is reasonable to expect the same water productivity under different environmental conditions. In their global analysis, Mekonnen and Hoekstra (2014) found that a crop in a temperate climate generally has a smaller WF than the same crop in a tropical climate, but this can still be due to other factors (e.g. 
better management practices in temperate climates), so that this is not a sufficient finding to diversify benchmark levels based on the distinction between temperate and tropical. Besides, even though Mekonnen and Hoekstra (2014) found a difference between different climates, for each crop considered it was found that the 10 % best global production (e.g. with smallest WFs) was always at least partly in the tropics as well. In other words, a WF benchmark developed in the temperate part of the world still offers a reference value that can be achieved in the tropics as well. Next to climate, soil also affects evapotranspiration and yield and thus the WF of a crop. Tolk and Howell (2012), for example, analyse the variation of consumptive WFs of sunflower in relation to different types of soils. There has not been yet, though, a systematic study looking at how environmental factors influence the consumptive WFs of crops and to which extent it makes sense to diversify WF benchmark levels based on specific environmental factors. The current study aims to contribute to this discussion through an explorative study for winter wheat in China. We explore which environmental factors should be distinguished when determining benchmark levels for the consumptive WF of crops. We subsequently determine benchmark levels for the consumptive WF of winter wheat production in China for all separate years in the period 1961-2008, for rain-fed vs. irrigated croplands, for wet vs. dry years, for warm vs. cold years, for four different soil classes, and for two different climate zones. Winter wheat in China accounts for 95 % of total wheat production in China, which is the world's biggest wheat producer (FAO, 2014). Winter wheat covers 96 % of China's harvested wheat area and is grown across China's different climate zones (NBSC, 2013). In order to avoid interference from managerial factors that cause differences in evapotranspiration and yield, we simulate WFs by means of FAO's water productivity model AquaCrop Raes et al., 2009;Steduto et al., 2009), at a resolution of 5 by 5 arcmin, considering only water stress and not taking into account other stresses such as from soil fertility, salinity, frost, or pest and diseases. 2 Method and data 2.1 Estimating consumptive WF of growing a crop The consumptive (green and blue) WF of growing a crop (m 3 t −1 ) equals the total actual evapotranspiration (ET, m 3 ha −1 ) over the cropping period divided by the crop yield (Y , t h −1 ). In the current study, the ET and Y of growing winter wheat in China were simulated on a daily basis, at 5 by 5 arcmin resolution, with FAO's crop water productivity model AquaCrop Raes et al., 2009;Steduto et al., 2009), run for the whole period 1961-2008. Compared to other crop growth models, AquaCrop has a significantly smaller number of parameters and better balances between simplicity, accuracy, and robustness (Steduto et al., 2007;Confalonieri et al., 2016). The model performance on simulating crop growth and water use has been well tested for a variety of crop types under diverse environmental conditions (e.g. Kumar et al., 2014;Jin et al., 2014;Abedinpour et al., 2012;Mkhabela and Bullock, 2012;Andarzian et al., 2011;Stricevic et al., 2011;Heng et al., 2009;Farahani et al., 2009;García-vila et al., 2009). AquaCrop has been applied in WF accounting at field (Chukalla et al., 2015), river basin (Zhuo et al., 2016a), and national level (Zhuo et al., 2016b) at high spatial resolution. 
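To make the unit handling in the WF definition above concrete, the following minimal Python sketch converts a seasonal evapotranspiration depth (mm) and a crop yield (t per ha) into a consumptive WF in m3 per t. The function name and the example numbers are illustrative assumptions, not values from the study.
def consumptive_wf(et_mm, yield_t_per_ha):
    """Consumptive WF (m3 t-1) from a seasonal ET depth (mm) and yield (t ha-1).

    One mm of water over one hectare equals 10 m3, hence the factor 10.
    """
    return 10.0 * et_mm / yield_t_per_ha

# Illustrative numbers only: 450 mm of seasonal ET at a yield of 6.0 t/ha
# gives a consumptive WF of 750 m3/t; green and blue components follow by
# passing the green or blue share of seasonal ET separately.
print(consumptive_wf(450.0, 6.0))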
AquaCrop simulates water-driven crop water productivity with a dynamic daily soil water balance: S[t] = S[t-1] + PR[t] + IRR[t] + CR[t] - ET[t] - RO[t] - DP[t], where S[t] (mm) refers to the soil water content at the end of day t, PR[t] (mm) the precipitation on day t, IRR[t] (mm) the irrigation water applied on day t, CR[t] (mm) the capillary rise from groundwater, ET[t] (mm) daily actual evapotranspiration, RO[t] (mm) daily surface runoff and DP[t] (mm) deep percolation. CR[t] is assumed to be zero because the groundwater depth is considered to be much larger than 1 m (Allen et al., 1998). The green and blue WFs are determined by green and blue ET over the cropping period, respectively, divided by Y. Following Chukalla et al. (2015) and Zhuo et al. (2016a, b), the daily green and blue ET (mm) were separated by tracking the daily incoming and outgoing green and blue water fluxes at the boundaries of the root zone: S_green[t] = S_green[t-1] + PR[t] - ET_green[t] - RO_green[t] - DP_green[t] and S_blue[t] = S_blue[t-1] + IRR[t] - ET_blue[t] - RO_blue[t] - DP_blue[t], where S_green and S_blue refer to the green and blue soil water content, respectively. The initial soil water moisture at the start of the growing period is assumed to be green water. The contribution of precipitation (green water) and irrigation (blue water) to surface runoff was calculated based on the respective magnitudes of precipitation and irrigation relative to the total green plus blue water inflow. The green and blue components in DP and ET were calculated per day based on the fractions of green and blue water in the total soil water content at the end of the previous day. Y was determined by multiplying the above-ground biomass (B) and the harvest index (HI, %), where HI was adjusted for water and temperature stress, depending on the timing and extent of the stress, by an adjustment factor (f_HI) applied to the reference harvest index (HI_0) (Raes et al., 2011): Y = f_HI x HI_0 x B. Only water stress is considered in the modelling, which is determined by the water availability in the root zone, thus leaving out the effects of non-environmental factors (e.g. technology, fertilization) on crop growth. For irrigated fields, we assume that the applied irrigation volumes are equal to the net irrigation requirement. We used the same input crop parameters, including a fixed crop calendar, reference harvest index, and maximum root depth as calibrated for China's winter wheat, as in Zhuo et al. (2016b). We simulated winter wheat production per grid cell over the years based on the irrigated and rain-fed harvested areas of around the year 2000, as obtained from Portmann et al. (2010) (Fig. 1), in order to exclude from the simulations the effects of changes in where and how much wheat is grown. Data on monthly precipitation, reference evapotranspiration (ET_0), and temperature at 30 arcmin resolution were taken from the CRU-TS 3.10 dataset (Harris et al., 2014). Soil texture data were obtained from Dijkshoorn et al. (2008). For the hydraulic characteristics of each type of soil, the indicative values provided by AquaCrop were used. Data on total soil water capacity were obtained from Batjes (2012). Benchmarking consumptive WF of growing a crop Following Mekonnen and Hoekstra (2014), benchmark levels for the consumptive WF of crop production were determined by ranking the grid-level WF values from the smallest to the largest against the corresponding cumulative percentage of total crop production. As in the earlier study, we did not distinguish between green and blue WF benchmarks for two reasons. Firstly, the ratio of green to blue WF of a crop heavily depends on local green water resources availability, which is defined by the climate of a certain time in a certain location.
Location-specific blue WF benchmarks can be developed as a function of the overall consumptive WF benchmarks and local green water availability . Secondly, the purpose of the current study is to find out to which environmental factor the consumptive WF benchmark is most sensitive. In order to analyse differences in consumptive WFs in relatively dry vs. relatively wet years, we evenly group the 48 considered years into relative dry, average and relatively wet years. We ranked the years based on the annual precipitation over the cropping area of winter wheat in China (Fig. 2a), classifying the 16 years with the lowest precipitation into the group of dry years and the 16 years with the highest precipitation into the group of wet years, with the other 16 years remaining for the group of average years. The average annual precipitation levels of the relatively dry, average and relatively wet years are 760, 799, and 850 mm yr −1 , respectively. We also grouped the years considered into relatively cold, average and relatively warm years based on annual mean temperature (Fig. 2b) and into years with relatively low, average and high ET 0 (Fig. 2c). The average annual mean temperatures of the relative cold, average and warm years are 10.7, 11.2, and 11.8 • C, respectively. The average annual ET 0 values in the three categories of years are 874, 896, and 927 mm yr −1 . For determining WF benchmarks for different soil texture classes, the soil types in the USDA (US Department of Agriculture) soil texture triangles were grouped into four soil classes (Raes et al., 2011): sandy soils, loamy soils, sandy clayey soils, and silty clayey soils. Each soil class has different ranges of field capacity, permanent wilting point and saturated water content (Table 1). The difference between soil water content and permanent wilting point defines the total available soil water content in the root zone. Given certain soil water content, a soil with a higher field capacity has less deep percolation. With the same water input from precipitation or irrigation and the same soil water content, soils with a smaller saturated soil water content will generate more surface runoff (Raes et al., 2011). Figure 3 shows the spatial distribution of the four soil classes across mainland China. For determining WF benchmarks for different climate zones, we classify climate based on UNEP's aridity index (AI) Thomas, 1997, 1992). The AI is an indicator of dryness, defined as the ratio of precipitation to reference evapotranspiration, with five levels of aridity: hyper-arid (AI < 0.05), arid (0.05 < AI < 0.2), semi-arid (0.2 < AI < 0.5), dry sub-humid (0.5 < AI < 0.65), and humid (AI > 0.65). To determine the geographic spread of the five climate zones in China we used the data on annual precipitation and ET 0 averaged over the period 1961-2008 at 30 by 30 arcmin resolution (Harris et al., 2014) (Fig. 4). In the current study, we group the five climate zones into two broad zones: the arid to semi-arid (Arid) zone (AI < 0.5) and the humid to semi-humid (Humid) zone (AI > 0.5). 3 Result 3.1 Benchmark levels for the consumptive WF as determined for different years and for rain-fed and irrigated croplands separately We calculated the benchmark levels at different production percentiles for the consumptive WF of winter wheat (m 3 t −1 ) for the country as a whole, year by year, for the period 1961-2008. The results are summarized in Fig. 5. 
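Before examining these results, the ranking procedure of the benchmarking section above can be made concrete with a short Python sketch. Grid-level WFs and production values are assumed to be available as parallel lists; the 10, 20 and 25 % production percentiles are the ones used in this study, and the function name is an illustrative assumption.
def wf_benchmark(wf_values, production, percentile):
    """WF level not exceeded by the best `percentile` % of total production.

    wf_values: consumptive WF per grid cell (m3 t-1).
    production: crop production per grid cell (t).
    """
    cells = sorted(zip(wf_values, production))  # rank from smallest WF upward
    target = sum(production) * percentile / 100.0
    cumulative = 0.0
    for wf, prod in cells:
        cumulative += prod
        if cumulative >= target:
            return wf
    return cells[-1][0]

# Example: benchmarks at the 10th, 20th and 25th production percentiles.
# benchmarks = {p: wf_benchmark(wf_list, prod_list, p) for p in (10, 20, 25)}
The same routine can be run on any subset of grid cells (e.g. only rain-fed cells, or only cells in a given climate zone) to obtain the differentiated benchmarks compared in the following sections.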
The benchmarks, determined per year and per production percentile, generally vary within ±20 % of the long-term mean value over the period 1961-2008. We find that the best 10 % of winter wheat production in China (with smallest WFs) has a maximum long-term average consumptive WF of 777 m 3 t −1 , which is larger than the maximum consumptive WF of the best 10 % of wheat production globally (592 m 3 t −1 ) that was reported by Mekonnen and Hoekstra (2014). We note here that the figures are not fully comparable, because Mekonnen and Hoekstra (2014) consider total wheat (both spring and winter wheat), use another model, and consider another period. We find that the best 20 % of winter wheat production in China has a maximum long-term average consumptive WF of 825 m 3 t −1 , which is smaller than the reported maximum consumptive WF of the best 20 % of wheat production globally (992 m 3 t −1 ). Finally, we find that the best 25 % of winter wheat production in China has a maximum long-term average consumptive WF of 849 m 3 t −1 , which is again smaller than the maximum consumptive WF of the best 25 % of wheat production globally (1069 m 3 t −1 ). The national average consumptive WF of rain-fed winter wheat (1120 m 3 t −1 ) is larger than the national average consumptive WF of irrigated winter wheat (1075 m 3 t −1 ). How-ever, the benchmark levels determined by the best 10, 20, and 25 % of production for rain-fed winter wheat are lower than for irrigated winter wheat. The reason is that the yields in rain-fed production are generally higher than the yields in irrigated production at the same benchmark percentile. The highest rain-fed yields occur in the southern wet area with sufficient precipitation over the cropping period, so that little water stress results in high rain-fed yields. The WF benchmarks for irrigated winter wheat are 8 % (for the 10th production percentile) to 10 % (for the 25th production percentile) higher than for rain-fed winter wheat. Benchmark levels for the consumptive WF for dry vs. wet years In a relatively dry or wet year, when considering winter wheat areas in China as a whole, we do not find typically different consumptive WFs in winter wheat production (Table 2). The WF benchmarks are consistently higher in dry than in wet years (1-3 %), but the differences between benchmark levels for the consumptive WF for dry vs. wet years are small compared to the variations within the dry and wet year categories (±11-14 %). Benchmark levels for the consumptive WF for warm vs. cold years Overall, considering irrigated and rain-fed croplands together, WF benchmarks for relatively warm years are 7-8 % smaller than for relatively cold years, which is not much when seen in the context of fluctuations in the WFs within the three temperature categories (Table 3). In irrigated areas, WF benchmarks for warm years are 11 % smaller, on average, than for cold years. In rain-fed areas, WF benchmarks for warm years are smaller than for cold years as well, but WF benchmarks in average years are not in between the WF benchmarks found for cold and warm years but higher than both. The lower values in cold years relate to lower ET, while the lower values in warm years relate to higher yields. The findings when considering different ET 0 classes are similar when looking at the different temperature classes (Table 4). 
Overall, considering irrigated and rain-fed croplands together, WF benchmarks for years with high ET 0 are on average 5 % smaller than for years with average ET 0 and only 2 % smaller than for years with low ET 0 . Again, differences between consumptive WFs for years with relatively low or high ET 0 are small when seen in the context of fluctuations in the WFs within the three ET 0 categories (±3-6 %). Table 5 shows the consumptive WFs of winter wheat at different production percentiles in four soil classes in China. The simulated winter wheat production in sandy clayey soils accounts for 60 % of national total, followed by the production in sandy soils (24 %), silty clayey soils (8 %) and loamy soils (8 %) on average over the studied period. No consistent trends can be observed when we compare the benchmarks across the different soil classes. Overall, when we take irrigated and rain-fed fields together, the WF benchmarks for sandy soils are 10-12 % lower than the WF benchmarks for loamy soils. More specifically, we find that the WF benchmarks for irrigated winter wheat in sandy soils are about 15 % smaller than the WF benchmarks for the other three soil classes, due to relatively low ET. Without water stress, as is the case in the irrigated croplands, soil evaporation from sandy soils is less than from the other soil types because of the fast percolation of water below the root zone in the sandy soils, causing lower ET over the cropping period (Asseng et al., 2001). At rain-fed fields with limited water availabil- ity, crop yields are mainly affected by the soil water holding capacity. Therefore, consumptive WFs in sandy soils are larger than in the other three soils, due to the smaller crop yield in case of poorer water holding capacity. The observed differences in WFs of winter wheat in different soil classes agree with the experimental observations by Tolk and Howell (2012) for the case of irrigated sunflower in a semiarid environment as well as with the fieldwork-based simulations by Asseng et al. (2001) for irrigated and rain-fed wheat in the Mediterranean climatic region of Western Australia. Benchmark levels for the consumptive WF for different climate zones Consumptive WFs of winter wheat at different production percentiles in arid and humid zones in China are shown in Table 6. Significant differences between the benchmarks for different climate zones can be observed. Overall, considering irrigated and rain-fed croplands together, WF benchmarks for the humid zone are 26 % (for the 10th production percentile) to 31 % (for the 25th production percentile) smaller than for the arid zone. The WF benchmarks for winter wheat in China as a whole (when we take the arid and humid zones together) are close to the benchmarks for the humid zone, caused by the fact that most (96 % on average over the study period) of the simulated winter wheat production in China occurs in the humid zone. In the irrigated areas, WF benchmarks for the humid zone are 26-30 % smaller than for the arid zone; in the rain-fed areas, they are 29-43 % smaller. The relatively large WFs in rain-fed fields in the arid zone logically follow from the water stress and resultant low yields. For the irrigated fields, the larger WFs in the arid zone are caused by the relatively high ET 0 and ET. The results confirm the findings from previous studies that the WF of crops, especially rain-fed crops, is negatively correlated with precipitation and positively correlated with ET 0 (Zwart et al., 2010;Zhuo et al., 2014). 
The differences between the WF benchmarks for irrigated and rain-fed winter wheat are 7-9 % in the humid zone and 3-11 % in the arid zone. Figure 6. Simulated consumptive water footprints (WFs) of winter wheat, categorized into four classes (the best 10 % of production, the next best 10 %, the second next best 5 %, and the worst 75 % of production), accounting for different benchmark levels for humid vs. arid parts of China, for the year 2005 (climatic average year). Figure 6 shows, for both the humid and arid parts of China and for the various winter wheat production areas, whether they contribute to the best 10 % of national winter wheat production in that climate zone (in the sense of having the smallest WFs), to the next best 10 %, to the best 5 % after that, or to the worst 75 % (with WFs beyond the 25th percentile benchmark). Within the arid zone, consumptive WFs below the 25th percentile benchmark level were mostly located in Xinjiang province, with relatively high irrigation density (∼ 98 % of the harvested area). In the humid zone, consumptive WFs below the 25th percentile benchmark level were gathered in the southwest, where ET_0 is smaller than in other places (Fig. 4b). 3.6 Water saving potential by reducing WFs to selected benchmark levels The WF benchmarks for different climate zones differ much more significantly (26-31 %) than for different soils (10-12 %). WF benchmarks differ even less if we compare irrigated vs. rain-fed fields (8-10 %), warm vs. cold years (7-8 %), or wet vs. dry years (1-3 %). Therefore, when determining benchmark levels for the consumptive WF of a crop, it seems most useful to primarily distinguish between different climate zones, at least in the case of winter wheat in China. In this section, we analyse the potential water saving if actual consumptive WFs of winter wheat throughout China were reduced to the climate-specific benchmark levels set by the best 10 % of Chinese winter wheat production (1042 m 3 t −1 for arid areas and 776 m 3 t −1 for humid areas), the best 20 % of Chinese winter wheat production (1170 m 3 t −1 for arid areas and 819 m 3 t −1 for humid areas), or the best 25 % of Chinese winter wheat production (1224 m 3 t −1 for arid areas and 841 m 3 t −1 for humid areas). Taking the estimated actual consumptive WFs of winter wheat in 2005, an average climatic year, as calibrated by the provincial statistics on yield of winter wheat (NBSC, 2013), we find that consumptive WFs in 75 % of the planted grids in arid zones and in 96 % of the planted grids in humid zones are over the 25th percentile benchmarks. This is largely due to low actual vs. potential yields. Figure 7 shows differences between actual provincial yields of winter wheat and the simulated yield potentials from the current study (assuming no crop stresses except water stress in rain-fed areas). The largest yield gaps occur in the southern provinces in the humid zone. The largest yield gap was observed in Fujian province. South China has 81 % of national blue water resources (Jiang, 2015). However, the risk of water shortage is increasing in the wet south with the operation of the South-to-North Water Transfer Project and the increasing competition for water resources between different sectors. Therefore, reducing WFs down to benchmark levels is as important for the relatively wet south of China as it is for the drier north.
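The water saving reported in the next paragraph can in principle be reproduced with a calculation of the following form, sketched here under the assumption that per-grid actual WFs, production volumes and climate-zone labels are available; the benchmark numbers shown are the climate-specific 25th percentile levels quoted above, and all names are illustrative.
def potential_saving(wf_actual, production, zone, benchmarks):
    """Water saving (m3) if every grid above its zone benchmark were
    brought down to that benchmark level.

    wf_actual: actual consumptive WF per grid (m3 t-1).
    production: production per grid (t).
    zone: climate zone label per grid ('arid' or 'humid').
    benchmarks: dict mapping zone label to benchmark WF (m3 t-1).
    """
    saving = 0.0
    for wf, prod, z in zip(wf_actual, production, zone):
        excess = wf - benchmarks[z]
        if excess > 0:
            saving += excess * prod  # m3 per t times t gives m3
    return saving

# 25th percentile benchmarks from the text (m3 t-1).
benchmarks_25 = {"arid": 1224.0, "humid": 841.0}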
Table 7 shows the (green plus blue) water saving that would be achieved if actual consumptive WFs of winter wheat everywhere in China were reduced to the climate-differentiated WF benchmark levels set by the 10th, 20th and 25th percentiles of production, in an average year (2005). We find that if in both the arid and humid zones the actual consumptive WFs were reduced to the respective 25th percentile benchmark level, the water saving in an average year would be 53 % of the current water consumption at winter wheat fields in China, which is 201 billion m 3 yr −1 in absolute terms. We further find that the water saving potential in the arid zone is substantially higher than in the humid zone. Discussion The consumptive WF of a crop in m 3 t −1 most strongly depends on the crop yield in t ha −1 and much less on the evapotranspiration from the crop over the growing period in m 3 ha −1 (Tuninetti et al., 2015; Mekonnen and Hoekstra, 2011). The simulated consumptive WFs of winter wheat in China have been based on modelling under a hypothetical condition without effects of managerial factors on crop growth. For evaluating our simulations of crop growth, we compared the simulated average yields of winter wheat of Chinese provinces to the corresponding agro-climatically attainable yields at different agricultural input levels in the GAEZ database (FAO/IIASA, 2011) (Fig. 8). The GAEZ agro-climatically attainable yields account for different levels of yield constraints from four factors in addition to water stress: (i) pest, disease, and weed damage on plant growth, (ii) direct and indirect climatic damages on quality of produce, (iii) efficiency of farming operations, and (iv) frost hazards. Current simulated yields of irrigated winter wheat are closest to the agro-climatically attainable yields with intermediate input levels and the yields of rain-fed winter wheat are closest to the agro-climatically attainable yields with high input levels. The simulated national average yield in the current study (6.5 t ha −1 ) is 23 % higher than the attainable wheat yield for China in the year 2000 (5.3 t ha −1 ) estimated by Mueller et al. (2012). The study shows that climate is the primary factor to be considered when setting consumptive WF benchmarks. This finding is probably a little sensitive to the model used; the precise WF benchmark figures found per climate zone, however, will be more sensitive to the model used. Subsequent studies, comparing WF benchmark estimates per climate zone using different models, are necessary to quantify the uncertainty in the WF benchmarks presented in this study. Figure 8. Comparison between the simulated yield of winter wheat and the agro-climatically attainable yield according to FAO/IIASA (2011) at provincial level in China, averaged over the period 1961-1990. Further research could also explore whether crop varieties used should play a role when developing WF benchmarks, given the fact that some crop varieties may inherently be more productive than others. On the other hand, one could also consider that choosing a productive crop variety is part of the managerial choices. Since crop variety is not a given environmental condition but a choice, one could argue that accepting a less strict WF reference level for a less productive crop variety cannot be justified.
An important remaining research question is also how combinations of specific techniques and practices can actually lead to the WF reductions that will be necessary in different locations if the Chinese government were to adopt certain WF benchmarks as targets to achieve greater water productivity. Suppose, for example, that two WF benchmarks for winter wheat were adopted in China: 1224 m 3 t −1 for arid areas and 841 m 3 t −1 for humid areas. Although the simulations suggest that these levels are feasible throughout the arid and humid zone, respectively, whatever the type of soil, whether fields are rain-fed or irrigated, whether it is a cold or warm year, and whether it is a dry or wet year, in some places it will be harder and more would need to be done than in other places. We studied benchmarks for combined green and blue WFs and did not look at each colour separately. For rain-fed lands, the benchmark levels presented in this study are obviously green WF benchmarks. For irrigated lands, the presented benchmark levels for overall consumptive WFs would need further specification into green and blue. Further research would need to be done to translate a certain benchmark level for the overall consumptive WF of a crop into a specific blue WF benchmark level per specific location as a function of the amount of rain per location, recognizing that the blue ratio in the WF will need to be larger if less green water is available. Conclusions Based on the case of winter wheat in China we find that (i) benchmark levels for the consumptive WF, determined for individual years for the country as a whole, remain within a range of ±20 % around long-term mean levels over 1961-2008; (ii) the WF benchmarks for irrigated winter wheat are 8-10 % larger than those for rain-fed winter wheat; (iii) WF benchmarks for wet years are on average 1-3 % smaller than for dry years; (iv) WF benchmarks for warm years are on average 7-8 % smaller than for cold years; (v) WF benchmarks differ by about 10-12 % across different soil texture classes; and (vi) WF benchmarks for the humid zone are 26-31 % smaller than for the arid zone, which has relatively higher ET 0 in general and lower yields in rain-fed fields. Therefore, we conclude that when determining benchmark levels for the consumptive WF of a crop, it is useful to primarily distinguish between different climate zones. We estimated that when in both the arid and humid zones, the actual consumptive WFs are reduced to climate-specific benchmark levels set by the 25th percentile of production and the water saving in an average year would be 53 % of the current water consumption at winter wheat fields in China, with the greatest relative savings in the arid zone. Data availability Data used in this paper is available upon request to the corresponding author. Author contributions. Arjen Y. Hoekstra, La Zhuo, and Mesfin M. Mekonnen designed the study. La Zhuo carried it out. La Zhuo prepared the manuscript with contributions from all coauthors.
213192230
s2orc/train
v2
2020-03-12T10:20:09.437Z
2020-03-10T00:00:00.000Z
Severe community acquired adenovirus pneumonia in an immunocompetent host successfully treated with IV Cidofovir Adenovirus is a common cause of acute febrile respiratory infection in children and are generally self-limiting although pneumonia can occur in neonates and adults with compromised immunity. However, severe adenovirus pneumonia in healthy adults has been rarely described. Here, we report a case of severe community-acquired adenovirus pneumonia in a previously healthy patient successfully treated with intravenous Cidofovir. Case presentation A previously healthy 36-year-old Filipino male presented with fever, diarrhea and cough 7 days after a trip to the Philippines. He was treated with intramuscular Ceftriaxone and Clarithromycin by his family physician for 4 days without improvement. On day 7, he developed shortness of breath and was admitted to the hospital. He was febrile at 40.9 � C, normotensive but tachycardic at 100bpm, tachypneic at 25 breaths/min and saturating at 96% on facemask (FM) 5L/min. He was dehydrated with decreased air entry over the left lung. Infective markers C-Reactive Protein (CRP) and procalcitonin were markedly elevated at 245mg/dL and 5.3 μg/L respectively. There was leukopenia 2.1x10 9 /L with 9% atypical mononucleosis, hyponatremia 128 mmol/L and hypoalbuminemia 34 g/L. Platelet and glucose were normal whilst SGPT and SGOT were both elevated at 114 U/L and 353 U/L, [ Fig. 1]. Mycoplasma IgM, influenza A & B IF, dengue NS1 Ag, dengue serology and urine Streptococcus Ag and Legionella Ag were all negative. Melioidosis serology, Legionella antibody, Leptospira IgM and HIV were also negative. Stool was watery brown and had no leukocytes, ova, cysts or parasites. His arterial blood gas (ABG) on FM 5L/min was pH 7.59, PaCO 2 23.9 mmHg, PaO 2 108.5 mmHg, bicarbonate 23 mmol/L, BE 1.3 mmol/ L and oxygen saturation 99.1%. Chest x-ray (CXR) showed consolidation of the left upper lobe and the superior segment of the left lower lobe [Image 1A]. He was started on IV Moxifloxacin 400mg daily and IV Meropenem 1g 8 hourly. White cell count fell to 1.7x10^9/L the following day and Oseltamivir 75mg 12 hourly was added. He became more hypoxic requiring 100% oxygen in a non-rebreather mask and then non-invasive positive pressure ventilation the next day. He was eventually intubated on day 4 of hospitalisation. His CXR had progressed to consolidation of the right upper lobe and the whole of the left lung [Image 1B]. Respiratory virus multiplex PCR from the throat detected Adenovirus serotype 7 and IV Cidofovir 5mg/kg was given with 1L normal saline hydration before Cidofovir infusion and 1L hydration immediately thereafter to prevent renal toxicity. Probenecid 2 g was given 3 hours before, 1g at 2 and 8 hours after the Cidofovir infusion. IV hydrocortisone 100mg 6 hourly was added. Prone ventilation was employed during the first two days of mechanical ventilation. His respiratory support came down on the fourth day of mechanical ventilation. Fever stopped swinging and CXR showed less infiltrates over the left lower zone [Image 1C]. He was given a second dose of Cidofovir 8 days after the first dose and was successfully extubated the same day. Of note, Adenovirus PCR was positive in his blood (day 11) and stool (day 12). It remained detectable in the sputum on day 19 but finally cleared on day 24. 
As his fever persisted around 38 °C despite negative cultures from the blood, sputum, stool and CVP tip, he was empirically covered with IV Levofloxacin 750 mg daily and IV Tedizolid 200 mg daily [ Fig. 1]. He was weaned off intranasal oxygen and the fever lysed on days 23 and 27 of illness, respectively. CXR showed residual patchy shadowing only in the lower zones [Image 1D] and he was discharged on day 27 of illness, after 21 days in hospital. Outpatient review showed complete resolution of the CXR infiltrates and the blood tests normalised. Lung function, however, showed impaired diffusion but recovered to 81% predicted 4 months after the illness [ Table 1]. Discussion Adenovirus is a double-stranded DNA virus from the family Adenoviridae, with more than 50 serotypes. It was first isolated in 1950 from adenoid tissue-derived cell culture. This virus can survive for long periods outside the host and is also resistant to disinfectants and to gastric and biliary secretions [1]. It is transmitted through droplets, direct contact (i.e. conjunctiva, water, surfaces) and the faecal-oral route. The incubation period ranges from 4 to 8 days. The clinical presentations include acute respiratory disease, gastroenteritis (both present in our case), pharyngo-conjunctival fever, epidemic keratoconjunctivitis and acute hemorrhagic cystitis. Most infections are self-limited. However, adenovirus pneumonia in immunocompromised hosts usually leads to rapid deterioration with a high fatality rate. While rare, life-threatening and even fatal outcomes of adenovirus pneumonia have been reported in immunocompetent hosts [2][3][4]. Acute respiratory disease presentations are caused predominantly by serotypes 1, 2, 4, 5 and 6, and the virus is highly contagious [1]. It accounts for 80% of adenoviral infections in children below 4 years old [1] and 1-7% of adult respiratory tract infections. Treatment often requires only supportive care. A review conducted by Clark et al. (n = 21) [5] demonstrated that adenovirus pneumonia usually presents with fever (90%), cough (80%), dyspnoea (70%) and respiratory failure within hours to days, requiring mechanical ventilation in 67% of the cases. This was associated with a mortality rate of 24%. Common laboratory findings include lymphopenia, leukopenia, thrombocytopenia, elevated transaminases and occasionally leukocytosis with neutrophilia [5]. The radiological findings are usually widespread interstitial shadows, but there may be ground glass, pleural effusion as well as consolidation. Lobar consolidation, a pattern more suggestive of bacterial infection, was observed in one quarter of the cases [6,7]. Likewise, our case presented with consolidations. Histology can illustrate a necrotizing bronchitis or bronchiolitis picture which starts to resolve two weeks after the onset of the illness. Fibrosis is not common. In the past, diagnosis was made by culture and specific immunofluorescent staining, which were costly and time consuming. Since the advent of multiplex PCR assays, early detection has led to the option of early initiation of antiviral therapy. Kim et al. reported favourable outcomes with Cidofovir in 7 non-immunocompromised adults with severe adenovirus pneumonia when the drug was administered early in the course of respiratory failure [8]. Cunha et al. also reported a successful response to Cidofovir in a 22-year-old male [9]. Two other reports [14,15] also showed successful outcomes with Cidofovir.
Although the efficacy of antiviral therapy for severe adenovirus pneumonia has not been established [4], ECIL-4 guideline recommends the use of Cidofovir at 5mg/kg with probenecid in leukemic patients with adenovirus pneumonia which we used in our patient [12]. Unfortunately, outcomes were often poor when Cidofovir was given late in the course of illness [2,10,11]. It is also expensive, not always available and associated with renal and hematologic toxicity. However, Cidofovir may have a role in the early treatment of severe pneumonia due to adenovirus. Possible antiviral alternatives include Ribavirin which was given to a 39-year old Korean male on day 4 of hospitalisation with a favourable outcome [13]. Conclusion Firstly, severe viral pneumonia caused by adenovirus can occur in immunocompetent hosts and present with radiological findings of consolidation more typical of a bacterial etiology. Secondly, respiratory virus PCR assays which allow for the rapid and accurate diagnosis of viral etiology should be used whenever available as it can guide the implementation of infection control measures, avoid the unnecessary use of antibacterial antibiotics and allow the option of early antiviral therapy. Lastly, the early use of IV Cidofovir should be considered in severe adenovirus pneumonia. More work needs to be done to understand the relationship between who and when to use IV Cidofovir in this cohort of patients. Consent to publish Written informed consent was obtained from the patient for publication of this Case Report and any accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal.
11224340
s2orc/train
v2
2014-07-01T00:00:00.000Z
1992-02-23T00:00:00.000Z
Dividing and Conquering Long Sentences in a Translation System The time required for our translation system to handle a sentence of length l is a rapidly growing function of l. We describe here a method for analyzing a sentence into a series of pieces that can be translated sequentially. We show that for sentences with ten or fewer words, it is possible to decrease the translation time by 40% with almost no effect on translation accuracy. We argue that for longer sentences, the effect should be more dramatic. Introduction In a recent series of papers, Brown et al. introduce a new, statistical approach to machine translation based on the mathematical theory of communication through a noisy channel, and apply it to the problem of translating naturally occurring French sentences into English [1,2,3,4]. They develop a probabilistic model for the noisy channel and show how to estimate the parameters of their model from a large collection of pairs of aligned sentences. By treating a sentence in the source language (French) as a garbled version of the corresponding sentence in the target language (English), they recast the problem of translating a French sentence into English as one of finding that English sentence which is most likely to be present at the input to the noisy channel when the given French sentence is known to be present at its output. For a French sentence of any realistic length, the most probable English translation is one of a set of English sentences that, although finite, is nonetheless so large as to preclude an exhaustive search. (This work was supported, in part, by DARPA contract N00014-91-C-0135, administered by the Office of Naval Research.) Brown et al. employ a suboptimal search based on the stack algorithm used in speech recognition. Even so, as we see in Figure 1, the time required for their system to translate a sentence grows very rapidly with sentence length. As a result, they have focussed their attention on short sentences. The designatum of some French words is so specific that they can be reliably translated almost anywhere they occur without regard for the context in which they appear. For example, only the most contrived circumstances could require one to translate the French technétium into English as anything but technetium. Alas, this charming class of words is woefully small: for the great majority of words, phrases, and even sentences, the more we know of the context in which they appear, the more confidently and eloquently we are able to translate them. But the example provided by simultaneous translators shows that at the expense of eloquence it is possible to produce satisfactory translation segment by segment seriatim. In this paper, we describe a method for analyzing long sentences into smaller units that can be translated sequentially. Obviously any such analysis risks rupturing some organic whole within the sentence, thereby precipitating an erroneous translation. Thus, phrases like (pommes frites | French fries), (pommes de discorde | bones of contention), (pommes de terre | potatoes), and (pommes sauvages | crab apples), offer scant hope for subdivision. Even when the analysis avoids splitting a noun from an associated adjective or the opening word of an idiom from its conclusion, we cannot expect that breaking a sentence into pieces will improve translation. The gain that we can expect is in the speed of translation.
In general we must weigh this gain in translation speed against the loss in translation accuracy when deciding whether to divide a sentence at a particular point. Rifts Brown et al. [1] define an alignment between an English sentence and its French translation to be a diagram showing for each word in the English sentence those words in the French sentence to which it gives rise (see their Figure 3). The line joining an English word to one of its French dependents in such a diagram is called a connection. Given an alignment, we say that the position between two words in a French sentence is a rift provided none of the connections to words to the left of that position crosses any of the connections to words to the right and if, further, none of the words in the English sentence has connections to words on both the left and the right of the position. A set of rifts divides the sentence in which it occurs into a series of segments. These segments may, but need not, resemble grammatical phrases. If a French sentence contains a rift, it is clear that we can construct a translation of the complete sentence by concatenating a translation for the words to the right of the rift with a translation for the words to the left of the rift. Similarly, if a French sentence contains a number of rifts, then we can piece together a translation of the complete sentence from translations of the individual segments. Because of this, we assume that breaking a French sentence at a rift is less likely to cause a translation error than breaking it elsewhere. Let Pr(e, a|f) be the conditional probability of the English sentence e and the alignment a given the French sentence f = f1 f2 ... fM. For 1 ≤ i < M, let I(i; e, a, f) be 1 if there is a rift between fi and fi+1 when f is translated as e with alignment a, and zero otherwise. The probability that f has a rift between fi and fi+1 is given by p(r|i; f) ≡ Σ_{e,a} I(i; e, a, f) Pr(e, a|f). (1) Notice that p(r|i; f) depends on f, but not on any translation of it, and can therefore be determined solely from an analysis of f itself. The Data We have at our disposal a large collection of French sentences aligned with their English translations [2,4]. From this collection, we have extracted sentences comprising 27,217,234 potential rift locations as data from which to construct a model for estimating p(r|i; f). Of these locations, we determined 13,268,639 to be rifts and the remaining 13,948,592 not to be rifts. Thus, if we are asked whether a particular position is or is not a rift, but are given no information about the position, then our uncertainty as to the answer will be 0.9995 bits. We were surprised that this entropy should be so great. In the examples below, which we have chosen from our aligned data, the rifts are indicated by carets appearing between some of the words. 3. La^Société du crédit agricole^ fait savoir^ce qui suit: The exact positions of the rifts in these sentences depend on the English translation with which they are aligned. For the first sentence above, the Hansard English is The answer to part two is yes. If, instead, it had been For part two, yes is the answer, then the only rift in the sentence would have appeared immediately before the final punctuation.
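The rift test just described is purely combinatorial, so it can be read directly off a single alignment. The sketch below is a minimal illustration rather than the authors' code; the alignment representation (a mapping from each English position to the set of French positions it generates) is an assumption made only for the example.

```python
def rift_positions(alignment, M):
    """Return the gaps i (1 <= i < M) between f_i and f_{i+1} that are rifts.

    `alignment` maps each English word position (1-based) to the set of
    French word positions (1-based) to which it gives rise.  A gap is a
    rift when no English word has French dependents on both sides of it
    and no connection on the left crosses a connection on the right.
    """
    rifts = []
    for i in range(1, M):
        left = [e for e, fs in alignment.items() if any(f <= i for f in fs)]
        right = [e for e, fs in alignment.items() if any(f > i for f in fs)]
        straddles = set(left) & set(right)               # an English word tied to both sides
        crosses = bool(left and right and max(left) > min(right))
        if not straddles and not crosses:
            rifts.append(i)
    return rifts

# Toy, hypothetical alignment of a three-word sentence pair, word k aligned to word k.
print(rift_positions({1: {1}, 2: {2}, 3: {3}}, M=3))      # -> [1, 2]
```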
The Decision Tree Brown et al. [3] describe a method for assigning sense labels to words in French sentences. Their idea is this. Given a French word f, find a series of yes-no questions about the context in which it occurs so that knowing the answers to these questions reduces the entropy of the translation of f. They assume that the sense of f can be determined from an examination of the French words in the vicinity of f. They refer to these words as informants and limit their search to questions of the form Is some particular informant in a particular subset of the French vocabulary? The set of possible answers to these questions can be displayed as a tree, the leaves of which they take to correspond to the senses of f. We have adapted this technique to construct a decision tree for estimating p(r|i, f). Changing any of the words in f may affect p(r|i, f), but we consider only its dependence on f_{i-1} through f_{i+2}, the four words closest to the location of the potential rift, and on the parts of speech of these words. We treat each of these eight items as a candidate informant. For each of the 27,217,234 training locations, we created a record of the form v1 v2 v3 v4 v5 v6 v7 v8 b, where v_s is the value of the informant at site s and b is 1 or 0 according as the location is or is not a rift. Using 20,000,000 of these records as data, we have constructed a binary decision tree with a total of 245 leaves. Each of the 244 internal nodes of this tree has associated with it one of the eight informant sites, a subset of the informant vocabulary for that site, a left son, and a right son. For node n, we represent this information by the quadruple ⟨s(n), S(n), l(n), r(n)⟩. Given any location in a French sentence, we construct v1 v2 v3 v4 v5 v6 v7 v8 and assign the location to a leaf as follows. 1. Set a to the root node. 2. If a is a leaf, then assign the location to a and stop. 3. If v_{s(a)} ∈ S(a), then set a to l(a); otherwise set a to r(a). 4. Go to step 2. We call this process pouring the data down the tree. We call the series of values that a takes the path of the data down the tree. Each path begins at the root node and ends at a leaf node. We used this algorithm to pour our 27,217,234 training locations down the tree. We estimate p(r|i, f) at a leaf to be the fraction of these training locations at the leaf for which b = 1. In a similar manner, we can estimate p(r|i, f) at each of the internal nodes of the tree. We write p_c(n) for the estimate of p(r|i, f) obtained in this way at node n. The average entropy of b at the leaves is 0.7669 bits. Thus, by using the decision tree, we can reduce the entropy of b for training data by 0.2326 bits. To warrant our tree against idiosyncrasies in the training data, we used an additional 528,509 locations as data for smoothing the distributions at the leaves. We obtain a smooth estimate, p(n), of p(r|i, f) at each node as follows. At the root, we take p(n) to equal p_c(n). At all other nodes, we define p(n) = λ(b_n) p_c(n) + (1 - λ(b_n)) p(the parent of n), (2) where b_n is one of fifty buckets associated with a node according to the count of training locations at the node. Bucket 1 is for counts of 0 and 1, bucket 50 is for counts equal to or greater than 1,000,000, and for 1 < i < 50, bucket i is for counts greater than or equal to x_i - σ√x_i and less than x_i + σ√x_i, with x_2 - σ√x_2 = 2, x_49 + σ√x_49 = 1,000,000, and x_i + σ√x_i = x_{i+1} - σ√x_{i+1} for 1 < i < 49. Here, x_2 = 438, and σ = 21.
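To make the tree-based estimate concrete, here is a minimal sketch of pouring a record down such a tree and of the leaf-to-root smoothing of equation (2). It is our own illustration, not the authors' implementation: the node layout is assumed, and the λ schedule shown is a placeholder for the fifty-bucket schedule described above.

```python
def pour(root, v):
    """Pour an informant record v (a tuple of eight values, indexed from 0 here)
    down the tree; return the path of nodes from the root to a leaf."""
    path, node = [root], root
    while "site" in node:                      # internal node: asks a question
        node = node["left"] if v[node["site"]] in node["subset"] else node["right"]
        path.append(node)
    return path

def smoothed_p(path, lam):
    """Equation (2): p(n) = lam(n) * p_c(n) + (1 - lam(n)) * p(parent of n),
    with p(root) = p_c(root).  `lam` maps a node to its bucketed weight."""
    p = path[0]["p_c"]
    for node in path[1:]:
        w = lam(node)
        p = w * node["p_c"] + (1.0 - w) * p
    return p

# Hypothetical two-question tree over informant sites 0 (f_i) and 1 (f_{i+1}).
leaf_a = {"p_c": 0.82, "count": 5_000}
leaf_b = {"p_c": 0.31, "count": 120}
leaf_c = {"p_c": 0.55, "count": 40_000}
tree = {"site": 0, "subset": {".", ",", ":"}, "p_c": 0.49, "count": 1_000_000,
        "left": {"site": 1, "subset": {"le", "la", "les"}, "p_c": 0.70, "count": 9_000,
                 "left": leaf_a, "right": leaf_b},
        "right": leaf_c}

lam = lambda n: min(1.0, n["count"] / 10_000)   # placeholder for the bucketed schedule
print(smoothed_p(pour(tree, (",", "le", "", "", "", "", "", "")), lam))
```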
Segmenting Let t(l) be the expected time required by our system to translate a sequence of l French words. We can estimate t(l) for small values of l by using our system to translate a number of sentences of length l. If we break f into m+1 pieces by splitting it between f_{i1} and f_{i1+1}, between f_{i2} and f_{i2+1}, and so on, finishing with a split between f_{im} and f_{im+1}, where 1 ≤ i1 < i2 < ··· < im < M, then the expected time to translate all of the pieces is t(i1) + t(i2 - i1) + ··· + t(im - im-1) + t(M - im). Translation accuracy will be largely unaffected exactly when each split falls on a rift. Assuming that rifts occur independently of one another, the probability of this event is the product p(r|i1, f) p(r|i2, f) ··· p(r|im, f). We define the utility, S_α(i, f), of a split i = (i1, i2, ..., im) for f as a combination of these two quantities, weighted by a parameter α that trades accuracy against translation time: when α is near 1, we favor accuracy (and, hence, few segments) at the expense of translation time; when α is near zero, we favor translation time (and, hence, many segments) at the expense of accuracy. Given a French sentence f and the decision tree mentioned above for approximating p(r|i, f), it is straightforward using dynamic programming to find the split that maximizes S_α. If we approximate t(l) to be zero for l less than some threshold and infinite for l equal to or greater than that threshold, then we can discard α. Our utility becomes simply S(i, f) = Σ_{k=1..m} log p(r|ik, f), provided all of the segments are shorter than the threshold. If the length of any segment is equal to or greater than the threshold, then the utility is -∞.
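For the thresholded form of the utility, the dynamic program is short. The sketch below is an illustrative reconstruction, not the authors' decoder code: it assumes the smoothed rift probabilities are already available as logp[i] = log p(r|i, f) for the interior positions, and it forbids any segment of length equal to or greater than the threshold.

```python
import math

def best_split(logp, M, threshold):
    """Maximize the sum of logp[i] over chosen split points i, subject to every
    resulting segment being shorter than `threshold` words.
    `logp[i]` holds log p(r|i, f) for 1 <= i < M (index 0 is unused)."""
    NEG = float("-inf")
    best = [NEG] * (M + 1)          # best[j]: best utility of a segmentation of f_1 .. f_j
    back = [None] * (M + 1)         # previous split point on the best path
    best[0] = 0.0
    for j in range(1, M + 1):
        gain = 0.0 if j == M else logp[j]      # the sentence end is not a split point
        for i in range(max(0, j - threshold + 1), j):
            if best[i] != NEG and best[i] + gain > best[j]:
                best[j] = best[i] + gain
                back[j] = i
    splits, j = [], back[M]
    while j:                         # walk the back-pointers, dropping the boundary at 0
        splits.append(j)
        j = back[j]
    return sorted(splits), best[M]

# Ten-word toy sentence with made-up rift probabilities and a threshold of 7.
logp = [None] + [math.log(p) for p in (0.2, 0.9, 0.1, 0.8, 0.3, 0.95, 0.4, 0.2, 0.6)]
print(best_split(logp, M=10, threshold=7))     # -> ([6], log 0.95)
```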
Decoding In the absence of segmentation, we employ an analysis-transfer-synthesis paradigm in our decoder as described in detail by Brown et al. [5]. We have insinuated the segmenter into the system between the analysis and the transfer phases of our processing. The analysis operation, therefore, is unaffected by the presence of the segmenter. We have also modified the transfer portion of the decoder so as to investigate only those translations that are consistent with the segmented input, but have otherwise left it alone. As a result, we get the benefit of the English language model across segment boundaries, but save time by not considering the great number of translations that are not consistent with the segmented input. Results To test the usefulness of segmenting, we decoded 400 short sentences four different ways. We compiled the results in Table 1, where: Tree is a shorthand for segmentation using the tree described above with a threshold of 7; Every 5 is a shorthand for segments made regularly after every five words; Every 4 is a shorthand for segments made regularly after every four words; and None is a shorthand for using no segmentation at all. We see from the first line of the table that the decoder performed somewhat better with segmentation as determined by the decision tree. If we carried out an exhaustive search, this could not happen, but because our search is suboptimal it is possible for the various shortcuts that we have taken to interact in such a way as to make the result better with segmentation than without. The result with the decision tree is clearly superior to the results obtained with either of the rigid segmentation schemes. In Table 2, we show the decoding time in minutes for the four decoders. Using the segmentation tree, the decoder is about 41% faster than without it. We use a trigram language model to provide the a priori probability for English sentences. This means that the translation of one segment may depend on the result of the immediately preceding segment, but should not be much affected by the translation of any earlier segment provided that segments average more than two words in length. Because of this, we expect translation time with the segmenter to grow approximately linearly with sentence length, while translation time without the segmenter grows much more rapidly. Therefore, we anticipate that the benefit of segmenting to decoding speed will be greater for longer sentences.
230800690
s2orc/train
v2
2020-12-24T09:03:24.206Z
2021-01-01T00:00:00.000Z
Closed versus open reduction of facial fractures in children and adolescents: A systematic review and meta-analysis Background Treatment of facial fractures in children and adolescents has always been a challenge for oral surgeons. The choice of treatment type must take into account several factors. This systematic review aimed to evaluate closed versus open reduction for pediatric facial fractures. Material and Methods A systematic review of the literature was conducted in three databases (PubMed/MEDLINE, Embase and The Cochrane Library) in accordance with the PRISMA statement. The PICO question was: Is conservative treatment more appropriate than surgical treatment for reducing facial fractures in children and adolescents? The full papers of 41 references were analyzed in detail. Eleven papers were included in this systematic review: one prospective study and ten retrospective studies. All studies evaluated the complication rate. Results A total of 73 (7.68%) of the 950 patients experienced complications. Among these patients, 24 (3.85%) had been treated with conservative treatment and 49 (15.03%) with surgical treatment. The fixed-effects model revealed a lower complication rate with conservative treatment than with surgical treatment (P < 0.00001; RR: 0.18; 95% CI: 0.11–0.28). Heterogeneity was low for the complication rate outcome (X²: 5.64; P = 0.69; I²: 0%). Conclusions The present findings show that conservative treatment is more commonly performed for pediatric facial fractures and complications occur more frequently with surgical treatment. Therefore, surgeons must evaluate all variables involved in choosing the most appropriate treatment method to ensure greater benefits to the patient with fewer complications. Key words: Closed fracture reduction, open fracture reduction, pediatrics, treatment failure. Introduction Facial fractures in children are relatively rare and evaluated separately due to their particular diagnostic and treatment aspects. In children, bones have greater elasticity and there is less pneumatization of the sinuses, greater thickness of the surrounding adipose tissue and good stability of the maxilla and mandible due to the presence of unerupted teeth. Due to these characteristics, considerable energy is required to cause a fracture in developing bones (1). The prevalence of facial fractures in children and adolescents is approximately 10%. The majority of fractures occur past the age of five years, with peaks of incidence at school age and in adolescence, when the characteristics of craniofacial traumas are similar to those found in adults (2). Social, cultural and environmental factors are responsible for altering the epidemiology of craniofacial trauma. The incidence of facial fractures in the pediatric population is higher among boys at almost all ages, with a ratio of up to 3:1 in comparison to girls (3,4). Divergent opinions are found in the literature regarding the treatment of facial fractures in pediatric patients, but there is a consensus that changes in growth should be prevented and more conservative treatment (non-surgical) is indicated, whenever possible (5). In many cases, however, it is necessary to perform open fracture reduction, for which an absorbable fixation system or titanium miniplates are commonly used (6). Within this context, the aim of the present study was to perform a systematic review of the literature to evaluate closed versus open reduction for pediatric facial fractures.
Material and Methods -Registry protocol This systematic review was structured following the PRISMA checklist (7) and was performed in accordance with models proposed in the literature (8,9). The methods used in this systematic review are registered with the international prospective register of systematic reviews (PROSPERO: CRD42018094847). Search strategy and information sources Two independent reviewers (CAAL and CCM) performed the article selection process using pre-established eligibility criteria. Studies were pre-selected on the basis of the titles and abstracts and assessed according to the inclusion and exclusion criteria. The reviewers analyzed and discussed the articles until a consensus was reached. Any disagreements were resolved through discussions with a third reviewer (BCEV). The following databases were searched for the identification of relevant articles: PubMed (http://www.ncbi.nlm.nih.gov/pubmed), Web of Science (http://appswebofknowledge.ez27.periodicos.capes.gov.br/WOS_GeneralSearch_input.do?product=WOS&search_mode=GeneralSearch&SID=6AgXsKu6D9Ih-bLBoyku&preferencesSaved=) and The Cochrane Library (http://onlinelibrary.wiley.com/cochranelibrary/search/). The following keywords were used: ((((pediatric OR children OR adolescents OR child OR paediatric)) AND (facial trauma OR facial fracture OR maxillofacial fracture OR maxillofacial trauma OR mandibular fracture OR mandibular trauma OR midface)) AND (Open reduction OR Miniplate OR screw devices OR Titanium plate OR Resorbable plate OR internal fixation OR ORIF OR osteosynthesis)) AND (Conservative OR closed reduction OR immobilization OR Arch bar OR Close observation OR non-invasive treatment OR IMF). -Selection criteria The inclusion criteria for the initial selection were publications in English with no restriction imposed on the date of publication, studies involving human subjects, specific studies on treatment for facial fractures in children and adolescents and descriptions of the number of patients treated, proposed treatment (surgical access and osteosynthesis materials), postoperative characteristics, complications, follow up and conclusions. After the pre-selected articles had been submitted to full-text analysis, the criteria listed in Table 1 were used for the final selection of papers for inclusion in the present review. The selection criteria were established by the authors prior to the onset of the study. An inter-examiner test (kappa) was performed to determine the level of agreement regarding the pre-selection of studies based on the titles and abstracts. The following kappa values were found for the different databases: PubMed/MEDLINE: 0.83; Embase: 0.80; Cochrane: 1.0. -Criteria for the selection of studies The first phase of the article selection process was analysis of the titles and abstracts of the papers retrieved during the searches of the databases. Articles having passed this first step were submitted to full-text analysis based on the eligibility criteria. The PICO question recommended in the PRISMA statement was determined as follows: (P) Population: children and adolescents with facial fracture; (I) Intervention: open treatment (surgery); (C) Comparison: closed (conservative) treatment; (O) Outcome: complications (inflammatory process; facial growth).
The following was the guiding question: Is conservative treatment more appropriate than surgical treatment for reducing facial fractures in children and adolescents? -Exclusion criteria The following were the exclusion criteria: in vitro studies, animal studies, reviews, case reports, case series, studies in which complications are not reported, oral communications, posters and studies that do not report the type of treatment performed to reduce fractures. -Analysis of methodological quality The methodological quality of the studies was assessed independently by the same two investigators. The materials and methods, results and discussion sections were analyzed using the Cochrane Collaboration tool for assessing the risk of bias. The quality of the selected studies was evaluated based on the PRISMA criteria, using the 27 questions established by Moher et al. (7). Therefore, the studies were separated into categories of randomized clinical trials and prospective studies. -Meta-analysis The Review Manager 5 (Cochrane Group) software program was used for the meta-analysis, which was based on the Mantel-Haenszel (MH) method. The dichotomous outcome (complication rate) was analyzed using risk ratios (RR) and respective 95% confidence intervals (CI). Data were considered significant when P < 0.05. In cases of statistically significant heterogeneity (P < 0.10), a random-effects model was used, whereas a fixed-effects model was used in cases of a non-significant difference (10). A funnel plot (plot of effect size versus standard error) was created to evaluate the occurrence of publication bias. -Search results The electronic searches were performed in April 2018 and yielded 307 references in PubMed, 80 in Embase, and 20 in The Cochrane Library. No additional studies were identified in the manual searches. After the removal of duplicates, 391 potentially relevant references were assessed, 41 of which were submitted to full-text analysis. The application of the eligibility criteria led to the exclusion of thirty articles. Thus, eleven articles were found to be clinically or technically relevant to the subject of the study and were included in this systematic review. The QUOROM flow diagram giving an overview of the selection process is presented in Fig. 1. -Types of studies Among the eleven papers included in this systematic review, ten were retrospective studies (11)(12)(13)(14)(15)(16)(17)(18)(19)(20) and one was a prospective study (21). Table 2 displays the characteristics of these studies, which were divided by region of the affected face: mandibular condyle (11,13,20), mandible (12,16,17,19,21) and all regions of the face (14,15,18). The Cochrane Collaboration tool for assessing the risk of bias could not be applied due to the type of studies included in this systematic review. Consequently, the Newcastle-Ottawa scale (NOS) for assessing the quality of non-randomized studies was used (Table 3). -Meta-analysis All studies evaluated the complication rate. A total of 73 (7.68%) of the 950 patients had complications. Among these patients, 24 (3.85%) had been treated with conservative treatment and 49 (15.03%) had been treated with surgical treatment. Two studies reported no complications in either group evaluated. The fixed-effects model revealed a significantly lower complication rate with conservative treatment compared to surgical treatment (P < 0.00001; RR: 0.18; 95% CI: 0.11 to 0.28). Heterogeneity was considered low for the complication rate outcome (X²: 5.64; P = 0.69; I²: 0%) (Fig.
2). The funnel plot demonstrated symmetry in the studies with regard to the complication rate, suggesting an absence of publication bias (Fig. 3). Discussion The issue of facial fractures in children and adolescents is important. However, the difficulty in managing these patients imposes limitations on the types of study conducted to address this subject. Thus, there is an absence of clinical trials due to the required sample size and the variety of types of treatment (conservative or non-surgical and surgical) (6,22). In the studies analyzed, mean age was 10.2 years, demonstrating a greater occurrence of facial trauma in adolescence, as reported previously (23,24). This type of trauma generally occurs as a child becomes more independent from family life and has greater contact with contact sports, urban violence or physical aggression at school or on the street (2,25,26). In nearly all studies, the prevalence of facial trauma was higher in the male sex (11)(12)(13)(14)(15)(16)(17)(18)(19)21). This trend is also observed in studies involving adults, as males in all age groups are more exposed to violence and accidents, such as traffic accidents at an older age and domestic violence at a younger age (24,27). The present study divided the fractures of the face by affected bone to facilitate the analysis and understanding, so that it is possible to show the differences between them, regarding the type of treatment and the complications, for example. In relation to the etiology of trauma, traffic accidents were the most prevalent (13,16,19,21). While this cause has declined due to laws requiring seat belts and car seats for children, such precautions are often neglected, leaving children unprotected and exposed in the event of an accident (28,29). The literature reports that the mandible is the most affected in children and adolescents (30-31). In the present review, eight studies (11)(12)(13)16,17,(19)(20)(21) reported cases of mandibular fracture, which may be explained precisely by this high prevalence rate. Moreover, the treatment of this type of fracture poses a challenge in both children and adults, with different forms of conservative and surgical treatment proposed, depending on the affected region of the mandible (32,33). Regarding the type of treatment, conservative methods were more commonly employed, regardless of age, although it has been reported that conservative treatment is generally used for younger children, depending on the energy and location of the trauma (34,35). Open treatment is generally performed with rigid internal fixation, especially titanium plates (36,37). Among the studies analyzed, the most prevalent conservative treatment was intermaxillary fixation for a period of two weeks, which is in agreement with data reported previously (11,15). However, other forms of conservative treatment were also used, such as intraocclusal block (12,13,19), kinesiotherapy (11,21), splint (16) and orientation (14,17). However, Neff et al. (38) reported that surgical treatment for facial fractures in children has been increasingly more frequent in recent years, especially as children get older. The difference between treatments with regard to complications was significant, with a lower prevalence found for conservative treatment (12)(13)(14)(15)(16)(17)19). This finding was expected, as the possibility of complications in open treatments is inherently greater due to the use of fixation materials and the risk of infection or nerve injury (6). 
The complications following conservative treatment were generally related to small asymmetries or deviations, which are expected in certain types of trauma (11,19). In cases of conservative treatment for condylar fractures in children, complications such as temporomandibular disorder and ankylosis of the temporomandibular joint (39) may be observed years after the fracture. However, as the follow up of these patients is limited, many of these complications are not reported. With surgical treatment for this type of fracture, the complications are generally related to nerve function, especially the temporal branches, zygomatic branches of the facial nerve and the auriculotemporal nerve (37). The fixed-effects model was applied to nine articles, since two reported no complications, and revealed a significantly lower rate of complications with conservative treatment compared to surgical treatment. Moreover, the funnel plot demonstrated symmetry among the studies regarding the complication rate, indicating an absence of publication bias. This systematic review included studies that evaluated the treatment of facial fractures in children and adolescents, but not all the studies were used in the meta-analysis due to methodological heterogeneity, which was mainly related to the complications resulting from treatment, thereby limiting the information available on these outcomes. The present findings show that conservative treatment is more commonly performed for pediatric facial fractures and, as demonstrated by the meta-analysis, leads to a significantly lower occurrence of complications when compared to surgical treatment. The most common forms of conservative treatment are intermaxillary fixation, intraocclusal block, kinesiotherapy, splint and orientation only. The present findings should be cautiously interpreted. All included studies were retrospective or prospective observational studies, which reduces the level of evidence because of the possible presence of uncontrolled confounding factors. Variables such as age cannot be isolated: younger children (up to about 10 years), for example, usually undergo non-surgical treatment, and the same applies to certain affected regions, such as the mandible (the region most frequently reported in this review), which is often managed non-surgically, for instance with intermaxillary fixation. Despite the difficulty of working with these patients, further studies (preferably RCTs) with longer follow-up periods are recommended to investigate the most appropriate treatment for reducing facial fractures in children and adolescents.
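For readers who want to reproduce this kind of pooling, the sketch below shows a Mantel-Haenszel fixed-effect risk ratio with a Greenland-Robins confidence interval, comparable to what RevMan reports for dichotomous outcomes. It is a generic illustration only: the 2x2 tables shown are invented and are not the data of the eleven included studies.

```python
import math

def mh_risk_ratio(tables, z=1.96):
    """Mantel-Haenszel fixed-effect pooled risk ratio for 2x2 tables.
    Each table is (a, n1, c, n2): complications/total in the conservative
    arm and complications/total in the surgical arm.  Returns (RR, lower,
    upper) using a Greenland-Robins standard error for log(RR)."""
    R = S = P = 0.0
    for a, n1, c, n2 in tables:
        N = n1 + n2
        R += a * n2 / N
        S += c * n1 / N
        P += (n1 * n2 * (a + c) - a * c * N) / N ** 2   # Greenland-Robins component
    rr = R / S
    se = math.sqrt(P / (R * S))
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Invented example tables, NOT the review's data.
tables = [(3, 120, 8, 45), (2, 95, 6, 30), (4, 160, 10, 55)]
print(mh_risk_ratio(tables))
```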
215526610
s2orc/train
v2
2019-05-01T13:07:59.657Z
2013-03-15T00:00:00.000Z
Marketing and semiotic approach on communication. Consequences on knowledge of target-audiences Modern marketing puts the consumer and not the manufacturer in the center, the essence of the marketing approach being the conception, the projection and the making of the product, starting from the consumer towards the manufacturer; this resulting in the fact that the product’s marketing approach seems strikingly similar to the semiotic approach of the message. In the semiotic approach, the message is a construction of signs, which, by interacting with the receiver, produces the meaning. The transmitter (the message transmitter) becomes less important. The focus is centered to the „text" and the way this is „read", the sense being born when the „reader" negotiates the „text". The negotiation takes place when the „reader" filtrates the message through the sieve of his cultural loading. A „target public" is a group which is specific to a certain Cultural Loading, a loading which deals with linguistic, logical, psychological and symbolic structures, which get out to meet the message and „negotiates" with the structures similar to it. When we are thinking in terms of the semiotic approach, we are handling the cultural determinism of communication, using the concepts of Kuhn and Gonseth (paradigm and referential). They open a new path in the market research, in the market segmentation and knowledge of the „target audiences". The marketing approach In reply to the question "What marketing is?", Kotler and Armstrong wrote: "More than any other company function, marketing deals with clients. Creating value and satisfaction for the client represents in itself the essence of modern marketing thinking and practice" [1]. However, this does not mean that the only purpose of marketing is client satisfaction, regardless of the manufacturer's expenses. In the same moment, Kotler and Armstrong offer the simplest definition of marketing: "marketing is the delivery of client satisfaction at a profit." [1]. Therefore, modern marketing places the consumer in the centre of attention, and not the producer, aiming at producing and selling what the consumer wants, when and where he wants it, at a price he is willing to pay. But marketing is not only about "explaining and selling"; sale and advertising is only the tip of the iceberg. Unlike sale, which only begins after the product is manufactured, "marketing begins long before the company has a product" [2]. In Principles of Marketing, Kotler and Armstrong define marketing as it follows: "a social and entrepreneurial process, through which individuals and groups obtain the thing they need and they desire, by creating and exchanging products and value with other groups and people" [2]. It is a social process -an extremely complex process, and sale is, like advertising, only one of its many functions. This is the essence of the marketing approach: conceiving, designing and making the product, starting from the consumer to the manufacturer, which is the exact reverse order compared with sales, which take place from the manufacturer to the consumer. In fact, the starting point is the knowledge (as exact as possible) of needs, desires and demands existing in certain groups ("market segments"), and for building the "product's personality" even knowledge of phantasms or false needs haunting the individuals' conscious or subconscious is a must. 
In (1967), Kotler was keen on mentioning that it is not the marketers who create false needs, which "are already there before the marketers appear"; at most, they influence some desires that arise from the existing desires: "Marketers could promote the idea that a Mercedes would satisfy a person's need for social status, but they are not the ones creating the need for social status" [3]. Marketing starts with the idea of identifying some needs and desires and ends up satisfying them, thus polarizing signals that the market generates, it chooses the targets to be reached, it studies the consumer's behavior and develops strategies for: production, pricing, distribution and promotion, also using methods from psychology, sociology, anthropology etc., developing at the same time its own information system. In the marketing approach, a decisive importance is held by the company's external environment, which is formed by the microenvironment and the macro-environment. The elements making up the macro-environment are the natural, the demographic, the economic and sociocultural (religion, ethnicity, organizations, etc.) environments. When we speak about the sociocultural environment, we firstly refer to the main cultural values of society, which are found in the people's conceptions: image of the self (some search personal pleasures, escaping daily routine, whereas others accomplish themselves through religion, career, etc.); image of the others (the switch to an "altruistic" society, through charity, social assistance, etc); image of organizations (companies, parties, unions, state institutions, etc.); image of society (patriots defend it, reformists want to change it, the Eastern people want the living standard of the West); image of nature (some live in harmony with nature, others feel dominated and others try to master nature); image of the universe (in some areas, religious practices still exist, in others there is a permanent regression due to the search for immediate satisfactions). Taking into account all the above, we may conclude that the marketing approach is strikingly similar with the semiotic approach on communication. The similarity is so striking that for the one who perceives it, it is hard not to search for its profound significance, fertile at a theoretical level. The semiotic approach on communication In a classic paper, Introduction to Communication Sciences, John Fiske showed that in the semiotic approach the message is a construction of signs which, by interacting with the receptor, generates meaning. The focus is not so much on communication as process, but rather on communication as the generator of meaning. The sender (message transmitter) loses his importance. The focus is directed towards to the "text" and the way it is "read". "The reading" is the process of discovering the meaning that emerges when the "reader" interacts or negotiates with the "text". The negotiation takes place when the "reader" filters the message through the strainer of the cultural pattern, in terms of signs and codes that make up the message. The more we share the same codes and the same sign system, the closer the two significances attributed to the message [4]. Therefore, the message does not occur prior to the communication process, which exists independently from the Sender-Receiver interaction, sent by the sender to the receiver, but it is an element in a structured relationship which includes, among other elements, the external reality as well as the producer/reader. 
Producing and reading the text are seen as parallel (if not identical); within these processes, the relationship is structured in such a way that they occupy the same place. The concomitance between communication and meaning generation, which we consider to be the essence of the semiotic approach on communication, is of paramount importance for efficient communication. Who does not think of the "Sender" and "Receiver" as co-authors of the message cannot have a professional career in Public Relations. What is the "target-audience" if not a group who has a certain cultural loading? The cultural background or the cultural loading consists of linguistic, logical, psychological and symbolic structures which greet the message and "negotiate" with its similar structures. Semiotics assess communication as generation of significance through messagesgeneration accomplished by either the message codifier or by the message de-codifier. Significance is not a static, absolute concept, clearly delimited in the message. It is an active process; in order to characterize this process, semioticians use verbs such as to create, to generate or to negotiate. Negotiation is perhaps the most useful of them because it implies a "I-give-something -yougive-something / I-make-a-compromise -you-makea-compromise" "meet-you-half-way" between the person and the message. Following the "negotiation" process, Meaning emerges, that is the meaning of the message -in fact, it is the message itself that emerges, as there is no message without meaning (the same way that a sign without meaning does not function as a sign). When the receiver does not have all the necessary structures for the reading, he "sees" but does not know what he sees (the socalled state of confusion/perplexity); when the receiver does not have any of these structures, the message directed at him "is lost in the cosmic obscurity". That is why, in the Public Relations profession, the engineering of effective and efficient communication is mandatory in the research phase, in order to get to know the cultural background of the target-audience. This is how we can graphically represent the generation of significance (of the birth of meaning) after the negotiation process between the message structure and internal structures of the subject: Consequences of target-audiences on knowledge When we think in the terms of semiotic approach, which highlights constant, biunique interactions between the message "producer" and the reference system, between him and the "reader", we deal with the cultural determinism of communication, using the concepts of Kuhn and Gonseth (paradigm and referential respectively). The connection between inter-individual and cross-cultural communication becomes obvious the moment we understand culture in the paradigm of cultural anthropology -for example, as E.B. Tylor, T. Parsons or Chombart de Lauwe define it. In the introductory study of the volume Images de la culture, called "Systemes de valeurs et aspirations culturelles", Paul-Henry Chombart de Lauwe classified the approaches on culture as it follows: 1) culture as development of the individual within society, 2) cultures belonging to societies or to private social environments and 3) the issue of developing an universal culture. It is obvious that out of the three approaches, the only one which does not involve a prior assessment and which does not necessarily lead to a hierarchy of cultures (societies, groups or individuals) is the second one. 
It will also be the privileged referential of this paper, as it best suits its objectives. Stressing the role of infrastructure generating aspirations and value systems, Chombart de Lauwe considers that "a culture is marked by a series of patterns, guiding images, representations to which all members of a society relate to in their behaviors, work, role and social relationships". He draws attention to the equal importance that techniques, space organization, production and work or consumption have. The cultural paradigm The concept of "cultural paradigm" has been used increasingly more over the last four decades, both in social philosophy, and in anthropology, psychology and sociology [5]. It entered these fields in the form of "concept translation", borrowed from the philosophy of science, where it was imposed by the American philosopher Thomas S. Kuhn. He was the one to realize that theories on the nature of science and the purpose of research in natural sciences do not concord with the scientific practice, as it ensues from the history of science. In practice, he says, the behavior of scientists deviates from the canons which define scientificity and even rationality (canon which we encounter both in science philosophy and in current mentality). Positivists, and even K. Popper (adversary to logic empirics), considered that science differs from speculation by testing -either as a confirmation of the theory (Carnap), or as an invalidation ("falsification") of testing (Popper). For them, the central concept in characterizing the nature and the dynamics of science is that of "scientific theory" and the differentiation criterion between science/ non-science is testability. For Kuhn, the core concept is the one of paradigm and the criterion is problem solving. Paradigms are patterns of scientific practice that can be encountered in classical scientific works and especially in handbooks and treaties; they are at the bottom of a disciplinary group education (physicians, chemists etc.). Based on these, the one who educates himself learns to formulate and to solve new problems. Paradigms are therefore "exemplary scientific accomplishments which, for a period of time, offer model problems and solutions to a community of practicians" [6]. Unlike the knowledge contained in the abstract assertions of theory and in the general methodological rules, knowledge in paradigms is a tacit knowledge. Paradigms guide the members of the scientific group towards solving new problems, without them being aware of the paradigm every step of the way. They apply it -sometimes even creatively -but without being able to speak about it in general statements. Their collective character results from the quasi-conscious character of paradigms. Although the birth of a paradigm is usually linked to the name of a great thinker (Ptolemy, Newton, Franklin or Einstein), it is never the making of a single man. The fact that the members of the disciplinary group share a paradigm explain the fact that they communicate almost perfectly and without major difficulties; moreover, it explains the unanimity of professional judgments. This does not happen however with scientists who share different paradigms, as paradigms are incommensurable (cannot be compared, because there is no common "unit of measure"). 
The incommensurability of paradigms originates from the following: i) they imply incompatible presuppositions regarding the basic entities of the studied domain and regarding their behavior; ii) they imply different criteria of delimiting "real" problems and "legitimate" solutions; iii) the observations which scientists make on the same reality are also incommensurable. How can the incommensurability of observations be explained? Although they look "in the same direction and from the same standing point" (Kuhn), and although the constitution of the sensory apparatus is the same, researchers will perceive different things. This happens because of the tacit knowledge in paradigms; it interposes itself on the stimulus-perception circuit (Fig. 2: Determination of perception by tacit knowledge inside the cultural paradigm). Thus, a "communication fracture" (Kuhn) appears; the adepts of a paradigm cannot convince the adepts of a competitive paradigm of the superiority of their viewpoint, and they will not be able to understand and accept the other's point of view. The arguments of the two parties will be circular (they can be understood and accepted only by the researchers who are already working in the same paradigm). It suffices to replace Kuhn's concept with the one of "cultural paradigm" in order to realize that the limits of communication between scientists are valid for the communication between any human groups, since any group can be considered a cultural or subcultural community (ethnic communities, social classes, professional guilds, political parties, etc.) [7]. It suffices for two rival (in other words, competing for the same realm of reality) paradigms to exist in order for obstacles to appear in communication. We shall define the cultural paradigm as a constellation of values, beliefs and methods (including "techniques" of problem formulation) shared at a certain moment by the members of a community. A moment later, we realize that all of Kuhn's observations regarding "disciplinary groups" remain valid: 1) the partisans of rival paradigms speak of different things, even when they look "from the same standpoint" and "in the same direction"; 2) competition between rival paradigms is not solved with arguments or by resorting to "facts"; 3) the adepts of rival paradigms disagree with respect to "the really important problems"; 4) communication between them is always partial; 5) the adepts of rival paradigms are in different worlds (they see different things, in different correlations); 6) absolute communication is possible only inside the same paradigm; 7) the switch from one paradigm to another can occur for various reasons, which are not related to logical demonstration or empiric "proofs". The idea of "paradigm" was developed, in culture and civilization studies, in consonance with the great trends of science and philosophy, by the French thinker Edgar Morin, in The Lost Paradigm: Human Nature [8,9]. Maruyama defines four epistemological typologies which correspond to different ways of perception, causality and logic: a) the homogenizing/classifying/hierarchical type; b) the atomist type; c) the homeostatic type; d) the morphogenetic type. Each of the types above engenders a mindscape which colours any creation in the sphere of knowledge, esthetics, ethics and religion.
Through its radicality and universality, Maruyama's conception resembles that of Michel Foucault, who speaks of epistemes as conditions of possibility of the cognitive field accessible to a culture: "the array of relations that unite, at a certain moment, those discourse practices that generate epistemological figures, sciences and virtually formulated systems [of knowledge]" [10]. Foucault postulates the uniqueness of the episteme within a culture. But in "open societies", a culture presents itself as a "game of paradigms", as a network of paradigms, sub-paradigms and meta-paradigms. There can be no question of a "unifying paradigm", but the existence of some dominant paradigms and some dominated paradigms is obvious. While we find in Morin the same radicalism as in Foucault, the former does not, however, postulate the uniqueness of a certain paradigm within a culture (in an era or in a community). Morin talks of "large" and "small" paradigms, of "adversary", "intolerant" paradigms, etc. In Edgar Morin, a "large paradigm" controls both the theories and the reasoning, as well as the cognitive (intellectual and cultural) field where theories and reasoning form. It controls even the epistemology which controls the theory, and even the practice to which theory relates. The individuals of a community know, think and act according to the paradigm that their culture has written in them. We apparently stand no chance of changing their thinking and acting strategies. Tacit communication The only realistic solution is to use tacit communication, which could appeal to the "self-image" of groups and individuals from various social groups, in order to trigger the change of some of the current cultural paradigm's presuppositions, in particular of those which generate perceptions, representations and value-attitude couples, which, in their turn, generate counter-productive behaviors (behaviors which oppose the purposes of modernization). In essence, it is the art of talking about something while leaving the impression that you talk about something else. To those who will cry resentfully "This is manipulation!" we reply: 1) manipulation "abhors a vacuum", because if we do not manipulate, others will; 2) assuming nevertheless that no one is manipulating them, people would manipulate themselves, which they do, "day after day, hour after hour, and in mass", by virtue of desiderative thinking, of inauthentic thinking (Erich Fromm) and of the "voluptuousness of self-deception" (Jean-Francois Revel); 3) manipulation is not an evil in itself; it can be good or bad, depending on the purpose; 4) nothing great can be accomplished without manipulation (from bringing up a child to educating a people), to say nothing of the modernization of the Romanians, a much more complex and difficult task than education. Conclusion: he who defends himself from manipulation is manipulating himself, but against himself; he remains "part of the problem" and will never be "part of the solution". A pre-condition for success is the positive knowledge of the value-attitude couples which are at work in today's Romanian society, knowledge that will be ensured by combining theoretical approaches with national-scale sociological research, carried out by a specialized institute.
215771920
s2orc/train
v2
2020-04-16T09:18:50.732Z
2020-04-15T00:00:00.000Z
COVID-19 Preparedness in Michigan Nursing Homes. The coronavirus disease 2019 (COVID-19) pandemic has disproportionately high mortality among older adults, particularly those with comorbidities. Nursing homes (NHs) are particularly vulnerable to widespread transmission and poor outcomes. The objectives of this study were (1) to understand preparedness among Michigan NHs in the midst of an ongoing pandemic and (2) to compare with a 2007 survey on pandemic influenza preparedness in Michigan NHs. RESULTS Of the 426 Michigan NHs surveyed, 130 (31%) responded within 1 week of first contact. An additional 27 NHs opened the survey but did not provide any responses. The distribution of reported bed capacity among facilities was unchanged, with 70% reporting 51 to 150 beds in 2020 vs 68% in 2007. An overwhelming majority of respondents in 2020 had a separate pandemic response plan, and only 3 (2%) of NHs reported having no response plan in 2020 compared to (Table 2). A greater portion of NHs were willing to accept hospital overflow of non COVID-19 patients (82% vs 53% in 2007; P < .001) or discharge patients to open up beds (18% vs 9% in 2007; P = .015). NHs in 2020 were more likely to have communication lines established with nearby hospitals (63% vs 49% in 2007; P = .0232) and public health officials (86% vs 56% in 2007; P < .001), suggesting better integration within the healthcare system. As Michigan reported its first case of COVID-19, facilities were most concerned about staffing and supplies. Asked to report their greatest concern regarding preparedness, 42% (35/84) of respondents mentioned lack of supplies (especially personal protective equipment [PPE]), and 32% (27/84) were concerned they would not be able to adequately staff their facility. Facilities were proactive, with more NHs reporting having stockpiled supplies in 2020 (85%) than in 2007 (57%; P < .001). Most facilities reported stockpiling of PPE (Table 2). Staff shortages were anticipated by 79% (67/85) of 2020 respondents, with several facilities already making contingency plans ( DISCUSSION Our results show that Michigan NHs may be better prepared for pandemics now than in 2007. In 2020, NHs were able to make policy and procedure changes within 1 week in response to urgent guidance from the Centers for Medicare and Medicaid Services and CDC, 5,6 which likely helped the facilities prepare for COVID-19 pandemic. Almost all NHs have a dedicated staff member responsible for preparedness and were willing to accept patients from hospitals to assist in their surge capacity planning, particularly for non-COVID patients. NHs did express concerns about staffing shortages and PPE supply constraints as cases rise. Limitations of this study include: self-report bias, limited geographic representation, and likely lower response rate as survey was performed in the early stages of a global pandemic. Assessment of pandemic preparedness at the beginning of an outbreak is a strength. These data will serve as a baseline for future surveys and studies of NHs' experiences during this pandemic. In summary, while NHs in 2020 show greater pandemic preparedness than in 2007, they will face challenges due to limited PPE supplies and staffing shortages. NHs will need to refine their preparedness strategies as the COVID-19 pandemic evolves and is anticipated to have major consequences. For NHs to effectively prepare for a pandemic, real-time data and experiences should be readily available to help inform their response. 
The World Health Organization confirmed 93,090 cases of novel coronavirus SARS-CoV-2 infections (COVID-19) worldwide on March 04, 2020. 3,198 deaths were declared (3%). In the United States, 108 cases were confirmed. 1 Coronavirus family members are known to be responsible for severe acute respiratory syndrome (SARS-CoV) and Middle East respiratory syndrome (MERS-CoV), associated with severe complications, such as acute respiratory distress syndrome, multiorgan failure, and death, especially in individuals with underlying comorbidities and old age. 2,3 In a recently published large case series of 138 hospitalized patients with COVID-19 infected pneumonia, the 36 patients (26.1%) transferred to an intensive care unit were older and had more comorbidities (median age = 66 years; comorbidities in 72.2% of cases) than patients who did not receive intensive care unit care (median age = 51 years; comorbidities in 37.3% of cases). 4 Comorbidities associated with severe clinical features were hypertension, diabetes, cardiovascular disease, and cerebrovascular disease, which we know are highly prevalent in older adults. Previously, the China National Health Commission reported that death mainly affects older adults, since the median age of the first 17 deaths up to January 22, 2020, was 75 years (range = 48-89 years). 5 Moreover, people aged 70 years or older had shorter median days (11.5 days) from the first symptom to death than younger adults (20 days), suggesting a faster disease progression in older adults. 5 Since COVID-19 seems to have a similar pathogenic potential as SARS-CoV and MERS-CoV, 6 older adults are likely to be at increased risk of severe infections, cascade of complications, disability, and death, as observed with influenza and respiratory syncytial virus infections. 7,8 The consequences of possible epidemics in long-term care facilities could be severe on a population of older adults who are by definition frail and immunologically naïve towards this virus, even if the risk is of course for the moment mainly theoretical. Therefore, it seems essential to
The geometric nature of weights in real complex networks

The topology of many real complex networks has been conjectured to be embedded in hidden metric spaces, where distances between nodes encode their likelihood of being connected. Besides providing a natural geometric interpretation of their complex topologies, this hypothesis yields the recipe for sustainable routing protocols for the Internet, sheds light on the hierarchical organization of biochemical pathways in cells, and allows for a rich characterization of the evolution of international trade. Here we present empirical evidence that this geometric interpretation also applies to the weighted organization of real complex networks. We introduce a very general and versatile model and use it to quantify the level of coupling between their topology, their weights, and an underlying metric space. Our model accurately reproduces both their topology and their weights, and our results suggest that the formation of connections and the assignment of their magnitude are ruled by different processes.

F) Suggested improvements: The paper should be submitted to a much more specialized journal like Physical Review E. The paper would benefit from a more intuitive discussion of what the abstract notion of geometric embedding plausibly means for real-world networks.

G) References: Credit to previous work is not appropriately given. A vast literature on network embedding in various manifolds (including hyperbolic ones and higher-dimensional ones) exists. See the many works by Tomaso Aste and Tiziana di Matteo as an example. Also the main works about spatially embedded networks (see the review by Marc Barthelemy and references therein) are not adequately cited.

H) Clarity and context: The abstract, introduction, and conclusions are clear and well written. However, they overstate the generality of the model and should more fairly emphasize the specific assumptions made in the paper.

Reviewer #2 (Remarks to the Author):

The authors approach the problem of whether the edge weights in real networks have a geometric origin and whether this can be well captured by suitable hidden-variable models. In particular, the authors first provide some evidence of the geometric nature of the weights by studying the distribution of weights for links that are embedded in triangles. They then build a class of embedded weighted networks that is able to reproduce a number of properties of the observed networks. Finally, they show that this class of networks manages to capture and reproduce the observed metric properties of real networks by checking triangle inequality violations in a hyperbolic embedding equivalent to the first one. The topic addressed is interesting, timely, and relevant for the journal's audience. It builds on previous work by some of the authors on hidden-variable models and the corresponding embeddings, which showed that many sparse unweighted networks can be fruitfully embedded in hyperbolic spaces, yielding novel effective strategies for navigation and link prediction. The paper is very well written and the topic clearly explained. The references and abstract are appropriate and cover the relevant existing literature on the subject. The structure of the paper is appropriate and the contribution is novel. Previous work on the same subject focused on the unweighted case; this contribution provides first evidence that the description of weights too can be cast in the same paradigm.
Although I am a bit dubious about the long-term impact of the paper, I believe it will be of interest to others in the field. I have a few main comments (listed below), but I think that, once these are addressed, the manuscript will be fit for publication in Nature Communications.

Main comments:
-All the analysed networks have been sparsified via the disparity filter before analysis.
-How does the analysis generalise to the case of dense networks?
-Pushing this argument, one might wonder what would happen in the case of complete weighted networks, e.g. similarity or correlation networks, where the degree is already fixed. For example, for Pearson correlation matrices, the matrix already yields a distance matrix; how different would the weighted embedding proposed here come out in that case?
-What is the role of D? In this paper the authors provide insights on the geometry of edge weights, but it is not clear the geometry of which space. Most of the analysed networks already live in a number of different dimensions (2, 3, etc.), but they already appear to be well described by a D=1 model. It seems thus that the geometrical nature of the weights refers to a different geometrical space than the original network's natural embedding space. So, does this geometry really carry information/meaning, or is it just a very general and elegant way to produce hidden-variable networks? Alternatively, what would going to higher D grant in terms of network description or degrees of freedom?
-What are the atypical features that impeded the embedding of the US airport network? Do they constitute a problem for the general theory?

Minor comments:
-"particularize" is really an awkward word; maybe something like "we focus/restrict to the D=1" or equivalent would be nicer.
-There is a typo on the last page: "weigths" instead of "weights".
-Same page: "On perspective" -> "in perspective"

Reviewer #3 (Remarks to the Author):

A. Summary of the key results
The authors Allard et al. study the relationship between edge weights and a latent-space hyperbolic geometry for empirical networks. The latent spaces of networks are inferred using topology alone. The authors develop a novel network-generative model which they fit to empirical networks. They find that edge weights can be jointly coupled to the network topology (i.e., node degrees) as well as the geometry of the latent-space embedding. They explore the connection between edge weights and geometry by studying the violation of the triangle inequality for triangles in the network.

B. Originality and interest: if not novel, please give references
Understanding the origin of weights in weighted networks is a central topic in network science. This research provides the first step toward modeling weighted networks using latent-space, hyperbolic embeddings and is indeed an important contribution that should be published in some form. Publication in Nature Communications, however, requires a substantial advancement, and I believe the paper falls short in this regard. In particular, hyperbolic geometry is inferred from network topology, and since weights are known to depend on topology, it is somewhat unsurprising that there is a connection between the weights and geometry. For example, it has already been established that weights are larger for edges between nodes of larger degree [1] (i.e., popularity) as well as for edges that join nodes with overlapping neighborhoods [19] (i.e., similarity).
(It is unclear whether or not the hyperbolic geometry modeling approach provides further insight than what is possible by studying the dependence of weights on node degrees and triangle participation, i.e., neighborhood overlap.) I note that both [1] and [19] are already cited in the paper, but the authors do not clearly discuss their connection to the geometric notions of 'popularity' and 'similarity.'

C. Data & methodology: validity of approach, quality of data, quality of presentation
The methodology for hyperbolic space embeddings is state-of-the-art in the field of network science. Their model is indeed the state-of-the-art for modeling weighted networks in hyperbolic spaces.

D. Appropriate use of statistics and treatment of uncertainties
The article uses appropriate statistics, although it would be helpful to provide further details about their methods for inference.

E. Conclusions: robustness, validity, reliability
By modeling the coupling between weights, node degrees, and geometry, the authors provide a framework to deeply study these relationships. This is indeed an important contribution that justifies publication in some form. However, outside of observing, model fitting, and measuring the extent of these relations, very little other scientific insight is provided. That is, it is not clear if or how a relationship between weights and geometry will have an impact on any application.

F. Suggested improvements: experiments, data for possible revision
Main areas of improvement:
1. Section II studies the relationship between weights and triangles. Triangles and clustering reflect geometry due to the triangle inequality; however, triangles are an indirect consequence of geometry. For example, the number of triangles in which an edge is involved (that is, its multiplicity m) also depends on the nodes' degrees (i.e., topology). For example, in configuration models the multiplicity m grows with k_i k_j, since (k_i-1)(k_j-1) gives the number of possible triangles and edges are created at random. Given the focus on triangles both in Section II and in the violation of the triangle inequality, the paper needs a much more detailed/systematic exploration and discussion of the relationship between triangles and geometry. This relation is currently vague, and citing the triangle inequality does not provide quantitative evidence of their connection.
2. In contrast to triangles, edge length is a direct measurement of geometry and may provide a more straightforward description for how weights and geometry are coupled. That is, are weights larger for shorter edges? Studying edge lengths may also help address comment A (the origin of clustering), since it would be helpful to understand if triangles primarily exist between node triples (i, j, k) that are nearby in the metric space, and if so, whether they primarily involve nodes with small Δθ or nodes with large degrees.
3. It may also be informative to study the way in which triangle inequalities are violated. For example, is the inequality first violated for triangles involving nearby nodes or those that involve distant nodes? Is the inequality first violated for triangles involving hubs or those that do not involve hubs?

Minor issues:
4. Abstract, line 3: The authors do not 'prove' their model to be the "most" general and versatile model.
5. Sec. II - For many networks, multiplicity m and k_i k_j are highly correlated, implying that sampling biased on m is similar to sampling with a bias on k_i k_j.
It is worth noting how normalization according to k_i k_j overcomes this bias.
6. Sec. II - Does this normalization help address the goal of discerning the dependence of edge weights on Δθ versus k_i and k_j?
7. Sec. III, Eq. 2 - Secondary hidden parameters σ_i and σ_j are defined for edge weights; however, it is later assumed that σ_j = a k_j^η. Why define them at all?
8. Sec. III - "given second moment ⟨ε²⟩": How is this chosen? Is it independent of k_i, k_j, and α?
9. Sec. III - The statement "All the theoretical predictions are confirmed in Supplementary Figure 1." should be made more precise, i.e., what theoretical predictions? Scaling results?
10. Fig. 3 - The authors need to give a complete explanation of "atypical topological features".
11. Fig. 3 - Why do some TIV curves increase when α ~ 1?

G. References
The authors do a good job of citing previous research.

H. Clarity and context: lucidity of abstract/summary, appropriateness of abstract, introduction and conclusions
It may be helpful to discuss the triangle inequality and clustering in the abstract/intro given that it is a central topic of the paper. Also, I found the intro/abstract to not clearly identify new scientific insights allowed by the new model.

I. Summary
This research is an important and exciting area of network science, and the work is very high quality - both in philosophy and execution. However, I find the current paper to be lacking the "wow" factor that would justify publication in Nature Communications. The authors have made an interesting observation and developed a state-of-the-art model for it, but they have not illustrated this observation to have important consequences or provide useful insights. Moreover, I believe the "geometric nature" of weights to be underexplored (see comments 1-2). For these reasons, I believe this work to be better suited for another journal.

Replies to the comments of Reviewer #1

Notice that hidden variables can always be redefined such that they represent the property of interest, in our case nodes' degrees and strengths. That being said, the model that we introduce in Eqs. (1) and (2) guarantees that we can fix the local properties of the nodes, that is, a joint degree-strength distribution similar to the one of the real network under study, and, simultaneously, change in an independent manner the coupling between the weights and the metric space. This critical property is the one allowing us to gauge the effect of the metric space in real systems. This very same feature is also present in our different models of networks embedded in hidden (hyperbolic) spaces. There, too, we can fix the degree distribution and modify the coupling between topology and metric space so that different levels of clustering arise. This has been widely acknowledged by the community of network scientists and accepted as a new paradigm to describe and characterize complex networks. Our work here goes in the same direction and we are convinced that it will become the standard model for weighted networks embedded in metric spaces in the near future. As for the particular form of Eq. (2) and the introduction of the parameter α, here we follow a long-standing tradition of "gravity models" in the Social Sciences, and in particular in Economics, where the interaction between two countries is postulated to be proportional to the product of their "masses" (a measure of the importance of countries' economies) and inversely proportional to their (geographic) distance.
Eq. (2) is a novel generalisation of this concept to the case of weighted networks. In this case, the role of a given node's "mass" must be played by the factor σ/κ^(1−α/D), ensuring that, once the network has been assembled, that particular node has expected degree and strength κ and σ, respectively. Far from being arbitrary, the choice of this functional form for the "mass" and the identification of the hidden variables with the expected degrees and strengths ensure that, as a first but accurate approximation, we can use observed degrees and strengths in real networks as proxies for the unknown hidden variables. Finally, our claim about the generality and versatility of our model is supported by the following properties:
1. Our model can fix in an arbitrary way the joint distribution ρ(κ, σ), thus allowing us to control the degree and strength distributions and any possible form of correlations (positive or negative) between degree and strength. In particular, ρ(κ, σ) can take the form of the correlations observed in real networks.
2. With strength and degree fixed, it can tune independently the coupling between weights and metric space through the parameter α and so reproduce the triangle inequality violation curves of real networks, as shown in Fig. 3.
3. The model can adjust the level of noise in the system through the parameter ⟨ε²⟩. While noise is always present in real systems, it is usually not even considered in other models of weighted networks.
4. The model reproduces very well many other properties of real networks: degree-degree correlations, degree-dependent clustering coefficient, betweenness centrality (see for instance the supplementary information of Ref. [1]), global weight distribution, disparity measure, etc.
To the best of our knowledge, none of the models proposed so far in the literature satisfies all these characteristics simultaneously. In the new version of the manuscript, we have replaced the sentence "we introduce the most general and versatile class of weighted networks..." by "we introduce a very general and versatile class of weighted networks..."

1.C "The problem of modelling both the weights and topology of real networks is not new, but more than a decade old."

We agree with the referee. Indeed, this is not a new problem. However, this fact does not make the problem uninteresting; rather, it makes it more challenging (see for instance a couple of related works recently published in Nature Physics [2] and Nature Communications [3]). We would like to notice that, despite the elapsed ten years, not much progress has been made in this field, the reason being the inherent difficulty of the problem. In our paper, we take quite a big step forward that, we are certain, will stimulate further research in this direction. On the one hand, the introduction of genuine gravity laws in networks will be of interest to fields where such laws can only be applied to fully connected structures. This opens a new line of theoretical research on the coupling between topology, weighted structure, and geometry in complex networks. On the other hand, our work opens the possibility to use the information encoded in the weights of the links to find more accurate embeddings of real networks. We can then use these improved embeddings to detect network communities and missing links and, for the first time, give estimates of the weights of such missing links. We have modified the discussion section of our paper to emphasise these future research directions.
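To make the gravity-law reading of Eq. (2) discussed above concrete, here is a minimal Python sketch of a weight assignment of this type. The functional form is reconstructed from the description of the node "masses" (σ/κ^(1−α/D)); the constant ν, the noise distribution, and the default parameter values are illustrative assumptions rather than the exact prescription of the manuscript.

```python
import numpy as np

rng = np.random.default_rng(42)

def gravity_weight(sigma_i, sigma_j, kappa_i, kappa_j, d_ij,
                   alpha=1.0, D=1, nu=1.0, noise_sigma=0.1):
    """Gravity-type weight for an existing link (form reconstructed from the description above):
        w_ij = eps * nu * (sigma_i / kappa_i**(1 - alpha/D)) * (sigma_j / kappa_j**(1 - alpha/D)) / d_ij**alpha
    with eps a positive noise term of mean 1. Constants and noise distribution are illustrative."""
    eps = rng.lognormal(mean=-0.5 * noise_sigma**2, sigma=noise_sigma)  # lognormal noise with mean 1
    mass_i = sigma_i / kappa_i**(1.0 - alpha / D)
    mass_j = sigma_j / kappa_j**(1.0 - alpha / D)
    return eps * nu * mass_i * mass_j / d_ij**alpha

# All else being equal, nearby high-strength nodes get heavier links than distant low-strength ones:
print(gravity_weight(sigma_i=50, sigma_j=40, kappa_i=20, kappa_j=15, d_ij=0.1),
      gravity_weight(sigma_i=2, sigma_j=3, kappa_i=2, kappa_j=2, d_ij=2.0))
```

Note that with α = 1 and D = 1 the degree factors cancel and the expression reduces to a classical gravity law, the product of the two strengths divided by the distance, modulated by noise.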
1.D "What new general insight do we get about network formation? The problem of modelling weights and topology in a realistic and parsimonious way remains fundamentally unsolved." In our opinion, the dependence of the properties of networks, both at the topological and weighted levels, on hidden/latent metric spaces which encode in a geometric distance all the factors that affect the propensity of nodes to establish connections and to supply them with a certain intensity is a new insight about network formation that we consider very interesting and important and that should be taken into account in any future research on this topic. For the first time, the gravity model can be used to explain both the formation of links and weights in networks. On top of that, we provide a general model that, for the first time, accurately reproduces many properties observed in real weighted networks from various origins. Incidentally, our model unveils that the coupling of the topology and the weights with the underlying metric space are in some cases uncorrelated, which in turn suggests that the formation of connections and the assignment of their magnitude can be ruled by different processes. We agree with the reviewer that many questions about the formation of real weighted networks remain open, and we believe that our contribution is substantial enough to encourage other researchers to join the endeavour. 1.E "What the abstract notion of geometric embedding plausibly means for real-world networks?" This is indeed an excellent and difficult question that we have been trying to answer since we published our first work on the subject [4]. In that work, we found that the properties of the degree-dependent clustering coefficient of some real complex networks are compatible with the existence of a hidden metric space ruling the probability of existence of links between nodes. Interestingly, we also found that the Internet fits particularly well within this new paradigm and its inferred embedding in the hyperbolic plane (see Ref. [1]) has excellent routing properties in this space. We have also developed both static and growing models (like the one in Ref. [5]) showing that our models are able to reproduce the topologies of real complex networks extremely well and, at the same time, are mathematically tractable as interactions are still pairwise. The nature of such hidden metric spaces is, however, not totally clear. In the case of social structures for instance, even though it is difficult to quantify, we, as individuals, are able to tell whether one person is close of far from us and, in many cases, we establish social relationships based on this perception. In the social sciences, this concept is called homophily and is responsible for the assortative character of social networks. In the case of economic systems, like countries, the metric distance can be an effective space combining geographic, cultural, historical, and political distances (see for instance Ref. [6]). In the case of the Internet, it is probably a combination of geography and commercial agreements between the different actors at play. Nevertheless, beyond the philosophical discussion about the origin of such spaces, the concept of a hidden metric space can also be seen as a mathematical tool that can be leveraged to generate realistic networks. For instance, it is the only framework that allows to generate strong clustering based on pairwise interactions only (due to the triangle inequality of the metric space). 
Similarly, our manuscript demonstrates how the metric space allows us to realistically and directly assign weights to links given a fixed degree-strength sequence (i.e., without relying on iterative methods), which is a notoriously difficult problem.

1.F "Credit to previous work is not appropriately given. A vast literature on network embedding in various manifolds (including hyperbolic ones and higher-dimensional ones) exists."

We thank the reviewer for pointing out potentially interesting references, and we now acknowledge the work of Aste and Di Matteo as well as that of Barthelemy (whose work was already acknowledged via other references). However, while the work of Aste and Di Matteo embeds complex networks into hyperbolic manifolds to characterise and filter them (see for instance [7][8]), the information encoded in the distance between nodes remains unexploited and, as such, their many contributions are only weakly related to our work (and to most of the work already cited in the manuscript).

Replies to the comments of Reviewer #2

We are delighted that the reviewer finds that the "topic addressed is interesting, timely and relevant for the journal's audience", that the "paper is very well written and the topic clearly explained", that our "contribution is novel", and that our "manuscript will be fit for publication in Nature Communications" once a few minor comments have been addressed. The latter are addressed below.

2.A
The reviewer raises a good point. The analysis of the model presented in the Supplementary Information shows that the networks generated by our model are sparse in the limit N → ∞. This result assumes, however, that the density δ, the average expected degree κ, and the integral I_1 are all bounded. Reference [9] studies a special case in which I_1 is not bounded but shows that the average degree of the network scales sub-linearly with the number of nodes. It is not clear, however, how the model behaves in general in the case of dense or complete networks, which would require a different model to generate the topology. In fact, current ongoing research is looking into a variation of our model and its application to correlation matrices.

2.B "What is the role of D? In this paper the authors provide insights on the geometry of edge weights, but it is not clear the geometry of which space. Most of the analysed networks already live in a number of different dimensions (2, 3, etc.), but they already appear to be well described by a D=1 model. It seems thus that the geometrical nature of the weights refers to a different geometrical space than the original network's natural embedding space. So, does this geometry really carry information/meaning, or is it just a very general and elegant way to produce hidden-variable networks? Alternatively, what would going to higher D grant in terms of network description or degrees of freedom?"

The reviewer is right "that the geometrical nature of the weights refers to a different geometrical space than the original network's natural embedding space". In fact, one of the networks for which our model works best is the metabolic network, which does not have a natural embedding space. The hidden metric space used in our framework is an abstract space in which the distance between nodes encodes the likelihood for them to be connected.
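As an illustration of such an abstract space, here is a minimal sketch of a one-dimensional hidden-metric-space (S^1) model of the kind referred to here: nodes are placed on a circle, each receives an expected degree κ, and pairs are connected with a probability that decays with a rescaled distance. The normalisation of μ and the parameter values below are standard illustrative choices and may differ from the exact Eqs. (1) and (3) of the manuscript.

```python
import numpy as np

rng = np.random.default_rng(1)
N, beta, gamma, kappa0 = 500, 2.5, 2.7, 2.0

# Hidden variables: angular positions on the circle S^1 and expected degrees kappa
theta = rng.uniform(0.0, 2.0 * np.pi, N)
kappa = kappa0 * (rng.pareto(gamma - 1.0, N) + 1.0)           # power-law expected degrees, exponent gamma

R = N / (2.0 * np.pi)                                          # circle radius giving unit node density
mu = beta * np.sin(np.pi / beta) / (2.0 * np.pi * kappa.mean())   # usual S^1 normalisation (illustrative)

edges = []
for i in range(N):
    for j in range(i + 1, N):
        dtheta = np.pi - abs(np.pi - abs(theta[i] - theta[j]))    # angular separation
        chi = (R * dtheta) / (mu * kappa[i] * kappa[j])           # rescaled distance
        if rng.random() < 1.0 / (1.0 + chi**beta):                # Fermi-like connection probability
            edges.append((i, j))

print(len(edges), "links, average degree", 2 * len(edges) / N)
```

Because the connection probability decays with the metric distance, triangles are overrepresented among triples of nearby nodes, which is the source of clustering invoked throughout this correspondence.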
Notice, however, that not only the geometric distance (i.e., the arc length in the case of the circle S^1) influences the likelihood of nodes being connected, but also the product of their respective expected degrees [see Eqs. (1) and (3)]. In other words, two hubs are effectively closer than two low-degree nodes even if both pairs are separated by the same arc length. This is particularly important in the case of heterogeneous (scale-free) networks because this makes the particular dimension of the metric space, D, not so relevant. The reason is, as we show in Ref. [9], that our model can be mapped into a purely geometric random graph in the (D + 1)-dimensional hyperbolic space H^(D+1). In such a space, the volume of a ball of radius r grows as V ~ e^(Dr) and, thus, the value of D only changes the pre-factor of the exponential growth law of the ball, not the fact that it grows exponentially. This implies that, even if the original metric space is not one-dimensional, its embedding in the two-dimensional hyperbolic plane is very good. In other words, H^2 already has enough space to fit any network without violating the triangle inequality. In our paper we chose to leave the parameter D free for the sake of presenting the most general model possible. However, when it comes to studying real networks, and given the considerations above, we chose D = 1 as it simplifies enormously the analytic and computational treatment. Of course, one could argue that going to higher dimensions, and therefore having more degrees of freedom in the geometrical space, could be useful in the context of communities [10], for instance. While this is perfectly possible from a theoretical point of view, the inverse problem of finding embeddings of real networks would become computationally infeasible.

2.C "What are the atypical features that impeded the embedding of the US airport network? Do they constitute a problem for the general theory?"

The atypical features mentioned in the manuscript refer to a power-law degree distribution with an exponent below 2 in the case of the U.S. airports network and to a short-range repulsion effect in the connection probability for the commute network (i.e., people rarely commute from one suburb to another but rather commute from one suburb to the major city in the area). This does not affect the general theory described in the manuscript but rather prevents the state-of-the-art embedding algorithms from providing us with an embedding of these two networks. We are currently working on a generalisation of a class of embedding algorithms that would allow the embedding of such networks.

Replies to the comments of Reviewer #3

We are pleased that the reviewer finds that our work "is indeed an important contribution that should be published in some form". We thank the reviewer for her/his kind words. As for reviewer #1, the reviewer expresses some doubts as to whether our contribution is fit to be published in Nature Communications. We respond point by point to her/his comments in the hope of convincing her/him that our contribution is worthy of the high standards of this journal.

3.A "It is unclear whether or not the hyperbolic geometry modeling approach provides further insight than what is possible by studying the dependence of weights on node degrees and triangle participation (i.e., neighborhood overlap). (I note that both [1] and [19] are already cited in the paper, but the authors do not clearly discuss their connection to the geometric notions of 'popularity' and 'similarity.')"

Please notice that to perform the empirical analysis in Fig. 1, we do not infer the hyperbolic geometry from the network topology (which we do in the final part of the paper for self-consistency of the analysis). Instead, we study how normalised weights are distributed over the edges of the network.
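The kind of normalisation-based test described here can be sketched as follows: each weight is divided by the average weight of links with a similar degree product kk′, and the average normalised weight of links that participate in triangles is compared with that of all links. The binning scheme, the synthetic network, and the function names below are our own illustrative choices rather than the exact procedure of Section II of the manuscript.

```python
import networkx as nx
import numpy as np
from collections import defaultdict

def normalised_weights(G, n_bins=20):
    """Divide each link weight by the average weight of links with a similar degree product kk'."""
    deg = dict(G.degree())
    log_prod = {(u, v): np.log(deg[u] * deg[v]) for u, v in G.edges()}
    lo, hi = min(log_prod.values()), max(log_prod.values()) + 1e-9
    edges_in_bin = defaultdict(list)
    for e, lp in log_prod.items():
        edges_in_bin[int(n_bins * (lp - lo) / (hi - lo))].append(e)
    bin_mean = {b: np.mean([G.edges[e]["weight"] for e in es]) for b, es in edges_in_bin.items()}
    bin_of = {e: b for b, es in edges_in_bin.items() for e in es}
    return {e: G.edges[e]["weight"] / bin_mean[bin_of[e]] for e in G.edges()}

def triangle_vs_all(G):
    """Average normalised weight of links belonging to at least one triangle vs. all links."""
    nw = normalised_weights(G)
    on_triangles = [w for (u, v), w in nw.items()
                    if any(True for _ in nx.common_neighbors(G, u, v))]
    return np.mean(on_triangles), np.mean(list(nw.values()))

# Toy usage with synthetic weights (a real analysis would load an empirical weighted network):
rng = np.random.default_rng(0)
G = nx.powerlaw_cluster_graph(300, 3, 0.3, seed=0)
for u, v in G.edges():
    G.edges[u, v]["weight"] = rng.exponential() + 0.1
print(triangle_vs_all(G))
```

On data whose weights have a metric origin, the first returned value is expected to exceed the second, which is the signature discussed in the reply that follows.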
We agree with the reviewer that weights in complex networks depend on the topology (see Ref. [11]), and this is precisely why we considered normalised weights in Section II of the manuscript (see also our answer to comment 3.I). By normalising the weights by the average value ω(kk′), we factorised out the dependency on the topology, leaving weights that seemingly fluctuate randomly around 1. However, as shown in Fig. 1, these fluctuations are not uniform, as we see that links involved in triangles tend to have larger normalised weights than the average link. Since triangles are a reflection of the triangle inequality in the underlying metric space, we expect nodes forming triangles to be close to one another. Thus, the higher average normalised weight observed on triangles strongly suggests a metric nature of the weights, which is not a trivial consequence of the relation between weights and topology. Notice also that our theoretical model provides a counterexample to the referee's observation that "since weights are known to depend on topology, it is somewhat unsurprising that there is a connection between the weights and geometry". Indeed, by changing the parameter α, we can generate networks with an arbitrary coupling between weights and metric space (even zero coupling) even though they share the very same network topology and correlations between strength and degree. However, we found that the weight distribution and the disparity were well reproduced only with the value of α found using the test of the triangle inequality. As for the results in Refs. [1] and [19] (in the old version of the manuscript), it is indeed well known and accepted that weights are higher between nodes of high degrees. In [19], the authors found a positive correlation between weights and link clustering (or neighbourhood overlap). However, since it is also true that link clustering is typically correlated with the degrees of the endpoint nodes, the correlation between absolute weights and degrees is also expected, which prevents a direct observation of metric properties in the weights. In our work, we filter out such induced correlations by normalising the weights so that genuine correlations with the metric space can be detected. In the revised version of the paper, we have included a discussion to clarify this point.

3.B "The article uses appropriate statistics, although it would be helpful to provide further details about their methods for inference."

In Section II of the Supplementary Information file, we provide a detailed explanation of the statistical method we have developed to measure the parameters α and ⟨ε²⟩. The embedding method for the unweighted versions of the networks is fully described in our previous publication [1].

3.C "By modelling the coupling between weights, node degrees, and geometry, the authors provide a framework to deeply study these relationships. This is indeed an important contribution that justifies publication in some form. However, outside of observing, model fitting, and measuring the extent of these relations, very little other scientific insight is provided. That is, it is not clear if or how a relationship between weights and geometry will have an impact on any application."

With hindsight, we agree with the reviewer that we may have fallen short of explaining clearly the implications of our work for the understanding of real weighted networks, which we believe are remarkable.
For instance, our equations can be understood as the new generation of gravity laws applicable to very different domains, including Biology, Information and Communication Technologies, and Social Systems. Current gravity laws are restricted to the Social Sciences and successfully predict the volume of flows between elements, but they cannot explain the observed topology of the interactions among them. Our contribution overcomes this limitation and offers for the first time a gravity model that can reproduce both the existence and the intensity of interactions. This opens a new line of theoretical research on the coupling between topology, weighted structure, and geometry in complex networks. On the other hand, our work opens the possibility to use the information encoded in the weights of the links to find more accurate embeddings of real networks. We can then use these improved embeddings to detect network communities and missing links and, for the first time, give estimates of the weights of such missing links, and to implement navigation and searching protocols, such as greedy routing, which take into account not only the existence of connections but also their intensity. We have modified the discussion section of our paper to emphasise these future research directions.

3.D "Section II studies the relationship between weights and triangles. Triangles and clustering reflect geometry due to the triangle inequality, however triangles are an indirect consequence of geometry. For example, the number of triangles in which an edge is involved (that is, its multiplicity m) also depends on the nodes' degrees (i.e., topology). For example, in configuration models the multiplicity m grows with k_i k_j, since (k_i-1)(k_j-1) gives the number of possible triangles and edges are created at random. Given the focus on triangles both in Section II and in the violation of the triangle inequality, the paper needs a much more detailed/systematic exploration and discussion of the relationship between triangles and geometry. This relation is currently vague, and citing the triangle inequality does not provide quantitative evidence of their connection."

We would like to notice that in the configuration model the link multiplicity vanishes in the thermodynamic limit and, therefore, such a model is not a good candidate to have an underlying geometry. In networks with finite clustering, as in our model, there is typically a non-trivial relation between m_{kk′} and k and k′, although this relation is, in general, difficult to calculate. In general, the problem of measuring the metricity of network topologies is an extremely difficult one that requires a research program on its own. Nevertheless, very useful information can be obtained from the properties of the clustering in the network. The relation between clustering and geometry has been analysed in detail in our previous publications. In the current paper, we take the relation between clustering and the metric space for granted and we focus on the relation between weights and geometry. It is true that in our geometric models clustering is a by-product of the metric space (and so of the triangle inequality), which is, by the way, very convenient from a mathematical point of view, as it induces effective three-body interactions from pairwise ones. However, in our first publication on this topic (Ref. [4]) we found that the properties of the degree-dependent clustering coefficient of some real complex networks are compatible with the existence of a hidden metric space ruling the probability of existence of links between nodes.
Interestingly, we also found that the Internet fits particularly well within this new paradigm and that its inferred embedding in the hyperbolic plane (see Ref. [1]) has excellent routing properties in this space. We have also developed both static and growing models (like the one in Ref. [5]) showing that our models are able to reproduce the topologies of real complex networks extremely well.

3.E "In contrast to triangles, edge length is a direct measurement of geometry and may provide a more straightforward description for how weights and geometry are coupled. That is, are weights larger for shorter edges? Studying edge lengths may also help address comment A (the origin of clustering), since it would be helpful to understand if triangles primarily exist between node triples (i, j, k) that are nearby in the metric space, and if so, do they primarily involve nodes with small ∆θ or nodes with large degrees."

We thank the reviewer for pointing out a concept that may not have been sufficiently well explained in our manuscript. The distance between nodes in the metric space does not correspond to the actual geographical distance between, say, airports in the US airports network. Rather, it is an abstract distance that quantifies the likelihood of interactions between nodes. Consequently, a direct measurement is not available, and this is why we turned to triangles, as a reflection of the triangle inequality in the metric space, as a proxy to estimate qualitatively the distance between nodes (i.e., close or distant). To answer directly the reviewer's first question, Eq. (2) in the manuscript stipulates that the weight between connected nodes should decrease with increasing distance between them in the hidden metric space. Triangles between node triples (i, j, k) exist with probability p(χ_ij)p(χ_jk)p(χ_ik), a quantity that essentially depends on the ratios of the angular separations to the products of the expected degrees, ∆θ/(κκ′). In other words, triples with low ∆θs will likely form a triangle regardless of their expected degrees, just as triples with high expected degrees will likely form a triangle regardless of their position on the circle S^1. Similarly, the probability of triangles involving two hubs and one low-degree node will not strongly depend on the relative positions of the nodes on the circle. However, triples with one hub and two low-degree nodes will typically not form triangles unless the two low-degree nodes are very close along the circle. In fact, this effect contributes greatly to explaining why the degree-dependent clustering c̄(k) is a decreasing function of k for all networks considered in the manuscript (see Supplementary Figures 6-11).

3.F "It may also be informative to study the way in which triangle inequalities are violated. For example, is the inequality first violated for triangles involving nearby nodes or those that involve distant nodes? Is the inequality first violated for triangles involving hubs or those that do not involve hubs?"

This is a very interesting question. The triangle inequality violation curves are used to find the parameter α_real of a given network. Once this value is found, the violation of the triangle inequality depends essentially on the level of noise in the system ⟨ε²⟩ through the term on the right-hand side of Eq. (7). To a lesser extent, the violation may also be due to the fact that the hidden variables κ and σ are approximated by the actual degree and strength, respectively, of the nodes.
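As an illustration of how such a triangle-inequality check can be carried out once distances have been inferred from the weights, here is a minimal sketch. It assumes a networkx graph G and a dictionary d of inferred link distances; the way those distances are obtained from the weights is not reproduced here, and the function name is hypothetical. It also records the degrees of the nodes in violating triangles, in the spirit of the comparison mentioned just below.

```python
import numpy as np
from itertools import combinations

def triangle_inequality_violations(G, d):
    """Fraction of triangles violating the triangle inequality for inferred distances d[(i, j)],
    plus the average degree of nodes in all triangles and in violating triangles.
    Node labels are assumed to be sortable (e.g., integers)."""
    def dist(u, v):
        return d[(u, v)] if (u, v) in d else d[(v, u)]

    deg = dict(G.degree())
    n_tri = n_viol = 0
    deg_all, deg_viol = [], []
    for u in G:
        for v, w in combinations(sorted(G[u]), 2):
            if u < v and G.has_edge(v, w):               # each triangle counted once (u < v < w)
                n_tri += 1
                a, b, c = sorted((dist(u, v), dist(u, w), dist(v, w)))
                violated = c > a + b
                n_viol += violated
                deg_all += [deg[u], deg[v], deg[w]]
                if violated:
                    deg_viol += [deg[u], deg[v], deg[w]]
    frac = n_viol / n_tri if n_tri else float("nan")
    return (frac,
            np.mean(deg_all) if deg_all else float("nan"),
            np.mean(deg_viol) if deg_viol else float("nan"))
```

Sweeping the inferred distances over a range of candidate α values and recording the returned fraction yields curves of the same kind as the TIV(α) curves discussed in this reply.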
For most of the analysed real networks, the percentage of violations is very small (of the order of a few percent), whereas in the case of the cargo ships network it is close to 20%, due to the high level of noise present in the system. In short, our model predicts that there should not be any dependence on degree for the nodes belonging to triangles that violate the triangle inequality. To test this prediction, we have measured explicitly the average degree of such nodes as compared to the average degree of nodes in all triangles (see Fig. 1). In many cases the average degree is very similar, thus confirming our prediction. The largest discrepancy is found in the metabolic network. However, notice that this network has a very small percentage of violations, which makes it more prone to statistical fluctuations. We have added a discussion in the new version of the Supplementary Information to clarify this point.

3.G "Abstract line 3: The authors do not "prove" their model to be the "most" general and versatile model."

Our claim about the generality and versatility of our model is supported by the following properties:
1. Our model can fix in an arbitrary way the joint distribution ρ(κ, σ), thus allowing us to control the degree and strength distributions and any possible form of correlations (positive or negative) between degree and strength. In particular, ρ(κ, σ) can take the form of the correlations observed in real networks.
2. With strength and degree fixed, it can tune the coupling between weights and metric space through the parameter α and so reproduce the triangle inequality violation curves of real networks, as shown in Fig. 3.
3. The model can adjust the level of noise in the system through the parameter ⟨ε²⟩. While noise is always present in real systems, it is usually not even considered in other models of weighted networks.
4. The model reproduces very well many other properties of real networks: degree-degree correlations, degree-dependent clustering coefficient, betweenness centrality (see for instance the Supplementary Information of Ref. [1]), global weight distribution, disparity measure, etc.
To the best of our knowledge, none of the models proposed in the literature satisfies all these characteristics simultaneously. In the new version of the manuscript, we have replaced the sentence "we introduce the most general and versatile class of weighted networks..." by "we introduce a very general and versatile class of weighted networks..."

3.H "Sec. II - for many networks, multiplicity m and k_i k_j are highly correlated, implying that sampling biased on m is similar to sampling with a bias on k_i k_j. It is worth noting how normalisation according to k_i k_j overcomes this bias."

Notice also that since we measure weights normalised by the average weight ω(kk′), a biased sampling over m should be equivalent to a uniform sampling provided there is no metric-space dependence of the weights. Any deviation indicates correlations between clustering and weights.

3.I "Sec. II - Does this normalisation help address the goal of discerning the dependence of edge weights on ∆θ versus k_i and k_j?"

Yes, it does. It has been observed that the average weight of links whose end nodes have degrees k and k′ scales as ω(kk′) ~ (kk′)^τ, where τ = 0.5 ± 0.1 in the case of the international airports network (see Ref. [11]). However, we found in all our datasets that, while the average weight does depend on the product kk′, this dependency cannot be summarised in a form as simple as the one proposed in Ref. [11].
For instance, two different scaling regimes could be observed in some datasets. Consequently, we decided to let the datasets speak for themselves by not imposing a specific analytical form for the dependency of the average weight on kk′ and by simply dividing the weights by the average ω(kk′). By doing so, we removed the influence of the topology on the weights, which allows us to unveil their metric origin.

3.J "Sec. III, Eq. 2 - secondary hidden parameters σ_i and σ_j are defined for edge weights, however, it is later assumed that σ_j = a k_j^η. Why define them at all?"

We agree with the reviewer that the specific application of our model in the manuscript does not require a second hidden variable σ, since it is linked to the first hidden variable κ via a deterministic relation σ = aκ^η. However, as demonstrated in the Supplementary Information, the second hidden variables, σ, correspond to the expected strengths of the nodes regardless of their relation with the first hidden variable κ. In other words, our model is much more general and versatile, and we consider that this feature is worth mentioning in the manuscript.

3.K "Sec. III - "given second moment ⟨ε²⟩": How is this chosen? Is it independent of k_i, k_j, and α?"

The second moment ⟨ε²⟩ is a global parameter of the model and, as such, is independent of the degree of the nodes. However, it does depend on the coupling parameter α between the weights and the metric space, as can be seen from its behaviour as a function of α for the real networks considered in the main text.

3.L "Sec. III - The statement "All the theoretical predictions are confirmed in Supplementary Figure 1." should be made more precise, i.e., what theoretical predictions? Scaling results?"

All the theoretical predictions are derived in the Supplementary Information and are summarised in Sec. III of the manuscript. We have modified the sentence the reviewer is referring to accordingly.

3.M "Fig. 3 - The authors need to give a complete explanation of "atypical topological features"."

We refer the reviewer to our answer to comment 2.C.

3.N "Fig. 3 - Why do some TIV curves increase when α ~ 1?"

The increase of TIV(α) close to α = 1 in Figs. 3a-b is expected and is in fact a consequence of Eq. (7) and of our choice of the probability of connection [i.e., Eq. (3)]. Indeed, substituting Supp. Eq. (23) into Eq. (7) in the main text and neglecting the noise term (whose mean value is close to zero), we obtain Eq. (1). Figure 2 below shows the behaviour of the α-dependent terms of the right-hand side of Eq. (1) for the real networks considered in the main text. For low values of α, we see that the right-hand side of Eq. (1) is an increasing function, which implies that TIV(α) decreases with increasing α (i.e., it is more and more difficult to violate the triangle inequality as α increases). However, all curves reach a plateau at α ≈ 0.8, after which they start to decrease. As expected, these plateaus correspond to the points where the TIV(α) curves start to increase (for some networks this increase is not visible due to the linear scale of the y axis). This discussion has been added to the Supplementary Information.

3.P "The authors do a good job of citing previous research."

We thank the reviewer for this appreciation.

3.Q "It may be helpful to discuss the triangle inequality and clustering in the abstract/intro given that it is a central topic of the paper. Also, I found the intro/abstract to not clearly identify new scientific insights allowed by the new model."

We thank the reviewer for the suggestion; we have mentioned the triangle inequality and put more emphasis on the scientific insights in the abstract, the introduction, and the discussion.
3.R "This research is an important and exciting area of network science, and the work is very high quality -both in philosophy and execution. However, I find the current paper to be lacking the "wow" factor that would justify publication in Nature Communications. The authors have made an interesting observation and developed a state-of-the-art model for it, but they have not illustrated this observation to have important consequences or provide useful insights." We thank the reviewer for her/his kinds words about the quality of our work. However, we believe that a "wow" factor is a very subjective feeling, especially as other readers (like reviewer #2) might think differently. As mentioned in our answer to comment 3.C, we agree with the reviewer that we may have lacked in explaining clearly the implications of our work for the understanding of real weighted networks, which we believe are remarkable. We would like to stress that, in our opinion, the main contribution of our work is the empirical observation that weights in complex networks are influenced in a non-trivial way by some underlying metric structure. As far as we know, this is a novel and important result that extends the hidden/latent geometry paradigm to weighted complex networks. Of course, such empirical result claims for a modelling that would take it into account. This is the reason why we introduce our model, which is able to reproduce the coupling with the metric space in a very simple and elegant way. Our model guaranties that we can fix the local properties of the nodes, that is, their joint degree-strength distribution similar to the one of a real network under study and, simultaneously, change in an independent manner the coupling of the weights with the metric space. This critical property is the one allowing us to gauge the effect of the metric space in real systems. This very same feature is also present in our different models of networks embedded in hidden (hyperbolic) spaces. There, too, we can fix the degree distribution and modify the coupling between topology and metric space so that different levels of clustering arise. This has been widely acknowledged by the community of network scientists and accepted as a new paradigm to describe and characterise complex networks. Our work here goes in the same direction and we are convinced that it will become the standard model for weighted networks embedded in metric spaces in the near future. At the same time, our equations can be understood as the new generation of gravity models applicable to very different domains, including Biology, Information and Communication Technologies, and Social Systems. Current gravity laws are prescribed to the Social Sciences and predict successfully the volume of flows between elements but cannot explain the observed topology of the interactions among them. Our contribution overcomes this limitation and offers for the first time a gravity model that can reproduce both the existence and the intensity of interactions. This opens a new line of theoretic research on the coupling between topology, weighted structure, and geometry in complex networks. On the other hand, our work opens the possibility to use information encoded in the weights of the links to find more accurate embeddings of real networks. 
We can then use these improved embeddings to detect network communities and missing links and, for the first time, give estimates of the weights of such missing links, and to implement navigation and searching protocols, such as greedy routing, which take into account not only the existence of connections but also their intensity. We have modified the discussion section of our paper to emphasise these future research directions.

Reviewers' comments:

Reviewer #1 (Remarks to the Author):

After having read the new version of the manuscript and the authors' responses to the referees' remarks, I remain skeptical about the significance of these results and their suitability for Nature Communications. As a consequence, I still do not recommend publication of this manuscript.
-In their reply 1.E, the authors write "...the concept of a hidden metric space can also be seen as a mathematical tool that can be leveraged to generate realistic networks. For instance, it is the only framework that allows one to generate strong clustering based on pairwise interactions only (due to the triangle inequality of the metric space)". This is not true: even random graphs with given degrees can have a large clustering. This is not often recognised, but dates back to Park and Newman's PRE paper "Origin of degree correlations in Internet and other networks". The reason why this result is overlooked is the widespread use of the approximation that factorizes the connection probability into the product of the end-point degrees. As originally shown by Maslov, Sneppen and collaborators, this approximation is inconsistent with the large value of the maximum degree in real-world networks. If realistic degree sequences are to be replicated, one needs to go beyond the naive factorized approximation. The resulting probability of connection is highly nonlinear (it has a Fermi-function shape) and was derived by Park and Newman in the paper above and in many subsequent papers. This probability function is the correct one for a network with a broad degree distribution and generates a high level of clustering (often matching perfectly the empirical clustering), even if it only accounts for local (degree) properties of nodes, without resorting to any metric pairwise property.
-Therefore the apparent need to introduce metric spaces to replicate high clustering might be merely an artefact of the (incorrect) approximation of the connection probability. Note that, even if it is often said that the factorized probability works well for "sparse networks", this is actually incorrect: a factorized probability does generate sparse networks, but these networks are nevertheless unrealistic in terms of their maximum degree. In other words, real-world networks, although sparse in most cases, are incompatible with the factorized approximation. Compensating for this unrealistic approximation with the introduction of a metric space in order to retrieve an otherwise unexplained large clustering is scientifically incorrect and misleading. The large clustering (or at least a generous portion of it) would more parsimoniously be explained by using only local properties (e.g. degrees), along with the correct nonlinear connection probability accounting for them.
-Even though such a factorized approximation is never introduced explicitly as a building block of the model described in this paper, an equivalent problem is present here in terms of the expected weight being linearly dependent on the expected strengths.
Indeed, as a consequence of this approximation, in the proposed model the expected strengths turn out to be proportional to the corresponding hidden variables, apparently justifying the claim that their model can account for any (joint) degree and strength distribution. Again, this claim of generality is not well founded, and the resulting metric "patterns" might be an artefact compensating for the factorised choice of the expected weights.
-Additionally, since the authors want to decouple local node effects (degrees and strengths) from the (postulated) metric properties, it is not clear why they preliminarily filter most of their networks with the "disparity filter" (by the way, why don't they do this for all networks? In the SI they say that some networks have been filtered this way, and others not, without explanation). By using this filter, the local effects should in principle vanish, so they should be left with "residual networks" where local node properties are no longer relevant and should not be further controlled for. So why are they applying their model to these filtered networks? This procedure is unclear to me. In any case, it raises the doubt of whether the empirical "patterns" that are documented here are actually properties of how the disparity filter operates, rather than properties of the data themselves.
-By the way, the disparity filter assumes that the total strength s of a node is uniformly randomly broken up into the weights of the k edges coming out of the node, irrespective of the degrees at the other endpoints of these edges. This again appears to contradict the well-known fact, used also elsewhere in this paper, that connection probabilities should depend on the degrees at both endpoints of an edge. So here I see some inconsistency in the way the data are analysed.
-Finally, it is not true that this is the first "generation of gravity models" assuming that also the probability of connections should be a gravity-like function. There is a vast literature about the so-called zero-inflated gravity models, which do have a similar dependence of link probabilities on the gravity equation, thus replicating the observed network density (see for instance the published papers by Fagiolo (http://arxiv.org/abs/0908.2086) and Fagiolo and Duenas (https://arxiv.org/abs/1112.2867) and references therein).
In conclusion, I still believe that this paper does not introduce a really new and general paradigm to explain the origin of weights in real networks. It might be forcing the use of metric spaces to compensate for some implicit proportionality assumption (for sure it is partially doing so), it might be partially looking at spurious patterns created by the filtering method used, and it is not the first/only model that has been proposed to understand the empirical weights in weighted networks.

Reviewer #2 decided to provide confidential remarks to the editor only. In them, they continue to praise the value of your work, and they believe that your contribution deserves publication in Nature Communications. At the same time, they explain that in their view some of the criticisms of Reviewers #1 and #3 may be based on the natural difficulty of the language required to describe hyperbolic embeddings, and on the objective difficulty of interpreting what an underlying and new hyperbolic metric structure is really telling us about the networks under study.
And while they believe that you already did a very good job regarding the former point, in terms of explaining your method, they concede that, regarding the latter, a solid answer to the origin of the weights and a direct interpretation of the uncovered hyperbolic structure have not yet been provided. Nonetheless, they remain positive towards the work in light of its potential to stimulate new work.

Reviewer #3 (Remarks to the Author):

I have examined the authors' revised manuscript and their responses to my comments. Although several of my concerns have been adequately addressed, the authors did not directly address several of the main issues that I previously raised. As I previously stated, my overall feeling is that the paper provides a nice contribution to this field and deserves publication in some form and at some venue. However, I cannot support publication in Nature Communications until the issues below are carefully addressed in the manuscript. That said, I now believe the manuscript to be sufficiently impactful to warrant publication in Nature Communications. My recommendation is now 'revise and resubmit.'

Previous concerns not adequately addressed:

(3.A). I believe the authors missed my main concern, which regards my previous statement 'It is unclear whether or not the hyperbolic geometry modeling approach provides further insight than what is possible by studying the dependence of weights on node degrees and triangle participation (i.e., neighbourhood overlap).' I will further explain this concern. Specifically, given the observations that node degree and triangle participation both influence edge weights, the simplest model would be one in which weights depend only on two types of variables: node degrees and triangle participation. My concern regards whether or not the complicated latent-geometry model satisfies the Occam's razor principle. I believe that it does, but given the complexity of their model, the authors should provide strong evidence and a clear discussion for why the hyperbolic-geometry model is superior to a simpler alternative. I point out that a correlation between triangle participation and edge weight is widely believed, even though, as identified by the authors, some results in [19] lack evidence since they do not isolate the effect in some of their experiments. I agree that the authors conduct a more principled experiment with Fig. 1, but the main message of Fig. 1 (i.e., triangles influence edge weights) is not a new idea. It is actually the focus of [19], which is a paper that includes more results than the single experiment upon which the authors improve. The authors' novel claim with Fig. 1 is that a hidden geometry is the origin of this phenomenon. That is, the correlation between triangle participation and edge weight is (or can be) an artifact of a correlation between geometry and edge weight. I believe such a claim requires two types of support: (i) The hidden geometry model can account for the correlation between edge weight and triangle participation. (ii) The hidden geometry model provides a 'better' explanation versus a much simpler model in which edge weights only depend on node degrees and triangle participation. (i) is strongly supported by their study of TIV curves. In my opinion, (ii) is insufficiently described in the paper.
That is, the authors do not clearly explain why adopting a complicated latent-space model for edge weights is superior to a simpler alternative model in which one only takes into account node degrees and triangle participation. Finally, I remind the authors that triangle participation is by definition a topological -not geometrical -property, and in principle, there can simultaneously exist several sources for the appearance of triangles in networks. The authors nicely illustrate one source: a latent geometry. However, it is possible for other sources to exist, such as dynamical processes on the network (e.g., processes for triangle closure that are independent of the metric space). Therefore, the correlation between triangles and weights (e.g., Fig. 1) can indicate a relation between a latent geometry and weights, or it can simply indicate a correlation between triangles and weights (that is, one could argue that the latent-geometry origin of triangles is superfluous). This issue should be addressed in the paper, and I leave it to the authors to decide 'how' to do this. I can suggest some possible extensions that may help support claim (ii). First, I suspect Fig. 3d can be interpreted as a measure for determining whether the correlation between triangles and weights is a fundamental relationship, or if it is an artifact of a latent geometry. Specifically, for E. Coli and the brain, it appears that the correlation between triangles and edge weights can be explained entirely by the latent geometry. If the authors agree, then this should be discussed. Second, I would urge the authors to conduct a small simulation to compare their latent-geometry model to a simpler model in which edge weights only depend on node degree and triangle participation. I believe it would be interesting (and very strong evidence) if TIV curves can discriminate whether the organization of edge weights is better explained by triangle participation or by a latent geometry. (3.C) The authors do a good job of further describing potential applications of their work. They may find it helpful to briefly discuss the implications of this work toward previous research on link prediction, since triangle participation is widely-adopted as a leading approach: Liben-Nowell, D., & Kleinberg, J. (2007). The link-prediction problem for social networks. Journal of the American society for information science and technology, 58 (7) Importantly, the last method specifically aims to predict edge weights, and so the authors' claim that their method 'for the first time, will provide estimates for the weights of such missing links' is false. In fact, the authors do not actually use their method to do link prediction, so this claim about a potential application is an overstatement. (3.D). The authors have chosen to still not provide a technical description in the paper for how clustering arises for their new model. Sec. II begins: 'Clustering, as a reflection of the triangle inequality, is the key topological property coupling the bare topology of a complex system and its effective underlying metric space [6]. In this context, the triangle inequality stipulates that if nodes A and B are close, and nodes A and C are also close, we expect nodes B and C to be close as well; triangles are therefore more likely to exist between nodes that are nearby.' This extremely simplistic explanation is appropriate in Sec. II since the authors have not yet defined their model. However, a similarly simplistic explanation is again stated in Sec. 
IV.A (even after the model is introduced). At this point, I would have found a more technical description for the appearance of triangles very helpful. If the derivation is identical to that in [6], it would helpful to point to the relevant equations in [6] (of course, this requires the notation to be identical), otherwise I suggest including a brief summary in Sec. IV or an appendix. For example, I believe it would be helpful to include some of the discussion in the authors' second paragraph of their response to my comment (3.E). (3.E). I appreciate the more-in-depth description, which will allow me to more precisely state my main concern, which was not addressed in the authors' response. Specifically, if the latent-geometry model implies that the weight w_{ij} of edge (i,j) depends on the variable \psi_{ij} = \Delta_{ij}/\kappa_j\kappa_j , then the accuracy and inference of the model can be directly explored by studying the relationship between these two variables. Instead, the authors study the nature of edge weights w_{ij} through studying triangles. Again, with the Occam's razor principle in mind, it is important that the authors provide evidence and explain in the manuscript why it is beneficial to validate and fit their model using the more complicated approach of studying triangles versus the simpler approach of studying edges. In other words, the most direct way to determine if there if is a relationship between the latent geometry distances x_{ij} and weights w_{ij} is simply to compare these -why resort to studying triangle inequalities? As a related comment: In their response to my previous comment (3.E), the authors write "Rather, it is an abstract distance that quantifies the likelihood of interactions between nodes. Consequently, a direct measurement is not available ... " I am confused why a direct measurement is not available. If one constructs an embedding, then one has x_{ij}. Issues regarding new material: Paragraph just before II: "This model has the critical ability to discriminate between purely local properties (e.g., related to the degree and strength of nodes) and the coupling of the topology and of the weighted organisation with the metric space." --I would say that triangle participation is a local property too; it depends only on a node and its neighbors. Is local vs. nonlocal really the focus of the paper or is it geometric vs non-geometric? 1.A After having read the new version of the manuscript and the authors' responses to the reviewers' remarks, I remain skeptical about the significance of these results and their suitability for Nature Communications. As a consequence, I still do not recommend publication of this manuscript. We thank the reviewer for his/her comments on our manuscript. Below, we provide detailed responses to all of his/her criticisms and hope that the reviewer will be convinced by our arguments. 1.B In their reply 1.E, the authors write "...the concept of a hidden metric space can also be seen as a mathematical tool that can be leveraged to generate realistic networks. For instance, it is the only framework that allows to generate strong clustering based on pairwise interactions only (due to the triangle inequality of the metric space)". This is not true: even random graphs with given degrees can have a large clustering. This is not often recognised, but dates back to Park and Newman's PRE paper "Origin of degree correlations in Internet and other networks". 
The reason why this result is overlooked is the widespread use of the approximation that factorizes the connection probability into the product of the end-point degrees. As originally shown by Maslov, Sneppen and collaborators, this approximation is inconsistent with the large value of the maximum degree in real-world networks. If realistic degree sequences are to be replicated, one needs to go beyond the naive factorized approximation. The resulting probability of connection is highly nonlinear (it has a Fermi-function shape) and was derived by Park and Newman in the paper above and in many subsequent papers. This probability function is the correct one for a network with broad degree distribution and generates a high level of clustering (often matching perfectly the empirical clustering), even if it only accounts for local (degree) properties of nodes, without resorting to any metric pairwise property. Therefore the apparent need to introduce metric spaces to replicate high clustering might be merely an artefact of the (incorrect) approximation of the connection probability. Note that, even if it is often said that the factorized probability works well for "sparse networks", this is actually incorrect: a factorized probability does generate sparse networks, but these networks are however unrealistic in terms of their maximum degree. In other words, real-world networks, although sparse in most cases, are incompatible with the factorized approximation. Compensating this unrealistic approximation with the introduction of a metric space in order to retrieve an otherwise unexplained large clustering is scientifically incorrect and misleading. The large clustering (or at least a generous portion of it) would more parsimoniously be explained by using only local properties (e.g. degrees), along with the correct nonlinear connection probability accounting for them.

We partly agree with the reviewer in that heterogeneous degree distributions generate clustering. However, his/her complaint in this regard is a bit paradoxical given that one of us wrote a paper in 2013 precisely calculating the clustering coefficient of scale-free networks under the configuration model, by explicitly considering the connection probability derived in the work by Park and Newman [see Phys. Rev. E 86, 026120 (2012)]. In that work, we showed that, indeed, clustering can be important in heterogeneous random graphs with γ close to 2. However, we also showed that clustering always vanishes in the thermodynamic limit (even though slowly in some cases). In any case, the reviewer is confused about the origin of clustering as a result of the non-factorization of the connection probability. In fact, it is rather the opposite, as the formula for the clustering coefficient obtained by using the factorized connection probability [see Eq. (1) in Phys. Rev. E 86, 026120 (2012)] leads to an overestimation of the clustering coefficient in general and to a diverging clustering coefficient for γ < 7/3, a result that is obviously wrong. The non-factorized connection probability arises as a consequence of the closure of the network when there are degrees above √N in the network, leading to (negative) structural correlations and, incidentally, to the correct expression for the clustering coefficient, which is obviously non-diverging.
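To make the contrast between the two prescriptions concrete, here is a minimal illustrative sketch (not the code used for the manuscript; the network size, the target degree sequence and the plug-in choice x_i = k_i/√(2m) are our own assumptions) comparing the factorized connection probability with the Fermi-like, non-factorized one and the clustering each produces:

import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
N, gamma = 1000, 2.2
# Heavy-tailed target degree sequence (illustrative choice, P(k) ~ k^-gamma)
k = (rng.pareto(gamma - 1.0, N) + 1.0) * 2.0
two_m = k.sum()
# Plug-in latent variables; matching expected degrees exactly would require
# solving for x_i numerically (the maximum-entropy fit), not done here.
x = k / np.sqrt(two_m)

def sample_graph(fermi):
    G = nx.empty_graph(N)
    for i in range(N):
        for j in range(i + 1, N):
            if fermi:
                p = x[i] * x[j] / (1.0 + x[i] * x[j])   # always below 1
            else:
                p = min(1.0, x[i] * x[j])               # factorized, must be capped
            if rng.random() < p:
                G.add_edge(i, j)
    return G

for fermi in (False, True):
    G = sample_graph(fermi)
    print("Fermi form" if fermi else "factorized form",
          "max degree:", max(dict(G.degree()).values()),
          "average clustering:", round(nx.average_clustering(G), 3))

The factorized probability has to be capped at one precisely for pairs of nodes whose degrees exceed √(2m), and that is the regime in which the two prescriptions, and the amount of clustering they predict, part ways.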
In any case, while some portion of the clustering observed in real networks could be explained by these finite size effects, it is typically much higher than the clustering observed in randomized versions of the same networks. To give support to this statement, we have randomized many real world networks, including those used in our study, by preserving, in one case, the degree distribution and, in a second case, the degree distribution and also the degree-degree correlations of the real networks. Randomizations are performed using the software developed in Scientific Reports 3, 2517 (2013) and Nature Communications 6, 8627 (2015). Figures 1 and 2 show the results. We observe that, in all networks, the clustering coefficient is much larger than in the randomized versions (by more than three sigmas). These results are also valid in most of the real networks we are aware of. In the light of these results, it is thus important to have models able to explain clustering that remain high and finite in the infinite size limit. In this respect, metric spaces (hidden or not) underlying complex networks provide the simplest explanation for its origin. The reason is that metric spaces induce many body interactions out of pairwise interactions only. In a network, one can think of triangles as some evidence of three body interactions among the elements of the network. We are then faced with only two possibilities, either we have a mechanism with genuine three (or more) body interactions, which is a priori unknown, or we assume the existence of a metric space. In our opinion, the latter option is the simplest and most natural. It also allows for analytic tractability and, thus, the ability to compare with real systems. In this respect, we would like to mention our result in Phys. Rev. Lett. 100, 78701 (2008), where we show that the self-similarity properties of several real complex networks can be accounted for with the hypothesis of hidden metric spaces underlying the networks. Besides, when mapping real networks into our models, like the Internet [Nature Communications 1, 62 (2010) (2016)], and compare their embeddings with metadata not included in the graph itself, like country affiliation or biological pathway, we find a very strong congruency, suggesting that our embeddings are not an artifact of the method and reflect the real organization of these systems. As a final note, during these years working in the field of complex systems and complex networks, we have gained a solid reputation as serious scientists. In particular, the quality of our studies about the connection between the topology of complex networks and hidden metric spaces is beyond doubt in the community and our works have been published in leading international journals including Nature, Nature Physics, Nature Communications, Physical Review Letters, and others. Therefore, we would like to ask the reviewer to refrain from using statements of the type "...scientifically incorrect and misleading." about our work. 1.C Even though such a factorized approximation is never introduced explicitly as a building block of the model described in this paper, an equivalent problem is present here in terms of the expected weight being linearly dependent on the expected strengths. Indeed, as a consequence of this approximation, in the model proposed the expected strengths turn out to be proportional to the corresponding hidden variables, apparently justifying the claim that their model can account for any (joint) degree and strength distribution. 
Again, this claim of generality is not founded and the resulting metric "patterns" might be an artefact compensating for the factorised choice of the expected weights.

We should stress that, in our model, weights do not factorize because the distance between two nodes in the metric space cannot be factorized. The reviewer may have in mind another of our previous works [Phys. Rev. E 74, 055101(R) (2006)], where we show that weighted networks have structural correlations. It is important, however, to realize that such structural correlations appear when one considers actual degrees and strengths of nodes, and not their expected values. This is quite different from the case of the bare topology. To generate connections in a graph, the connection probability must be bounded between zero and one, and thus the connection probability cannot be factorized even at the level of hidden variables (or expected values) in strongly heterogeneous networks. In the case of weights, there is no such restriction and expected weights among nodes can be defined in an arbitrary way. Nevertheless, structural correlations at the weighted level will appear due to structural constraints (see Fig. 1 in PRE 74, 055101(R) (2006)). As for our claim about the ability of our model to generate networks with desired correlations between strength and degree, we first notice that, in Eq. (11) of the Supplementary Information, we provide the exact probability for a node with hidden variables κ and σ to have degree and strength k and s, respectively. Combining this result with the joint distribution of hidden variables ρ(κ, σ), we obtain the joint degree-strength distribution. Given that we have complete freedom to choose ρ(κ, σ), we can control the level of correlations between k and s, as claimed in the paper. In particular, we can choose σ ∝ κ^η, which translates into s̄(k) ∝ k^η, as corroborated by our numerical simulations shown in the Supplementary Information. Note that our model can actually generate networks with any value of the exponent η, even if η < 1 (see Fig. S13), something that, to the best of our knowledge, cannot be accomplished with other models of weighted networks.

1.D Additionally, since the authors want to decouple local node effects (degrees and strengths) from the (postulated) metric properties, it is not clear why they preliminarily filter most of their networks with the "disparity filter" (by the way, why don't they do this for all networks? In the SI they say that some networks have been filtered this way, and others not, without explanation). By using this filter, the local effects should in principle vanish, so they should be left with "residual networks" where local node properties are no longer relevant and should not be further controlled for. So why are they applying their model to these filtered networks? This procedure is unclear to me. In any case, it raises the doubt whether the empirical "patterns" that are documented here are actually properties of how the disparity filter operates, rather than properties of the data themselves.

The reason to use the disparity filter in some of the networks is related to the huge average degree of some of these networks and the fact that most of the links contributing to such a large average degree are not significantly related to the main functionality of the network. For instance, in the US airport network, there are a huge number of connections between airports with a number of seats of the order of tens during one whole year.
All these connections are there due to private flights that obviously do not follow the same patterns of connections as the commercial (and thus regular) airline connections between airports. The same applies to other networks, like the world trade web, where we find an enormous number of trade interactions between countries with a total amount traded of the order of 1 million dollars or less. Such trade interactions are extremely volatile, appearing and disappearing every year, and cannot represent a solid trade interaction between the countries to be exploited in the recognition of characteristic interaction patterns. The disparity filter is extremely good at removing such noisy links, revealing the fundamental structure of the system. It is not true that, by using the filter, local effects vanish or that it leads to residual networks. In fact, in PNAS 106, 6483-6488 (2009), we showed that the disparity filter retains a significant fraction of the total weight in the system without altering the local properties (e.g. clustering), the non-linear correlations between strength and degree, and the strength and weight distributions, while reducing significantly the number of links and, thus, revealing the fundamental degree distribution of the network (see the short sketch of the filtering criterion further below). In this respect, it is interesting to mention one of the results in our last paper on the world trade web [Scientific Reports 6, 33441 (2016)]. In this work, we measured the correlation between degrees and countries' gross domestic product (GDP) for the original network and for the network filtered by our disparity filter. Interestingly, the Pearson correlation coefficient in the case of the original network is of the order 0.2 ∼ 0.4 depending on the year, whereas for the filtered network it takes values of the order 0.8 ∼ 0.9. This simple test indicates that the topology of the filtered network is significantly more congruent with real economic factors than the original one. In the new version of the manuscript, we have added a similar discussion about the use of the disparity filter.

1.E By the way, the disparity filter assumes that the total strength s of a node is uniformly randomly broken up into the weights of the k edges coming out of a node, irrespective of the degrees at the other endpoint of these edges. This again appears to contrast with the well-known fact, used also elsewhere in this paper, that connection probabilities should depend on the degrees at both endpoints of an edge. So here I see some inconsistency in the way the data are analysed.

Please notice that, in the disparity filter, the relevance of a link is addressed from both end nodes, and that it is removed only if it is deemed irrelevant for both of them. This procedure restores the symmetry the reviewer was concerned about.

1.F Finally, it is not true that this is the first "generation of gravity models" assuming that also the probability of connections should be a gravity-like function. There is a vast literature about the so-called zero-inflated gravity models which do have a similar dependence of link probabilities on the gravity equation, thus replicating the observed network density (see for instance the published papers by Fagiolo (http://arxiv.org/abs/0908.2086) and Fagiolo and Duenas (https://arxiv.org/abs/1112.2867) and references therein).

We thank the reviewer for pointing out these interesting contributions.
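As an aside, here is the short sketch of the disparity-filter criterion referred to in the reply to 1.D above. It is a minimal illustrative implementation assuming the standard formulation of Serrano et al. (PNAS 2009); the significance level alpha below is a free parameter of the filter (unrelated to the coupling parameter of the model), and G can be any networkx graph with a numeric edge attribute "weight":

import networkx as nx

def disparity_filter(G, alpha=0.05, weight="weight"):
    # A link (i, j) is kept if it is statistically significant from at least
    # one of its endpoints, i.e. if the probability of observing a normalised
    # weight at least as large under the uniform-splitting null model is
    # smaller than alpha.
    keep = set()
    for i in G:
        k = G.degree(i)
        if k <= 1:
            # Degree-one nodes have no meaningful null model; keep their link.
            keep.update(frozenset((i, j)) for j in G[i])
            continue
        s = sum(G[i][j][weight] for j in G[i])
        if s <= 0:
            continue
        for j in G[i]:
            p = G[i][j][weight] / s
            # Significance of this link as seen from node i.
            if (1.0 - p) ** (k - 1) < alpha:
                keep.add(frozenset((i, j)))
    H = nx.Graph()
    H.add_nodes_from(G.nodes(data=True))
    H.add_edges_from((i, j, G[i][j]) for i, j in G.edges()
                     if frozenset((i, j)) in keep)
    return H

In the spirit of the PNAS 2009 analysis, one would then check that the filtered graph retains most of the total weight while containing far fewer links than the original one.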
We would like to stress that nowhere in the manuscript do we claim that it is the "first generation of gravity models"; we rather state that our model offers a "new generation of gravity models". The only "first" claimed in the manuscript concerns the possibility to provide estimates of the weights of missing links in the framework of embeddings of real networks. This wording was a bit misleading and has been modified accordingly. In the new version of the manuscript, we have added a citation to the work by Fagiolo and Dueñas. In any case, previous works were not successful in replicating simultaneously the weighted structure and the topology of complex networks using gravity models, as explicitly recognized for instance in one of the papers pointed out by the referee, Fagiolo and Dueñas (https://arxiv.org/abs/1112.2867): "More generally, the GM performs very badly when asked to predict the presence of a link, or the level of the trade flow it carries, whenever the binary structure must be simultaneously estimated. Therefore, the GM turns out to be a good model for estimating trade flows, but not to explain why a link in the ITN gets formed and persists over time". In contrast, our framework, based on gravity models both for the weights and for the existence of links, not only reproduces well the weighted structure of complex networks but also their topological properties, much beyond the network density which is a very rough topological feature easily reproducible if other topological properties are overlooked. 1.G In conclusion, I still believe that this papers does not introduce a really new and general paradigm to explain the origin of weights in real networks. It might be forcing the use of metric spaces to compensate for some implicit proportionality assumption (for sure it is partially doing so), it might be partially looking at spurious patterns created by the filtering method used, and it is not the first/only model that has been proposed to understand the empirical weights in weighted networks. We hope we have clarified all reviewer's concerns. Replies to the comments of Reviewer #3 I have examined the authors' revised manuscript and their responses to my comments. Although several of my concerns have been adequately addressed, the authors did not directly address several of the main issues that I previously raised. As I previously stated, my overall feeling is that the paper provides a nice contribution to this field and deserves publication in some form and at some venue. However, I cannot support publication in Nature Communications until the issues below are carefully addressed in the manuscript. That said, I now believe the manuscript to be sufficiently impactful to warrant publication in Nature Communications. My recommendation is now 'revise and resubmit.' We thank the reviewer for his/her positive opinion about our work and for the very helpful and constructive comments to improve the quality and presentation of our paper. In the new version of the manuscript, we have followed his/her recommendations and we hope that all the reviewer's concerns are fully addressed and clarified. 3.A I believe the authors missed my main concern, which regards my previous statement "It is unclear whether or not the hyperbolic geometry modeling approach provides further insight than what is possible by studying the dependence of weights on node degrees and triangle participation (i.e., neighbourhood overlap)." I will further explain this concern. 
Specifically, given the observations that node degree and triangle participation both influence edge weights, the simplest model would be one in which weights depend only on two types of variables: node degrees and triangle participation. My concern regards whether or not the complicated latent-geometry model satisfies the Occam's razor principle. I believe that it does, but given the complexity of their model, the authors should provide strong evidence and a clear discussion for why the hyperbolic-geometry model is superior to a simpler alternative. I point out that a correlation between triangle participation and edge weight is widely believed, despite -as identified by the authors -some results in [19] are lacking evidence since they do not isolate the effect some of their experiments. I agree that the authors conduct a more principled experiment with Fig. 1, but the main message of Fig. 1 (i.e., triangles influence edge weights) is not a new idea. It is actually the focus of [19], which is a paper that includes more results than the single experiment upon which the authors improve. The authors' novel claim with Fig. 1 is that a hidden geometry is the origin of this phenomenon. That is, the correlation between triangle participation and edge weight is (or can be) an artifact of a correlation between geometry and edge weight. I believe such a claim requires two types of support: (i) The hidden geometry model can account for the correlation between edge weight and triangle participation. (ii) The hidden geometry model provides a "better" explanation versus a much simpler model in which edge weights only depend on node degrees and triangle participation. (i) is strongly supported by their study of TIV curves. In my opinion, (ii) is insufficiently described in the paper. That is, the authors do not clearly explain why adopting a complicated latent-space model for edge weights is superior to a simpler alternative model in which one only takes into account node degrees and triangle participation. Finally, I remind the authors that triangle participation is by definition a topological -not geometricalproperty, and in principle, there can simultaneously exist several sources for the appearance of triangles in networks. The authors nicely illustrate one source: a latent geometry. However, it is possible for other sources to exist, such as dynamical processes on the network (e.g., processes for triangle closure that are independent of the metric space). Therefore, the correlation between triangles and weights (e.g., Fig. 1 ) can indicate a relation between a latent geometry and weights, or it can simply indicate a correlation between triangles and weights (that is, one could argue that the latent-geometry origin of triangles is superfluous). This issue should be addressed in the paper, and I leave it to the authors to decide "how" to do this. I can suggest some possible extensions that may help support claim (ii). First, I suspect Fig. 3d can be interpreted as a measure for determining whether the correlation between triangles and weights is a fundamental relationship, or if it is an artifact of a latent geometry. Specifically, for E. Coli and the brain, it appears that the correlation between triangles and edge weights can be explained entirely by the latent geometry. If the authors agree, then this should be discussed. 
Second, I would urge the authors to conduct a small simulation to compare their latent-geometry model to a simpler model in which edge weights only depend on node degree and triangle participation. I believe it would be interesting (and very strong evidence) if TIV curves can discriminate whether the organization of edge weights is better explained by triangle participation or by a latent geometry.

Thank you very much for these insightful thoughts, which are indeed very pertinent. First, we would like to comment on our own point of view about the Occam's razor principle. We fully agree with this principle: simpler models should be preferred over more complicated ones if they are able to explain the same empirical facts. However, in this case, it is not entirely clear what it means for one model to be simpler than another. In the case of clustering, for instance, triangles in networks can be interpreted as the signature of three body interactions. To explain this empirical fact, we have two options: either we model the system with a genuine mechanism inducing three body interactions, which is unknown, or alternatively we assume the existence of an underlying metric space combined with pairwise interactions, which induces many body interactions. The question is now: is a model with pairwise interactions on a metric space more complicated than a model with many body interactions without a metric space? In our opinion, metric spaces combined with pairwise interactions are a much simpler explanation for the topologies of real complex networks. Also from the mathematical point of view, this possibility is more interesting as pairwise interactions allow for analytical treatment and, thus, a much simpler comparison with empirical data with a limited number of model parameters. Of course, this discussion mainly applies to the bare network topology and not to the weights. However, if we accept the existence of such a metric space as an explanation for the network topology, it seems also reasonable that the same metric space will, somehow, influence the intensity of the interactions. As for the specific suggestions of the reviewer about a model that would use only topological information, we agree that this is certainly a good exercise. However, it is a difficult task, since the number of such models one could define is very large. Paradoxically, the literature is not very generous in terms of credible models suitable for such an exercise (e.g., models that do not rely on intricate dynamics to assign the weights and, as such, would not pass the Occam's razor principle). In any case, we have opted for the models used in [Nature Physics 8, 429 (2012) and in PNAS 101, 3747 (2004)] and for a new one that generalizes both. If we understand the reviewer's suggestion correctly, the idea is to explain the observed weighted network structure without relying on any metric space whatsoever. Therefore, we first randomize the real network topologies preserving the degree sequence and the average clustering coefficient [for this task, we use the software developed in Scientific Reports 3, 2517 (2013) and Nature Communications 6, 8627 (2015)]. This step is taken to destroy any dependence on any possible metric space underlying the network.
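As an illustration of this randomisation step, a minimal sketch using only degree-preserving rewiring is given below. It is not the cited software, which additionally preserves the average clustering (and, in one variant, the degree-degree correlations) through a more elaborate, annealing-type procedure; the number of swaps, the synthetic stand-in graph and the use of networkx are our own illustrative choices:

import networkx as nx
import numpy as np

def degree_preserving_ensemble(G, n_random=20, seed=0):
    # Compare the clustering of G with an ensemble of rewired graphs in which
    # every node keeps its degree exactly (double edge swaps).
    rng = np.random.default_rng(seed)
    c_real = nx.average_clustering(G)
    c_rand = []
    for _ in range(n_random):
        H = G.copy()
        nx.double_edge_swap(H, nswap=10 * H.number_of_edges(),
                            max_tries=100 * H.number_of_edges(),
                            seed=int(rng.integers(1 << 31)))
        c_rand.append(nx.average_clustering(H))
    mu, sigma = float(np.mean(c_rand)), float(np.std(c_rand))
    z = (c_real - mu) / sigma if sigma > 0 else float("inf")
    return c_real, mu, sigma, z

# Example with a synthetic heavy-tailed graph as a stand-in for a real dataset:
G = nx.barabasi_albert_graph(1000, 3, seed=1)
print(degree_preserving_ensemble(G))

The z-score returned by this kind of comparison is the "more than three sigmas" statement made in the reply to 1.B above.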
Then, we assign weights to the connections according to the following models:

• model A: w_ij ∝ (k_i k_j)^θ, where k_i and k_j are the degrees of nodes i and j, respectively, and θ is a model parameter;

• model B: w_ij ∝ (c_i c_j)^δ, where c_i and c_j are the clustering coefficients of nodes i and j, respectively, and δ is a model parameter;

• model C: w_ij ∝ (k_i k_j)^µ (c_i c_j)^ν, where µ and ν are model parameters. This model accounts for the fact that weights among high-degree nodes are higher, but also that weights among highly clustered nodes are higher.

For all models, the exponents θ, δ, µ and ν are chosen as those minimizing the χ² statistic for the corresponding dataset, and the results are shown in Figs. S14-S41 in the Supplementary Material. Although the three models preserve the degree sequence, which is an advantage over our model, the degree-dependent clustering of the synthetic networks is reproduced worse than with our model. We find that models A and C can reproduce fairly well the strength distribution, or at least its general shape. This is due to the strong influence of the topology on the weighted organization, and it illustrates well the reason why we factorized the weights in Fig. 1 to account for the effect of the topology. However, except for the world trade web and the US airports network, we find that the three models reproduce poorly the distribution of weights and the disparity. This is not particularly surprising in the case of the US airports network, since this is the network for which our model predicts a weaker dependence on the metric space, leaving weights mainly a function of nodes' degrees. It is not surprising in the case of the world trade web either, given the small size of the network and the strong fluctuations present in it. Nevertheless, our model is the only one that consistently reproduces the properties of the real weighted networks with accuracy. More importantly, the three models perform very badly at reproducing the triangle inequality curves for all networks (as shown in Figs. S14-S41). As pointed out by the referee, this provides very strong evidence that our assumption about the metric origin of weights is a much better explanation of the real data. We have added a new section in the Supplementary Information to include this new analysis.

3.B The authors do a good job of further describing potential applications of their work. They may find it helpful to briefly discuss the implications of this work toward previous research on link prediction, since triangle participation is widely adopted as a leading approach: Liben-Nowell, D., & Kleinberg, J. (2007). The link-prediction problem for social networks. Journal of the American Society for Information Science and Technology, 58(7), 1019-1031. Lü, L., & Zhou, T. (2010). Link prediction in weighted networks: The role of weak ties. EPL (Europhysics Letters), 89(1), 18001. Zhao, Jing, Lili Miao, Jian Yang, Haiyang Fang, Qian-Ming Zhang, Min Nie, Petter Holme, and Tao Zhou. "Prediction of links and weights in networks by reliable routes." Scientific Reports 5 (2015). Importantly, the last method specifically aims to predict edge weights, and so the authors' claim that their method "for the first time, will provide estimates for the weights of such missing links" is false. In fact, the authors do not actually use their method to do link prediction, so this claim about a potential application is an overstatement.
We thank the referee for pointing out interesting publications (especially the third one which had somehow slipped under our radar) and we agree that our choice of wording is a bit ambiguous. Embeddings of unweighted networks in metric spaces have already been shown to permit the prediction of missing links [see for instance Nature 489, 537-540 (2012)], and we simply meant that our model now allows to extend this powerful methodology to weighted networks. In other words, "for the first time" was referring to "the first time" in the framework of networks embedded in metric space. We have corrected this ambiguity in the main text and added these new references. Thank you very much for this suggestion. In the previous version of the manuscript, we decided not to include such details because they have already been discussed in our previous publications. However, we agree that by adding such discussion the paper becomes more self-contained. Therefore, in the new version of the manuscript we have included the discussion mentioned by the reviewer in Sec. IV. 3.D I appreciate the more-in-depth description, which will allow me to more precisely state my main concern, which was not addressed in the authors' response. Specifically, if the latent-geometry model implies that the weight w ij of edge (i, j) depends on the variable ψ ij = ∆ ij /κ j κ j , then the accuracy and inference of the model can be directly explored by studying the relationship between these two variables. Instead, the authors study the nature of edge weights w ij through studying triangles. Again, with the Occam's razor principle in mind, it is important that the authors provide evidence and explain in the manuscript why it is beneficial to validate and fit their model using the more complicated approach of studying triangles versus the simpler approach of studying edges. In other words, the most direct way to determine if there if is a relationship between the latent geometry distances x ij and weights w ij is simply to compare these -why resort to studying triangle inequalities? As a related comment: In their response to my previous comment (3.E), the authors write "Rather, it is an abstract distance that quantifies the likelihood of interactions between nodes. Consequently, a direct measurement is not available ..." I am confused why a direct measurement is not available. If one constructs an embedding, then one has x ij . There are some subtle issues here. We can, of course, find an embedding of a given network and then compare weights directly against x ij . This is, indeed, the main idea behind the plot in Fig. 3d. However, to do so, we use a statistical inference technique that relies on the assumption that the network topology has been generated by the model. Therefore, one could argue that, since we are fitting the data to the model, it is not surprising to find a metric relation with the weights. This is, of course a misleading concern because to find the embeddings we do not use information from the weights. In any case, for the moment, we only know how to do embeddings of unweighted networks. In this manuscript, we are precisely proposing the geometric model for weighted networks, which is the first step needed to propose an embedding method for weighted networks in the future. 
The consideration of weights in the embedding process will certainly change the inferred coordinates of the nodes, so that distances inferred from the topology, although significantly correlated with weights, are not enough to explain the weighted structure of networks, as expected. For these two reasons, we wanted a method that would be able to find metric dependencies without relying on any embedding, that is, using only the network topology and the actual weights. We find that our method to compute the triangle inequality curves fits this purpose very well. In the new version of the manuscript, we have tried to be clearer in this respect. 3.E Paragraph just before II: "This model has the critical ability to discriminate between purely local properties (e.g., related to the degree and strength of nodes) and the coupling of the topology and of the weighted organisation with the metric space." -I would say that triangle participation is a local property too; it depends only on a node and its neighbors. Is local vs. nonlocal really the focus of the paper or is it geometric vs non-geometric? Perhaps we have not been very clear with this sentence. What we mean is that our model allows us to fix the degree-strength distribution independently of the coupling with the metric space. Therefore, it allows us to discount the effects of the degree-strength structure to reveal the coupling with the metric space in real networks. Imagine, for instance, a model without this property, one where by changing the coupling with the geometry we would obtain a completely different degree-strength distribution. Such model would be extremely difficult to contrast against real weighted networks. In the new version of the paper, we have rephrased this paragraph to clarify its meaning. I am very sorry if the authors got offended by my use of words such as "misleading" or "incorrect". My intention was not that of offending, of course, but that of pointing out various aspects of this research that may indeed lead to confusing interpretations, and that in my view do not justify its publication in a non-specialised and broad-audience journal like Nature Communications. I also did not want to cast doubts on how respected some of the authors in this manuscript are internationally (I know they are). I just wanted to point out that these authors have carried out much better research in the past, and that this particular work does not raise particular scientific interest in my opinion. Since I have been asked to have a final critical look at the revised manuscript, I hope I can better explain the main reasons for my judgement in this report. I am not going to go over all the points of my criticism again (from the authors' last reply, it is clear that we disagree at many points); I just want to emphasise the main concerns, which do survive and remain serious after the authors' revisions to the manuscript. First of all, I hope the authors will agree that the value of their work has to be assessed NOT in relation to the mathematical model itself (proposing an abstract mathematical model of embedded weighted networks is certainly interesting but not exciting, and does not per se deserve publication in Nature Communication), BUT to the degree to which such model can explain the empirical structure of real-world networks. So I should not judge their results for synthetic networks, as these results are irrelevant for the value of the paper. This restricts the relevant assessment of the paper to the last four figures, i.e. 
the single figure with the triangle inequality violation (TIV) curve for the 3 real networks and the three 6-panel figures with the distributions/spectra of various topological networks properties. When it comes to this crucial empirical analysis, my concerns are really serious. The consistency between the model and real networks is not convincingly studied, in my opinion, for the following reasons. 1) Only three networks are analysed, while the authors claim that their mechanism might be general and explain the nature of weights in generic real-world networks. Replicating the analysis AS IT IS on more networks would however still not be enough, because the tests of the consistency between model and data are unsatisfactory, due to the other two reasons below. 2) In the 6-panel figures, the consistency is studied only in terms of overall properties like the degree distribution, the strength distribution, the strength-degree relationship, the weight distribution, the disparity, and the clustering-degree relationship. Now, for the first three properties the agreement is totally unsurprising, given that their model (as they repeatedly mention throughout the paper) can control for any degree distribution, any strength distribution, and any form of degree-strength correlation. Coming to the remaining three properties (weight distribution, disparity and clustering), my main concern is that, for many real networks, it turned out that these properties (or very similar ones) can be explained very well even WITHOUT invoking any coupling to an underlying metric space. See for instance the works by Garlaschelli and coauthors about maximum-entropy models of weighted networks, where it was shown that many properties of real weighted networks (including the WTW studied here) can be explained on the basis of strength and degrees alone. (By the way, I now realise that a recent extension of these models to the case of distance-dependent networks, http://arxiv.org/abs/1506.00348, appeared prior to the manuscript under review here but is not cited). Now, given that the model proposed here has an extra parameter (the coupling alpha with the postulated metric space), and that this parameter has a special value for which no coupling is realised, it is obvious that, purely because of the presence of an extra parameter, the model can fit the data better than a model without such a parameter, but where the degrees and strengths can be equally controlled for. Moreover, the agreement with the empirical weight distribution is not a strong test of their model, as one would like the latter to replicate (modulo the noise) each individual weight one by one, and not the statistical distribution alone (the weight distributions of the model and the data can be identical even if no single weight is correctly replicated by the model). This leads me to the conclusion that the 6-panel figures are not conclusive about the agreement between the model and the data: simpler models without the metric hypothesis may lead to equally good results. 3) The only remaining real test of their hypothesis is the TIV curve. Now, I did not realise in my first reading of the manuscript that this quantity only tests the triangular inequality on the REALISED TRIANGLES in the network. It is clear that this restricts the analysis to the triples of nodes for which, at a purely topological level, the triangular inequality is already most likely to be realised. 
So the TIV (which is the ratio of the REALISED triangles that violate the triangular inequality to the total number of REALISED triangles) is a very weak measure of their hypothesis. I understand that the authors propose a sort of separation between the topology (which is predetermined assuming a metric coupling) and the weights (which are established on the realised links, again assuming a metric coupling). It is however difficult to become convinced that one should not base the analysis of the violation of triangular inequality in real weighted networks to ALL the triples of nodes, including those that are not realised triangles. Clearly, if V-shaped triples of nodes are included in the analysis, the violation of triangular inequality can presumably only get much bigger, leaving us little room to believe that hidden metric spaces are indeed at play behind real weighted networks. Note that, while an analysis of Vshapes (or wedges) would be not so informative for binary networks, it would be very informative for weighted networks, given that I expect many V-shapes with two links with a strong weights (e.g. two peripheral nodes connected to the same hub) and a missing third link (between the two peripheral nodes. These weighted patterns appear to be in stark contrast with the metric hypothesis for weighted networks, and the fact that they are omitted in this analysis can be quite deceptive (again, no offence meant). Reviewer #3 (Remarks to the Author): I have reviewed the manuscript "The geometric nature of weights in real complex networks" for the third time, and now recommend publication of the manuscript in Nature Communications. The authors have carefully and adequately addressed the concerns I previously raised. Overall, I find the paper to be a pioneering contribution for hyperbolic embeddings of weighted networks, a very important topic for network science. 1. Replies to the comments of Reviewer #1 We thank the referee for his/her last report although we are sorry for not being able to convince him/her about the importance of our work. Below, we provide detailed responses to the last comments by the referee that we hope will help to clarify all his/her doubts about our work. We fully agree with the referee that the value of our work is not in our model but on the fact that it can explain very well the patterns that we observe in real networks. We are, however, a bit surprised about the description that the referee makes of our work. Our manuscript starts with an empirical analysis of seven (not three) real weighted networks from very different domains. This is shown in figure one, where we show that there is a different weighted organization of links that participate in triangles with respect to those that do not participate in triangles. We interpret this empirical finding as a signature of an underlying metric space and, then, we introduce our geometric model to explain such empirical observations. Our work is not focused on the model as the referee suggests, even though we strongly believe that our model is, at present, the best model for weighted networks in the market. We do not understand either why the referee talks about the last four figures because our manuscript has only four figures. Besides, the single figure with the TIV curve is actually a 4-panel figure with seven (not three) real networks and we only show one 6-panel figure with the various topological networks properties, the rest of 6-panel figures are included in the Supplementary Information. 
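As a concrete illustration of the kind of comparison behind Fig. 1, the sketch below contrasts the weights of links that do and do not participate in triangles after a crude degree correction. This is our own simplified version, not the exact normalisation used in the manuscript, and the division by the product of endpoint degrees is an illustrative choice only:

import networkx as nx
import numpy as np

def triangle_weight_comparison(G, weight="weight"):
    # Median degree-corrected weight for edges inside vs outside triangles.
    in_tri, out_tri = [], []
    for i, j, d in G.edges(data=True):
        # Number of triangles this edge participates in = common neighbours.
        n_tri = len(set(G[i]) & set(G[j]))
        # Crude correction for the degrees of the two endpoints.
        corrected = d[weight] / (G.degree(i) * G.degree(j))
        (in_tri if n_tri > 0 else out_tri).append(corrected)
    med = lambda v: float(np.median(v)) if v else float("nan")
    return med(in_tri), med(out_tri)

If the weights carry a metric signal of the kind discussed in the manuscript, the first median is expected to exceed the second; the TIV curves provide the sharper, model-based version of this comparison.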
1.A When it comes to this crucial empirical analysis, my concerns are really serious. The consistency between the model and real networks is not convincingly studied, in my opinion, for the following reasons. 1) Only three networks are analysed, while the authors claim that their mechanism might be general and explain the nature of weights in generic real-world networks. Replicating the analysis AS IT IS on more networks would however still not be enough, because the tests of the consistency between model and data are unsatisfactory, due to the other two reasons below.

Please notice that we analyse seven different real weighted networks from very different domains, and not three.

1.A 2) In the 6-panel figures, the consistency is studied only in terms of overall properties like the degree distribution, the strength distribution, the strength-degree relationship, the weight distribution, the disparity, and the clustering-degree relationship. Now, for the first three properties the agreement is totally unsurprising, given that their model (as they repeatedly mention throughout the paper) can control for any degree distribution, any strength distribution, and any form of degree-strength correlation.

Of course, these measures were included as a consistency check of the model, so that we are sure that, indeed, our model does what we claim in the manuscript it does, that is, to have full control of the joint degree-strength distribution, regardless of the level of coupling with the metric space.

1.A Coming to the remaining three properties (weight distribution, disparity and clustering), my main concern is that, for many real networks, it turned out that these properties (or very similar ones) can be explained very well even WITHOUT invoking any coupling to an underlying metric space. See for instance the works by Garlaschelli and coauthors about maximum-entropy models of weighted networks, where it was shown that many properties of real weighted networks (including the WTW studied here) can be explained on the basis of strength and degrees alone. (By the way, I now realise that a recent extension of these models to the case of distance-dependent networks, http://arxiv.org/abs/1506.00348, appeared prior to the manuscript under review here but is not cited). Now, given that the model proposed here has an extra parameter (the coupling alpha with the postulated metric space), and that this parameter has a special value for which no coupling is realised, it is obvious that, purely because of the presence of an extra parameter, the model can fit the data better than a model without such a parameter, but where the degrees and strengths can be equally controlled for.

We strongly disagree here. The fact that we have an extra parameter by no means implies that the model can fit the data better. Imagine that you have a model that explains some empirical observations but fails in some others. Now you add a new mechanism that is totally opposite to the real nature of the system under study. Such a model, even with more parameters, would not improve the agreement with the data. In any case, the models that you mention are not good in general at reproducing the local heterogeneity of weights, as measured by the disparity measure, or the weight distribution. This can be checked in the new set of numerical experiments that we performed in response to the second referee and that we included in the Supplementary Information in the previous resubmission.
First notice that simpler models cannot reproduce very well the weight distribution, whereas our model is very good at this job (in fact the shape of the weight distribution is strongly dependent on the coupling, please see Supplementary Figure 2). Second, what you mention about replicating weights one by one is, in fact, related to the disparity measure, that is very well reproduced by model, as opposed to models without a metric space. 1.A 3) The only remaining real test of their hypothesis is the TIV curve. Now, I did not realise in my first reading of the manuscript that this quantity only tests the triangular inequality on the REALISED TRIANGLES in the network. It is clear that this restricts the analysis to the triples of nodes for which, at a purely topological level, the triangular inequality is already most likely to be realised. So the TIV (which is the ratio of the REALISED triangles that violate the triangular inequality to the total number of REALISED triangles) is a very weak measure of their hypothesis. I understand that the authors propose a sort of separation between the topology (which is pre-determined assuming a metric coupling) and the weights (which are established on the realised links, again assuming a metric coupling). It is however difficult to become convinced that one should not base the analysis of the violation of triangular inequality in real weighted networks to ALL the triples of nodes, including those that are not realised triangles. Clearly, if V-shaped triples of nodes are included in the analysis, the violation of triangular inequality can presumably only get much bigger, leaving us little room to believe that hidden metric spaces are indeed at play behind real weighted networks. Note that, while an analysis of V-shapes (or wedges) would be not so informative for binary networks, it would be very informative for weighted networks, given that I expect many V-shapes with two links with a strong weights (e.g. two peripheral nodes connected to the same hub) and a missing third link (between the two peripheral nodes. These weighted patterns appear to be in stark contrast with the metric hypothesis for weighted networks, and the fact that they are omitted in this analysis can be quite deceptive (again, no offence meant). Please notice that our test of the triangle inequality is performed without the embedding of the network. Instead, we use the relation between hyperbolic distances and weights to perform it by estimating distances on the basis of observed weights. Therefore, the referee's suggestion of using wedges is not possible because we cannot infer the distance between the two disconnected nodes. However, we can perform a similar test to the TIV curve on wedges to check the hidden metric space hypothesis. Suppose that nodes i, j and k form a wedge in which nodes j and k are not connected. According to the hypothesis, we expect the distance x jk between j and k, the disconnected pair, to be larger than the other two distances, x ij and x ik . Therefore, out of the three possible orderings to test the triangle inequality, the one that can be violated is x ij + x ik ≥ x jk . 
Even though we have no access to the value of x_jk without a value for ω_jk, we expect it to be larger than R because j and k are not connected and, thus, x_jk ≥ R. Therefore, the only clear violation we can detect in wedges is

x_ij + x_ik < R.    (2)

In other words, assuming x_jk ≥ R for disconnected pairs, inequality (2) implies the violation of the triangle inequality: x_ij + x_ik < R ≤ x_jk ⇒ x_ij + x_ik < x_jk. However, notice that the violation of the triangle inequality does not necessarily imply inequality (2); if x_jk > x_ij + x_ik ≥ R, the triangle inequality is violated but inequality (2) is not satisfied. Consequently, contrary to the referee's intuition, if the hidden metric space hypothesis is true, not only should the fraction of wedges satisfying inequality (2) be very small for α > α_real, but it should also be smaller than the fraction of violations of the triangle inequality computed over topological triangles, that is, the TIV curve. Figure 1 shows the comparison between the TIV curve and the fraction of wedges satisfying inequality (2), as a function of α, for a synthetic network as well as for the E. Coli network. This new curve decays, in both cases, to zero at the same value of α as the TIV curve. These results confirm the conclusions presented above, while providing further evidence of the metric nature of weights in real weighted networks.
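A minimal sketch of both checks is given below. This is our own illustrative code: the mapping from observed weights to per-edge distance estimates is model-dependent and is treated here as a given dictionary d, and R stands for the connection radius of the embedding.

import networkx as nx
from itertools import combinations

def tiv_and_wedge_fractions(G, d, R):
    # d[(i, j)] is an estimated metric distance for every existing edge.
    # Returns (fraction of realised triangles violating the triangle
    # inequality, fraction of wedges satisfying d_ij + d_ik < R).
    def dist(a, b):
        return d[(a, b)] if (a, b) in d else d[(b, a)]

    triangles = set()
    wedge_viol = wedge_tot = 0
    for i in G:
        for j, k in combinations(G[i], 2):
            if G.has_edge(j, k):
                triangles.add(frozenset((i, j, k)))
            else:
                # Wedge centred on i with disconnected endpoints j and k.
                wedge_tot += 1
                if dist(i, j) + dist(i, k) < R:
                    wedge_viol += 1

    tri_viol = 0
    for tri in triangles:
        a, b, c = tri
        x, y, z = dist(a, b), dist(a, c), dist(b, c)
        # Triangle inequality is violated iff the largest distance exceeds
        # the sum of the other two.
        if max(x, y, z) > (x + y + z) - max(x, y, z):
            tri_viol += 1

    return (tri_viol / max(len(triangles), 1),
            wedge_viol / max(wedge_tot, 1))

Sweeping the coupling parameter changes the distance estimates in d and hence traces out curves of the kind compared in the figure mentioned above.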
GLOBAL ANALYTIC FIRST INTEGRALS FOR THE SIMPLIFIED MULTISTRAIN/TWO-STREAM MODEL FOR TUBERCULOSIS AND DENGUE FEVER We provide the complete classification of all global analytic first integrals of the simplified multistrain/two-stream model for tuberculosis and dengue fever that can be written as with β1, β2, b, γ1, γ2, ν ∈ ℝ. Introduction The nonlinear ordinary differential equations or simply the differential systems appear in many branches of applied mathematics, physics, and in general in applied sciences. Since generically the differential systems cannot be solved explicitly, the qualitative information provided by the theory of dynamical systems is, in general, the best that one can expect to obtain. For a two-dimensional differential system the existence of a first integral determines completely its phase portrait, i.e. the description of the domain of definition of the differential system as union of all the orbits or trajectories of the system. To provide the phase portrait of a differential system is the main objective of the qualitative theory of the differential systems. Thus for two-dimensional differential systems one of the main questions is: How to recognize if a given planar differential system has a first integral? In this paper we characterize for a planar differential system depending on six parameters what are the values of these parameters for which there exists a global analytic first integral. For those systems having such a first integral it is possible to describe their phase portraits, and to understand all their qualitative dynamics. Moreover these integrable systems also provide some information about the dynamics of the nearest systems. Now we shall present the planar differential system whose integrability we shall study in this paper. Incomplete treatment of patients with infectious tuberculosis may not only lead to relapse but also to the development of antibiotic resistant, which is one of the most serious health problems in our society. In this direction there are some different models. The differential system studied here is the simplified multistrain model of [2] for the transmition of tuberculosis and to the coupled two-stream vector-based model in [3]. This model has been poorly studied up to now, in the sense that only the behavior of the dynamics near the equilibrium points has been well understood, see [3]. Better contributions for understanding the global dynamics of this model has been done in [7] from the viewpoint of symmetry analysis and in [8] using the singularity analysis theory, where the authors identify some combinations of parameters for which the system has a first integral. Our goal is to provide a complete description of all global analytic first integrals that this system exhibits for different sets of values of the parameters. More precisely, the model which we want to discuss in detail was presented in [9] (see Eq. (14)). This model isẋ where β 1 and β 2 represent the infection rates for the two strains in the case of the tuberculosis model and for the two vectors in the Dengue fever model, ν is the common contact rate of infection, b is the common birth and death rate and γ 1 and γ 2 are the recovery rates. This model is a caricature of the system in [2] and has two infections compartments corresponding to the two infectious agents. The variables represent proportions of a constant population which has been scaled to unity, that is, x + y + z = 1. 
Then imposing the constraint z = 1 − x − y to our model (1) we get that the three-dimensional system (1) becomes the two-dimensional systeṁ in R 2 . Our objective is to characterize the existence of global analytic first integrals of system (2). Here a global analytic first integral or simply an analytic first integral is a non-constant analytic function H : R 2 → R whose domain of definition is the whole R 2 , and it is constant on the solutions of system (2). This last assertion means that for any solution (x(t), y(t)) of (2) we have that We shall provide a full classification of the existence of global analytic first integrals for system (2). Our main theorem is the following. The explicit expressions for the global analytic first integrals can be found in the proof of the theorem. Along this paper N denotes the set of positive integers, Z + denotes the set of non-negative integers, Q + denotes the set of non-negative rational numbers and Q − denotes the set of negative rational numbers. Theorem 1. The unique systems (2) having a global analytic first integral are the following ones. The proof of Theorem 1 is given in Sec. 2. Proof of Theorem 1 Note that system (2) is a special case of the quadratic Lotka-Volterra systemṡ where a, b, c, A, B, C ∈ R. The existence of global analytic first integrals for system (3) was been studied in [5]. The authors in [5] reduce the study of the 6 parameter family of the quadratic Lotka-Volterra system (3) to the study of 12 subfamilies having 1, 2, 3 or 4 parameters. More precisely, for completeness of the paper, we provide the main results concerning the global analytic integrability of system (3) in the appendix of the paper. The strategy of the proof of Theorem 1 will be as follows: we will put system (2) in one of the subfamilies of system (3) in Theorem 4 and then we will apply the results of Theorem 5. Preliminary results In this section we introduce two auxiliary results that will be used through the paper. We write Eq. (2) as the systemẋ Let f (x, y) = (f 1 (x, y), f 2 (x, y)). We will denote by Df (0) the Jacobian matrix of system (14) at (x, y) = (0, 0) and by Df the Jacobian matrix of system (14) at an arbitrary point (x, y) that will be explicitly specified. The following result is due to Poincaré (see [1]) and its proof can be found in [4]. Theorem 2. Assume that the eigenvalues λ 1 and λ 2 of Df at some singular point (x,ȳ) do not satisfy any resonance condition of the form Then system (14) has no global analytic first integrals. The following result was proved in [6]. Proof of Theorem 1 We separate the proof into different cases. We consider different subcases. (2) we obtain the systeṁ We consider three subcases. Subcase 1.1.1: ν = 0. Then system (2) becomeṡ Theorem 4). We can take α = β = 1. In view of Theorem 5, we have a global analytic first integral if and only if: (1) b = −γ 1 and in this case a global analytic first integral is H = x. which is system (lv3), after a redefinition of the parameters (see Theorem 4). In view of Theorem 5, and since the coefficient of y 2 inẏ is 1, in this subcase there is no analytic first integral. We consider two subcases. It remains to study the case pp 1 − qq 1 > 0. System (6) becomeṡ We will proceed by contradiction. Assume that F (x, y) is an analytic first integral of system (7). Without loss of generality we can always assume that it has no constant term. 
Then F (x, y) must satisfy We write We will prove by induction that Clearly, F 0 (y) satisfies (8) restricted to x = 0, that is, and since F has no constant term F 0 (y) = 0 and (10) is proved for k = 0. Now we assume that (10) is true for k = N − 1 (with N ≥ 1) and we will prove it for k = N . In view of (9) we have where G(x, y) satisfies, after simplifying by x N , Restricting (11) to x = 0 (since G(0, y) = F N (y)) we get Since qq 1 − pp 1 < 0 and F N (y) must be global analytic it follows that K N = 0 and then F N (y) = 0 which concludes the proof of (10). Subcase 1.2.1: ν = 0. We take α = β = γ = 1 and we obtain the systeṁ which is a particular case of system (lv2) (see Theorem 4). By Theorem 5 we have a global analytic first integral if and only if: (1) b + γ 1 = 0, and a global analytic first integral is H = x. (2) β 2 = pν/q with p, q ∈ N and a global analytic first integral is x = x(y + 1),ẏ = y We consider two subcases. (2) we obtaiṅ We consider four different subcases. Subcase 1.3.1: β 2 + ν = 0. Then we take α = β = γ = 1 and we obtain the systeṁ which is system (lv6) (see Theorem 4). In view of Theorem 5 it has a global analytic first integral if and only if β 2 = 0 and in this case a global analytic first integral is H = y. Subcase 1.3.2: β 2 + ν = 0 and β 2 = 0. Taking α = − 1 ν , γ = 1 and β = − 1 ν we get the systeṁ which is system (lv8) (see Theorem 4). In view of Theorem 5 we have that it has a global analytic first integrals if and only if one of the following conditions hold: (1) β 1 = 0 then a global analytic first integral is H = y b+γ 1 e x−νy . (2) β 1 = −p/q with p, q ∈ N then a global analytic first integral is which is the same as system (lv9) (see Theorem 4). In view of Theorem 5 it has a global analytic first integral if and only if β 1 β 2 +ν , β 1 −ν and β 1 −ν β 2 − β 1 β 2 +ν have all the same signs, and a global analytic first integral is which is system (lv10) (see Theorem 4). In view of Theorem 5 we have no global analytic first integrals. Then, doing the rescaling of variables (3) we obtain the systeṁ We distinguish the following two subcases. , system (13) goes over to the systeṁ We first consider the case in which In this case the eigenvalues of Df (0) are β 1 −b−γ 1 β 2 −b−γ 2 and 1. By the hypotheses for any k 1 , k 2 ∈ Z + with k 1 + k 2 > 0, we have k 1 + k 2 Thus by Theorem 2 system (15) has no analytic first integrals. Now we consider the case in which β 1 − b − γ 1 = 0. In this case the eigenvalues of Df (0) are 1 and 0. Therefore, since (x, y) = (0, 0) is isolated, by Theorem 3 we get that system (15) has no analytic first integrals. Subcase 2.2: β 2 (β 1 −b−γ 1 ) = 0. Doing the change of variables (x, y) → (y, x) it is immediate to check that the expressions of β 1 and β 2 and γ 1 and γ 2 are interchanged. This change of variables pass to a new system satisfying the condition of Case 1. So this case has been studied and we do not obtain new cases of analytic integrability by checking it. (lv9) with (a, b) = (1, 1) and a − 1, (1 − b)a and b − a have all the same signs, then H = x |a−1| y |a(1−b)| ((a − 1)x + (b − 1)y) |b−a| .
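The notion of global analytic first integral used throughout this classification can be checked mechanically: H is a first integral of x' = f1, y' = f2 exactly when H_x f1 + H_y f2 vanishes identically. The following sympy sketch implements this check; it uses a classical stand-in system (the harmonic oscillator) rather than system (2) itself, purely for illustration.

```python
# Minimal sympy check of the first-integral condition for a planar system
# x' = f1(x, y), y' = f2(x, y): H is a first integral iff H_x*f1 + H_y*f2 == 0.
import sympy as sp

x, y = sp.symbols("x y")

def is_first_integral(f1, f2, H):
    """True if H is constant along the solutions of x' = f1, y' = f2."""
    dH_dt = sp.diff(H, x) * f1 + sp.diff(H, y) * f2
    return sp.simplify(dH_dt) == 0

# Stand-in example: the harmonic oscillator x' = y, y' = -x with H = x^2 + y^2.
print(is_first_integral(y, -x, x**2 + y**2))   # True
print(is_first_integral(y, -x, x**2 - y**2))   # False
```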
15143150
s2orc/train
v2
2012-10-06T22:51:57.000Z
2012-10-06T00:00:00.000Z
Novel Framework for Mobile Collaborative Learning (MCL) to Substantiate Pedagogical Activities Recent studies show that MCL is a rapidly growing research paradigm, particularly in distance and online education. MCL provides features and functionalities that allow all participants to obtain knowledge. The deployment of emerging technologies and the fast-growing trend toward MCL have attracted people to develop learning management systems, virtual learning environments and conference systems with MCL support. All of these environments lack a sufficiently supportive framework. In addition, several major challenges of open, large-scale, dynamic and heterogeneous environments remain unaddressed when developing MCL for education and other organizations. These issues include knowledge sharing, faster delivery of contents, requests for modified contents, complete access to an enterprise data warehouse, delivery of large rich multimedia contents (video-on-demand), asynchronous and synchronous collaboration, support for multiple models, provision for archive updating, a user-friendly interface, middleware support and virtual support. To overcome these issues, this paper introduces a novel framework for MCL consisting of four layers with several functional components, which give users access to the required contents from an enterprise data warehouse (EDW). The framework provides information regarding course materials and easy access to grades and labs. Applications running on this framework support collaboration through the exchange and delivery of communication contents, including a platform for group discussion, short message service (SMS), email, audio, video and video-on-demand, providing on-line information to students and the other people taking part in the collaboration. INTRODUCTION Computer-supported collaborative learning (CSCL) is one of the most promising pedagogical paradigms, supporting the science of learning and research trends in education. This paradigm is widely acknowledged and implemented in schools, colleges, universities and other organizations across the world. On the other hand, CSCL applications are expensive and unaffordable to deploy for many educational institutions and small organizations. Most active communities are still unable to use this technology for delivering and stimulating knowledge. In addition, busy lives and tight schedules do not allow people to rely on static technology; they want to obtain learning materials in a dynamic environment, anytime and anywhere. Mobile devices are the only attractive option with rich features that can provide MCL in an unconstrained environment to meet pedagogical objectives [1]. The concept of mobile-based learning is quite different from classroom-based learning. Collaboration is a synchronous and coordinated activity built on a sustained effort to construct and maintain a shared notion of a problem. MCL in education involves joint intellectual efforts initiated by students or teachers. It can play an increasingly important part in education, creating interaction and promoting awareness [4,13]. MCL enhances critical analysis and helps students clarify their concepts with other students [12]. MCL is a key factor in fostering brainstorming in groups, because students with different thoughts on a variety of possible variables can perform their activities in peers.
It provides sufficient time to students for interaction in order to discuss and exchange the knowledge through higher degree of thinking. The major focus of MCL is to create cooperation instead of competition and take a task as challenge to accomplish as graphic organizer. Students also explore various new things and they are less reliant on teacher's feedback [6,11]. MCL also exhibits intellectual synergy of various combined minds coming to handle the problems and stimulate the social activity of mutual understanding [2]. MCL contributes a larger pedagogical agenda. This approach provides many possibilities, such as increasing literacy rate, motivating the people to improve their education, capturing the market money and providing the opportunities to group of persons, working in same or different organizations [14]. Implementation of novel framework will enable educational institutions and other organizations to deploy at minimal cost. It replaces the class based learning and benefits the persons who are far from the educational institutions and other organizations. Students do not need to attend a class for just listening lectures and attending video conferences. They can get the contents on mobile by just registering with server of educational institution or related organizations. The novel framework will motivate different walks of people to establish social networks, community group, law enforcement agencies, and small groups of security force to ensure the safety of the sensitive places of city. It will also combine various branches of defense department for coordinating with troops for special causes. It helps the military for launching new projects and sharing strategic based information about any secret plans. Novel framework can substantiate health department for introducing new health projects to share the information and create awareness. RELATED WORK Xiaoyong Su et al. [5] propose the four layer framework for multimedia content generation and prototype for multimedia mobile collaborative system. The proposed framework gives an idea how to handle user, device and session management. They also suggested that mobile collaborative environment could be possible by upgrading the devices and network technologies. Lahner F and Nosekabel H [8] have implemented the program in University of Regensburg, Germany, which supports e-learning contents to be displayed on mobiles. The structure of system provides the facility to users to get same contents via mobiles. Authors claim [15] that their proposed approach will improve academic and learning activities. The prototype is based on user profile which stores information regarding learning process. The second is location system, which is used to identify the physical location of user supported by generic architecture. Third is personnel Assistant (PA), which resides in the mobile. Fourth is learning object repository, which stores the contents, related with the process of pedagogical teaching and fifth is message sending system to be controlled automatically or using administrative interface by operators. Final component is Tutor that searches learning opportunities. The proposed prototype provides idea of online learning. Stanford University [7] launched pilot project the Dunia Moja project"one world" in Swahili by July 5, 2007 with three partner universities in South Africa, Tanzania and Uganda. The purpose of this project was to introduce web-based courses and materials. Sony Ericson and Ericsson provided free mobile smart phones. 
The focus of the project was to use the mobiles to access the course-web site and send text messages. Allision Druin et al. [10] have discussed the prototype for their ongoing participatory design project with intergenerational design group to create mobile application and integrate into iP Phone and iPod touch platforms. Authors claim that designed application can provide the opportunities to bring the children and grand parents together by reading and editing the books. Ericsson Education Dublin [9] initiated a project with €4 million and focused on e-learning to m-learning. The main objectives of project were to produce a series of courses for mobiles, PDAs and smart phones. All of these studies in MCL environment show that neither any contribution particularly worked on architecture based design and nor on functional systems to support mobile application efficiently. All of the previous survey mostly developed the courses for mobiles and few contributions touched the technology but not implemented. Novel framework supports to server and client side systems with promising features to meet the pedagogical requirements. Novel framework is handy in introducing rich MCL client-server based management systems, virtual learning environments and conference systems within educational institutions and outside of the institutions. The challenging issues motivating this research are: to what degree novel framework is effective and result oriented? Can it handle all important features necessary for effective MCL? Can this novel framework support to mobile collaborative applications and other software threats sufficient to meet pedagogical needs and other collaborative requirements? How the findings can be used to engineer the requirements of Students, teachers and other administrative staff of UB and other institutions? What are socio-economic impacts of the novel framework on USA and beyond the world whether claimed objectives have been achieved or not if yes then to what extend and how? The goal of this study is to obtain the learning materials in order to meet pedagogical needs and other collaborative requirements of educational institutions including different organizations. NOVEL FRAMEWORK FOR MCL To make successful collaboration; we need organized architecture with support of latest technologies to meet our expectations. Various conceptual collaborative architectures have been proposed so for. Xiaoyong Su et al. [5] proposed four layered components for collaborative framework, which consists of content generation layer, communication layer, content regeneration layer and content visualization layer. Each layer has been assigned different responsibility. The architecture of [5] has been optimized and extended with inclusion of new sub components. Figure:1 explains the novel framework for mobile collaborative learning environment. BASE LAYER The content generation layer is main component of collaborative framework. The Figure 2 shows the working process of this layer. If client requires any contents, sends the request message to content server for delivery of required contents. The request message includes device profile, status of previous network condition and required URL. The mobile information device profile (MIDF) is supported with Java platform. J2ME is standard model for mobile technology. The base of J2ME is on three different layers, which are profiles, Java virtual machines and configurations. MIDP is based on Java runtime environment supported with profiles and configuration layers. 
These both layers sit on operating system of client's mobile and personnel digital assistants (PDAs). The goal of profile is to represent the application programming interface (API) for same type of mobile devices with similar features. From other side, content generation layer has also support of different programming language such as C++, Java, HTML, XML, Flash, XHTML, and Structured Query Language (SQL). When content server gets request from client, it searches the requested contents into enterprise data warehouse; if requested content is found then forwarded to message server for further process. COORDINATION LAYER This layer works as transport layer. When message server gets requested content from base layer; first checks network connection. If network connection is enabled, then checks the size of requested content. If requested content contains small amount of data, forwards the content to content integrating manager (CIM) to next layer. If size of requested content is large, then sends the content to message forwarding manager (MFM) for fragmentation of the message. MFM fragments the message into chunks (small amount of data) and transports to (CIM). MFM continuously keeps on transporting the chunks until last chunk means whole fragmented message is delivered to (CIM). In case, if network is disabled then message server (MS) has also option to send the messages (content) to message manager (MM). When MM gets message from MS, it stores the messages into the buffer message. When network connection retains, MM forwards the stored messages to MS and figure 3 shows working process of coordination layer. MODIFICATION LAYER This layer performs two types of tasks; first, it forwards the content to application layer for next process. Second, if client sends modification request for content in middle of the process, then also starts to work on modified request. The process of this layer starts from CIM. The CIM has dual functionalities; it integrates fragmented chunks received from MFM of previous layer. The CIM forwards integrated chunks in form of messages (content) to CFM. CIM also receives content from MIS that comprises of small amount of data. When CFM gets content from CIM, it forwards to media manager (MM), in case no request for content modification is received from Client. If content modification request is received, CFM sends the content to content modifier manager (CMF) with requested modification. The CMF sends content to content deciding manager (CDM) in both conditions. If CMF modifies the content, then sends modified content to content deciding manager (CDM). If modified content meets request of client, CDM forwards requested modified content to CFM for further process. Otherwise it sends back to CFM to same layer. CDM also deals with same process with original content and gets back to CFM. The CFM fragments the contents in chunks and forwards to CIM of previous layer shown in figure 4. APPLICATION LAYER The function of this layer is to display and hand over the contents to client. Application layer obtains contents from media manager and displays the contents in form of data, graph, image and voice as per request of client. Actually media manager is a tool, which organizes files to be shared between mobile devices. The main function of layer is to translate the source program into object program that is done with support of parse engine. The used engine is faster and robust to handle the data errors. 
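As a concrete illustration of the coordination layer's fragmentation step described above, the following Python sketch shows one way an MFM-like component could split a large payload into chunks and a CIM-like component could reassemble them; the function names, chunk size and message format are illustrative assumptions, not part of the framework specification.

```python
# Minimal sketch of MFM-style fragmentation and CIM-style reassembly.
CHUNK_SIZE = 4096  # assumed maximum payload per transfer, in bytes

def mfm_fragment(content: bytes, chunk_size: int = CHUNK_SIZE):
    """Yield (index, total, chunk) tuples until the whole message is delivered."""
    total = (len(content) + chunk_size - 1) // chunk_size
    for i in range(total):
        yield i, total, content[i * chunk_size:(i + 1) * chunk_size]

def cim_integrate(chunks):
    """Reassemble chunks (possibly received out of order) into the original content."""
    ordered = sorted(chunks, key=lambda c: c[0])
    assert ordered and ordered[0][1] == len(ordered), "missing chunks"
    return b"".join(chunk for _, _, chunk in ordered)

payload = b"x" * 10_000                       # stand-in for a large requested content
reassembled = cim_integrate(list(mfm_fragment(payload)))
assert reassembled == payload
```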
Parsing includes set of components, which are scanner component, parser component and document type definition (DTD) component. Scanner component is first major component of parsing engine, which provides push-based API. An API gets characters from input stream (mostly URL) and obtains proper sequences then remove useless data by using methods. Parser component coordinates and handles activities of other components of system. DDT explains the tags, associated set of attributes and hierarchical rules for well defined and valid documents in form of target grammar. Parse engine requires five phases to transform multimedia contents to document object model (DOM). Before delivering document to requested client, each document passes from object construction phase, opening input stream phase, tokenization phase, token iteration phase and object destruction phase. In object construction phase, an appropriate application starts process by creating URL, tokenizer objects and language tags. Since the parser is provided sink and DTD. The function of DTD is to understand the grammar of documents. Sink provides interface for DTD to make content and model properly. Opening an input scream phase gets URL in shape of network input stream. The parse engine provides input stream to Scanner, which controls whole process. This process continues till End-OF-File (EOF) occurs. Token iteration phase validates documents and builds content model. Finally object destruction phase destroys the objects that are in parse engine after tokenization and iteration process. Application layer parses document with support of parse engine to display an object. Simple process of parse engine is shown in figure 5. The following limitations of parse engine has some resolved. Netlib (Input Scream and URL); The XP_COM systems: nsString; nsCore.h and prototypes.h To validate the novel framework; application is run on it that is developed with combination of SDK and android for delivery of text, images, graphs and voice media contents in form of the message. The function of message service is to mange and delivers the message. The message server has two parts; message agent and message server. Profile management is sub component, which falls within content generation, delivery and coordination components. This handles profile of mobile users. A mobile user provides the information regarding the hardware, resource, operating system and connection status. The system architecture of collaboration consists of four logical parts in [3]. The four parts includes with infrastructure, service function, advance service function and application. These logical parts provide different functionalities; infrastructure includes operating systems, communication modules; service function consisting of floor management, media management and session management. Advance service function provides creation and deletion of shared and video window. CONCLUSION We conclude that novel framework has made significant progress in design and development of the MCL environment. First, the paper presents novel framework with integration of new components, such as Content integrating manager, content deciding manager, content modifying manager and content fragmenting manager. These components collectively protect loss of data and make the fast and robust process of delivery of contents to users. Second, we have implemented the first layer of the novel framework to obtain the learning materials on hand held devices particularly on mobile devices. 
With the implementation of the framework, students can get course contents anywhere and anytime. We have made significant modifications to the four-layer architecture previously explained in [5]. Our framework provides an efficient and faster way of delivering contents to a mobile node. If a modification request is made by the client side, it is handled smoothly in the third layer rather than going back to the first layer. These characteristics of the novel framework enable fast provision of content to the mobile node. In the future, we will test our new group application, with its rich features, on this novel framework.
251771470
s2orc/train
v2
2022-08-25T13:39:03.249Z
2022-08-25T00:00:00.000Z
Efficacy of pramipexole on quality of life in patients with Parkinson’s disease: a systematic review and meta-analysis Background Quality of life (QoL) in patients with Parkinson’s disease (PD) is increasingly used as an efficacy outcome in clinical studies of PD to evaluate the impact of treatment from the patient’s perspective. Studies demonstrating the treatment effect of pramipexole on QoL remain inconclusive. This study aims to evaluate the effect of pramipexole on QoL in patients with PD by conducting a systematic review and meta-analysis of existing clinical trials. Methods A systematic literature search of PubMed, Embase and the Cochrane Library was performed from inception to 30 April 2022 to identify randomised, placebo-controlled trials of patients with idiopathic PD receiving pramipexole, who reported a change from baseline in their QoL as measured by the 39-item Parkinson’s Disease Questionnaire (PDQ-39). Risk of bias was independently assessed by two reviewers using the Cochrane Collaboration’s tool for bias assessment. Results Of 80 eligible articles screened, six trials consisting of at least 2000 patients with early or advanced PD were included. From the synthesis of all six selected trials, a significant mean change from baseline in the PDQ-39 total score of –2.49 (95% CI, –3.43 to –1.54; p < 0.0001) was observed with pramipexole compared with placebo. A trend toward improvement in QoL was consistently observed among patients who received optimal doses of pramipexole (≥ 80% of the study population on 1.5 mg dosage), regardless of disease severity (advanced versus early) or baseline QoL levels. Conclusion This meta-analysis provides evidence for the potential treatment benefit of pramipexole in improving QoL in patients with PD. Supplementary Information The online version contains supplementary material available at 10.1186/s12883-022-02830-y. Background Parkinson's disease (PD) is a chronic, progressive disease that has no curative therapy and it is the second most common neurodegenerative disease worldwide [1]. The disease is typically characterised by motor symptoms, including difficulty with movement, slowness, freezing, dyskinesia and fluctuations, as well as the increasing recognition of non-motor symptoms [2], all of which play an important role in patients' quality of life (QoL) [3,4]. As a multidimensional concept, QoL broadly reflects the impact of the disease and treatment from the patient's perspective through self-reported measures and is increasingly recognised as an important outcome in the management of PD [5][6][7]. Pharmacological treatments such as dopamine agonists have been shown to improve motor symptoms of PD and also improvements in patients' QoL [8,9]. Pramipexole is a non-ergolinic, D3-preferring dopamine agonist that is generally well tolerated and efficacious in treating motor symptoms in both early and advanced PD, available as both immediate or extended release formulations with similar clinical efficacy and safety profiles [10,11]. Pramipexole, as initial monotherapy or adjunctive therapy to levodopa, is efficacious in PD in preventing or delaying motor fluctuations or dyskinesia associated with long-term use of levodopa [8]. Prior literature reviews [5,12] and a systematic review [9] have attempted to investigate the effect of dopamine agonists (including pramipexole) on QoL in patients with PD. 
These findings remain inconclusive due to the high variability in study designs and heterogeneous outcome measures of QoL applied in the selected trials [9]. Of the many generic measures of QoL, few are PD-specific. The 39-item Parkinson's Disease Questionnaire (PDQ-39) is the most widely used PD-specific health-related QoL questionnaire [13] and has demonstrated good reliability, validity, responsiveness and reproducibility [14,15]. The present study aimed to investigate whether pramipexole is more efficacious than placebo in improving QoL in patients with PD, as measured by PDQ-39, through an updated systematic review and meta-analysis of existing trials. Additionally, we explored whether these benefits are consistent in patients with different PD characteristics. Eligibility criteria For this meta-analysis, we included studies that met all of the following criteria: (i) randomised, placebo-controlled trial; (ii) comprised patients who were diagnosed with idiopathic PD in any stage, regardless of age, sex, location and race; (iii) comprised patients who were receiving pramipexole, alone or in combination with other anti-Parkinsonian treatments as an intervention; (iv) included the assessment of change in QoL from baseline as measured by PDQ-39. Studies were excluded if they were not published in English or consisted of a specific clinical population, such as those with existing comorbidities. Literature search A computerised systematic search for all eligible studies from inception up to 30 April 2022 was conducted using PubMed, Embase and the Cochrane Library. The search strategy was adapted according to database and included combinations of the following terms: (Parkinson OR Parkinson's disease) and (pramipexole OR sifrol OR Mirapex) and (randomized controlled trial OR rct OR placebo OR drug therapy OR random* OR trial OR controlled OR group OR trials OR groups). The review format adhered to the updated Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [16]. Study selection and data extraction Two investigators (ZZ and SZ) independently retrieved data and assessed study eligibility according to titles and/ or abstracts. Relevant data (general information, study design, patient characteristics, treatment details, mean differences in PDQ-39 total score from baseline to last available follow-up) were extracted (primarily by ML) into an Excel database and independently examined by a second reviewer (ZZ). The estimate was the mean difference between the pramipexole versus placebo group, and a corresponding standard error (SE) was extracted effective at treating Parkinson's disease symptoms. However, studies examining its effect on quality of life are inconclusive. This meta-analysis of existing clinical trials therefore aimed to evaluate the effect of pramipexole on quality of life in patients with Parkinson's disease. Six trials consisting of at least 2000 patients with early or advanced Parkinson's disease receiving treatment with pramipexole were included in this meta-analysis. Analysis of these six trials found a significant improvement in PDQ-39 total score with pramipexole compared with placebo. This meta-analysis provides new evidence for the potential treatment benefit of pramipexole in improving quality of life in patients with Parkinson's disease. Keywords: Parkinson's disease, Pramipexole, Meta-analysis, PDQ-39 or derived from confidence intervals (CIs) if the SE was not available from the source publication. 
For studies where mean differences were not available, the data were requested from the study authors, public sources or owners of the study database; these requests were documented. Risk of bias assessment Two reviewers (ZZ and ML) independently evaluated the selected studies using the Cochrane Collaboration's tool for assessing risk of bias (RoB 2 tool) in randomised trials [17,18]. The quality of each study was evaluated for methodological bias in selection, performance, detection, attrition and reporting, as well as for other potential sources of bias. Inter-rater disagreement was minimal and resolved through discussion and re-examination of each particular study in question with the input of a third reviewer (ZL). Statistical analysis To examine differences between treatments, we compared the mean change in the PDQ-39 total score between treatment groups from baseline to last available follow-up. The means were pooled where multiple pramipexole dose groups or formulations (immediate/extended release) were tested in a single study. The common effect was calculated using the inverse variance method, which derives a weighted average of treatment effects estimated from individual studies. The weights were chosen to be the reciprocal of the variance of the effect estimate from a single study, reflecting the amount of information from individual studies [17]. The Q and I 2 statistics were used to assess heterogeneity for the variation in true-effect sizes across the included studies; a p-value below 0.10 for the Q statistic and/or I 2 index above 50% indicated significant heterogeneity. If heterogeneity was detected, a random-effects model was applied as a supplementary analysis to compare the pooled means; otherwise, a fixed-effects model was applied, which used the inverse variance method to estimate the pooled mean difference between the pramipexole and placebo groups for all eligible trials, along with 95% CIs and two-sided p-values. Because of the few studies (< 10) identified within the scope of this analysis, funnel plot assessment for potential publication bias was not undertaken as the statistical power would be too low to determine asymmetry [19]. For continuous data, pooled means, 95% CIs and twosided p-values were calculated using a fixed-effects model to estimate mean differences between the pramipexole and placebo treatment groups for all eligible trials. Sensitivity analyses explored whether the outcome of interest was influenced by studies with risk of bias or non-normal distribution. Additional exploratory subgroup meta-analyses were conducted after grouping selected studies according to: (i) pramipexole treatment dose: studies with ≥ 80% of patients receiving the recommended daily dose of ≥ 1.5 mg were categorised as "optimal dose" groups [20,21] and those with a daily dose < 1.5 mg were considered "low dose" groups; (ii) baseline disease stage: advanced or early PD, according to the disease severity of the respective original studies; (iii) baseline QoL level: the studies were grouped into high (≥ 25) or low (< 25) QoL groups according to the baseline total PDQ-39 mean scores. All analyses were performed using SAS version 9.4 software package (Statistical Analysis System, Cary, NC). Study selection and characteristics A total of 5227 potential studies were identified following systematic literature searches of the three databases, which were filtered for duplicates (n = 696) and screened for eligibility (Fig. 1A). 
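For readers who want to reproduce the pooling step described above, the following Python sketch implements the standard inverse-variance fixed-effects estimate together with the Q and I² heterogeneity statistics; the per-study mean differences and standard errors in the example are hypothetical placeholders, not the values extracted from the six trials.

```python
# Minimal fixed-effects, inverse-variance meta-analysis (weights = 1/SE^2),
# with the Q statistic and I^2 heterogeneity index.
import math

def fixed_effects_meta(mean_diffs, ses):
    weights = [1.0 / se**2 for se in ses]
    pooled = sum(w * d for w, d in zip(weights, mean_diffs)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    q = sum(w * (d - pooled)**2 for w, d in zip(weights, mean_diffs))
    df = len(mean_diffs) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, ci, q, i2

# Hypothetical per-study mean differences (pramipexole - placebo) and SEs:
diffs = [-2.1, -3.0, -1.8, -2.7, -2.4, -2.6]
ses = [0.9, 1.2, 1.0, 1.4, 0.7, 0.8]
print(fixed_effects_meta(diffs, ses))
```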
Full-text versions of the remaining 80 articles were further assessed, from which six studies were deemed eligible and included in the final meta-analysis [22][23][24][25][26][27]. The main characteristics of the six selected studies are summarised in Table 1. All the included studies were randomised, placebo-controlled, multicentred and parallel-group, and consisted of at least 2000 patients. Among these, four studies included patients with early PD and two involved those with advanced PD. Total treatment duration for each study ranged from 12 weeks to 9 months. Patients who were assigned to pramipexole received a dosage of 0.375-4.5 mg daily. The mean change in PDQ-39 between pramipexole and placebo was not available for one study [27] and two studies did not publish baseline PDQ total scores [22,25]. Accordingly, the authors/study sponsors were contacted for the relevant PDQ scores although the final treatment differences in PDQ-39 between pramipexole and placebo for this analysis were derived from existing available treatment difference data that were published. Figure 1B displays the risk of bias assessment for all six selected studies, which includes assessment of selection, performance, detection, attrition and reporting, as well as other potential sources of bias. The general methodological quality of the six studies was rated as acceptable, with adequate sequence generation, allocation concealment and double-blinded design. Only one study [25] was considered to carry a potential risk for selective reporting since missing data were not adequately addressed. Meta-analysis of the effect of pramipexole on QoL In all six selected trials, PDQ-39 was reported as a secondary or other outcome; none reported PDQ-39 as a primary outcome. The mean change in PDQ-39 total score from baseline to last available value was extracted from all of the included studies, except Schapira et al. [27], which only published the median change in PDQ-39 total score (Table 2). Sensitivity analyses excluded studies that may have included selective reporting bias [24] and the median values were reported instead of the mean [27]. Excluding these studies did not alter the main findings on the effect of pramipexole. Exploratory subgroup analyses on QoL Further subgroup analyses were conducted to explore the effects of pramipexole on QoL according to treatment dose, baseline disease stage and baseline PDQ-39 total score. Discussion This is the first comprehensive systematic review and meta-analysis to assess the effect of pramipexole on QoL in patients with PD. Our analysis pooled PDQ-39 total scores from six eligible pramipexole randomised Table 1 Main characteristics of the included studies in the meta-analysis a Total N is based on the number of available PDQ-39 data at the timepoint of interest according to the original publication b Calculated from total mean age using the individual ages provided for separate treatment groups c In this randomised delayed-start trial, patients were randomly assigned to receive PPX or placebo for 9 months. After 9 months, the placebo group received PPX and followed-up until 15 months. 
For this analysis, only the first 9-month data were included, which compared PPX to placebo [27] d PDQ-39 data were only available at 18 weeks; the study follow-up periods include both 18 weeks and 33 weeks [23] e The original publication included two randomised, controlled, double-blind studies with different follow-up periods ( controlled trials (RCTs) in patients with PD and demonstrated a greater mean change from baseline with pramipexole. In addition, a trend toward improvement in QoL with pramipexole was observed in patients who received optimal dosing (1.5 mg), regardless of their disease severity (advanced versus early) or baseline QoL levels. Dopamine agonists can be administered either as monotherapy or adjunctive therapy for managing PD symptoms and also for reducing the risk of levodopa-related motor fluctuations and dyskinesia [8]. An earlier review that investigated the effect of pharmacotherapy for PD on QoL included three double-blind controlled studies of pramipexole that reported inconsistent effects of pramipexole versus the control on QoL. This was possibly due to the different types of control (levodopa or placebo) or QoL measures utilised in the reviewed studies (e.g., specific: PDQ- 39 [12]. Nonetheless, a greater improvement in QoL with pramipexole (monotherapy or add-on) had been observed in early PD [12]. This was further investigated in a systematic review that included eight pramipexole clinical trials with QoL as a study outcome using a variety of QoL measures [9]. This systematic review provided some evidence that pramipexole can improve patients' QoL when administered as an adjunct therapy; however, the magnitude of change could not be calculated based on the available data [9]. A subsequent narrative review evaluated the effect of a wider range of pharmacological treatments for PD on QoL, measured by PDQ-39 or PDQ-8 questionnaires [5]. The efficacy of pramipexole on QoL was demonstrated in two Level I studies of pramipexole vs other dopamine agonist or monoamine oxidase-B inhibitors and placebo; however, differences remained for other Level III studies [5]. Recent randomised studies have further suggested a beneficial effect on QoL with pramipexole, including a randomised trial comparing pramipexole/rasagiline combination therapy with different pramipexole doses [28] and a controlled trial with pramipexole as an adjunct therapy in PD inpatients; however, in the latter study, details of the randomisation, such as treatment dose and length of hospitalisation were unclear [29]. Since these previous reviews, our current search now consists of high quality eligible RCTs to investigate and quantify the effect of pramipexole alone versus placebo in patients with stable PD [22][23][24][25][26][27]. A comprehensive review by the Movement Disorder Society Task Force assessed a variety of PD-specific scales for QoL and found that PDQ-39 was the most widely used measure in PD research, with sufficient psychometric properties and validated in different populations [30]. The PDQ-39 correlates with motor disability and is impacted negatively by the total number of nonmotor symptoms [31], and it has also been used to track longitudinal change in QoL in PD [32]. PDQ-39 was also used in the previous systematic review [9], and in consideration of its established and validated utility as a PD-specific QoL assessment tool, PDQ-39 was selected for this review. 
Briefly, scores for each dimension in the PDQ-39 are expressed as a summed item score ranging from 0 to 100, with a lower overall score reflecting better QoL [33]. Given the different baseline PDQ-39 scores for each study, this meta-analysis was performed on the change from baseline scores. The authors of the PDQ-39 also determined minimal clinically important difference (MCID) thresholds for the PDQ-39 total score and individual dimension scores based on responses to a survey of 728 patients with PD [34]. According to this study, a minimum of 1.6 points was considered to be the MCID threshold to indicate a clinically meaningful change in the PDQ-39 total score [34]. MCIDs are developed by benchmarking measurements or scores against subjective assessments reported by patients, and have been found to carry less bias than calculated effect sizes [35]. Using PDQ-39 as single disease-specific measure for QoL, our current analysis shows that pramipexole improves QoL in patients with PD compared with placebo, with a significant mean difference from baseline of -2.49 (95% CI, -3.43 to -1.54), which includes the MCID of -1.6. Many factors related to QoL have been identified in patients with PD [36,37]. We wanted to explore whether the beneficial effects of pramipexole on QoL remain in various subgroups of patients, such as patients at early/ advanced disease stage or with higher/lower baseline QoL, as well as any effects of pramipexole dosing. The current analysis included four studies of patients with early PD and two studies of patients with advanced PD as defined at study inclusion. Our analysis revealed that treatment with pramipexole significantly improved QoL in patients with PD regardless of disease stage. Baseline QoL status has been shown to be associated with the level of decline in QoL over time in previous PD research [38,39]. The current analysis demonstrated that QoL improved with pramipexole regardless of whether patients had better or worse QoL status at baseline. The labelling of pramipexole in the USA, UK and China [10,11,20] indicates that the treatment dose should be gradually titrated to achieve an optimal dose of 1.5-4.5 mg per day based on efficacy and tolerability. Our analysis found significant improvement in QoL among those who received the optimal maintenance dose of pramipexole (≥ 1.5 mg). Pramipexole 1.5 mg was determined as a critical dose since patients receiving ≥ 1.5 mg per day showed greater improvements in motor function and fewer adverse events than patients who received < 1.5 mg per day [21]. However, a daily dose lower than 1.5 mg has been reported in a previous clinical trial in China [40] and this remains a concern. The critical dose of 1.5 mg should be recognised and applied in clinical practice to ensure that the best treatment outcomes can be achieved for patients. Overall, our findings suggest that the benefits of pramipexole in patients with PD were generally consistent across groups with different baseline factors. Despite the predominantly motor manifestations of PD, non-motor symptoms have been documented across the spectrum and disease stage of PD [36,41,42], including depression [36,43] and sleep disturbance [39,44]; these are identified as significant predictors of QoL scores. Studies have demonstrated a greater longitudinal influence of non-motor symptoms on QoL scores compared with motor symptoms [41,42]. Pramipexole is established as a safe and effective treatment for PD across all disease stages [22][23][24]45]. 
Among the dopamine agonists, pramipexole has a unique antidepressant effect and has demonstrated improvements in sleep disturbance independent of motor symptom improvement in PD [46][47][48]. Our analysis found significant improvements with pramipexole in both early and advanced PD subgroups. This suggests that pramipexole provides clinically meaningful improvements in QoL in patients with PD, which is important for long-term care and subjective overall well-being, in addition to disease symptom control, for this chronic incurable disease. Further investigations are needed to understand the effect of pramipexole on QoL through different motor and non-motor factors, as well as the change in contributing factors over the course of the disease. This systematic review and meta-analysis applied rigorous and reproducible methods, and can be generalised to a broader profile of patients with PD. Nonetheless, a number of caveats should be considered when interpreting these results. The current results are based on six eligible trials, which may limit the overall power, although the total number of patients included was more than 2000. A fixed-effects model was applied to derive a weighted average of treatment effects estimated from individual studies, since heterogeneity was not detected for the included studies [17]. Due to the PDQ-39 mean difference from the 2013 study by Schapira et al. having the smallest variability (smaller SE and narrower CI) compared with the other studies included, the model assigned the greatest weight to the mean difference in this study for the metaanalysis calculation [27]. PDQ-39 as a self-reported measure of QoL outcome could have been affected by patients' subjectivity [3] and does not capture environmental changes (e.g., clinical picture of disease, social status, social and living conditions, the number and intensity of social contacts) [5] that may have been experienced by a patient during the study. The PDQ-39 data for this analysis were not primary outcomes and were pooled from various RCTs with different study durations and designs and thus may have been influenced by other factors. For example, in the 2013 study by Schapira et al., patients in the delayed treatment arm (placebo) received the study drug (pramipexole) after 9 months; therefore, the treatment assignment is no longer true [27]. Additionally, in an earlier study by Schapira et al. [23], the PDQ data were not available at 33 weeks. We therefore performed a sensitivity analysis (Fig. 3) using 18 weeks as a cut-off (a standard pramipexole treatment follow-up duration [10]) and found a consistent effect on QoL with pramipexole. Finally, only studies published in English were considered and included in this analysis. Conclusion This meta-analysis of pooled data from six pramipexole RCTs evaluating the change in PDQ-39 scores from baseline between pramipexole and placebo in patients with PD provides new evidence for a QoL benefit with pramipexole in PD. These results could be confirmed in larger pramipexole RCTs that investigate QoL as a major efficacy outcome of interest.
251105250
s2orc/train
v2
2022-07-28T01:15:45.745Z
2022-07-27T00:00:00.000Z
One-Trimap Video Matting Recent studies made great progress in video matting by extending the success of trimap-based image matting to the video domain. In this paper, we push this task toward a more practical setting and propose One-Trimap Video Matting network (OTVM) that performs video matting robustly using only one user-annotated trimap. A key of OTVM is the joint modeling of trimap propagation and alpha prediction. Starting from baseline trimap propagation and alpha prediction networks, our OTVM combines the two networks with an alpha-trimap refinement module to facilitate information flow. We also present an end-to-end training strategy to take full advantage of the joint model. Our joint modeling greatly improves the temporal stability of trimap propagation compared to the previous decoupled methods. We evaluate our model on two latest video matting benchmarks, Deep Video Matting and VideoMatting108, and outperform state-of-the-art by significant margins (MSE improvements of 56.4% and 56.7%, respectively). The source code and model are available online: https://github.com/Hongje/OTVM. Introduction Video matting is the task of predicting accurate alpha mattes from a video. This is an essential step in video editing applications requiring an accurate separation of the foreground and the background layers such as video composition. For each video frame I, it aims to divide the input color into three components: the foreground color, the background color, and the alpha matte. Formally, for a given pixel, it can be written as, I = αF + (1 − α)B, where F and B are the foreground and background color, and α ∈ [0, 1] represents the alpha value. Here, only 3 values (I) are known, and the remaining 7 values (F , B, and α) are unknown. Given the ill-posed nature of the problem, traditional methods utilize trimaps as additional inputs that indicate pixels that are either solid foreground, solid background, or uncertain. The trimap provides a clue for the target object and effectively improves the stability of the alpha prediction. Leveraging the latest progress in trimap-based image matting [ learning-based video matting techniques. They decouple video matting into two stages, trimap propagation and alpha prediction. They re-purpose the latest mask propagation network [39] to propagate the given trimaps throughout the video, then design alpha prediction networks that take multiple trimaps as input and predict the alpha matte ( Fig. 1(a)). While the decoupled approach effectively simplifies the task, it has a critical limitation as illustrated in Fig. 2. By the nature of the trimap, the unknown region of one frame may be changed into foreground or background and vice versa at a different frame. Therefore, if we propagate a trimap based on visual correspondences without the knowledge for alpha matte [39,5], it may produce inaccurate trimaps and the error can be easily accumulated as shown in Fig. 2(c), leading to the failure of alpha prediction. With this challenge, the existing decoupled methods require multiple user-annotated trimaps to prevent drifting at trimap propagation. In this paper, we aim to tackle video matting with a single trimap input. To cope with the challenging scenario, we propose One-Trimap Video Matting network (OTVM) that performs trimap propagation and alpha prediction as a joint task, as illustrated in Fig. 1(b). 
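The compositing equation underlying the matting problem can be made concrete with a few lines of numpy; the sketch below simply evaluates I = αF + (1 − α)B for given layers and is meant only to illustrate why, per pixel, three observed values must be decomposed into seven unknowns.

```python
# Minimal numpy sketch of the matting compositing equation I = alpha*F + (1-alpha)*B.
import numpy as np

def composite(alpha: np.ndarray, fg: np.ndarray, bg: np.ndarray) -> np.ndarray:
    """alpha: HxW in [0, 1]; fg, bg: HxWx3 colour layers."""
    a = alpha[..., None]              # broadcast alpha over the colour channels
    return a * fg + (1.0 - a) * bg

H, W = 4, 4
alpha = np.random.rand(H, W)
fg = np.random.rand(H, W, 3)
bg = np.random.rand(H, W, 3)
img = composite(alpha, fg, bg)        # the only quantity actually observed
```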
Starting from baseline trimap propagation and alpha prediction modules, we cascade the two modules to alternate trimap propagation and alpha prediction auto-regressively at each frame. We employ the The intrinsic challenge of trimap propagation. As shown in (b), the same part of an instance can have varying trimap labels at different times, e.g., the label of the left eyelid has changed from the unknown to the foreground. If trimap propagation is conducted using visual correspondences without the knowledge for alpha matte, then it may produce inaccurate trimaps and the error can be easily accumulated as in (c). space-time memory (STM) network [39] and the FBA matting network [14] as the baseline modules, respectively. To facilitate information flow within the pipeline, we add a refinement module and re-engineer STM accordingly. In addition, we present an end-to-end training pipeline to make OTVM learn the joint task successfully. The major advantage of OTVM is robust trimap propagation that is critical for the practical video matting scenario. Since an alpha matte contains richer information than a trimap, we are able to update the trimap after alpha prediction and this update step prevents error accumulation in the trimap, resulting in robust trimap propagation and accurate alpha prediction. OTVM produces accurate video mattes even in the challenging one-trimap scenario. We demonstrate that OTVM outperforms previous matting methods with large margins on two latest video matting benchmarks: 56.4% improvement on Deep Video Matting (DVM) [52] and 56.7% improvement on VideoMatting108 [65] in terms of MSE. We also conduct extensive analysis experiments and show that the proposed joint modeling and learning scheme are crucial for achieving robust and accurate video matting results. Related Work Image Matting. The image matting task was introduced in [42]. Unlike the image segmentation task that predicts a binary alpha value, the image matting problem aims to predict a high-precision alpha value in a continuous range. Therefore, the matting problem is harder to solve, and most existing works are addressed under some conditions. The most common condition is assuming a human-annotated trimap is given. The trimap is annotated into three different regions: definitely foreground region, definitely background region, and unknown region. The trimap serves to not only reduce the difficulty of the matting problem but also allows some user control over the results. Some approaches try to find good alternatives to human-annotated trimap. Portrait matting [49,67,24] can extract an alpha matte without any external input, but they are only applicable for human subjects. Background matting [45,31] proposes to take complete background information instead of trimap input and predicts high-resolution alpha matte. However, this method is hard to extend to general video matting because it can work only with a near-static background. Mask guided matting [64] proposes to replace the trimap with a coarse binary mask that is more accessible. All image matting methods mentioned above can be extended to video matting by applying frame-by-frame, but the constraint must be met for each frame (e.g., trimap for every frame). Video Matting. Early video matting methods largely extended traditional image matting methods by either extending the propagation temporally [1,8,29,47] or by sampling in other frames [53,47]. 
While there was some work that would generate trimaps automatically by deriving them from segmentations [16,55] or using interpolation [9] or propagation [53], these were computed independently from the video matting method. Bai et al . [2] propagates trimaps based on predicted alpha mattes. Tang et al . [56] would use the alpha matte of one frame to help predict the trimap of the next frame, but did so by using the alpha matte to compute a binary segmentation from which a new trimap would be computed. Recently . The network predicts a trimap based on the information from the previous frames and predictions that are embedded by the trimap memory encoder. From the given or generated trimap, we initially predict the alpha matte using our alpha prediction network inspired by [14]. Then we refine the generated trimap and predicted alpha matte via two light-weighted residual blocks. The refined trimap, alpha matte, and hidden features (dimension = 16) are fed to the trimap memory encoder so that they can be used for the next frames as a new memory. The framework is trained in an end-to-end manner. We further illustrate the details of each module in the supplementary material. studies while continuing the success of learning-based video matting, we propose OTVM that jointly learns trimap-based alpha prediction and alpha-based trimap propagation. Our method shows that high-quality and temporally consistent alpha prediction in video is achievable using only a single trimap input. Method The overall architecture of OTVM is illustrated in Fig. 3. Our method aims to perform video matting robustly with only a single trimap. From the given user-provided trimap at the first frame, we sequentially predict a trimap and an alpha matte for every frame in a video sequence. Starting from the user-provided trimap, we first predict the initial alpha matte by feeding the RGB frame and trimap to our alpha prediction network. Then, a lightweight refinement module is followed to correct errors in alpha matte and trimap, resulting in a refined alpha matte and trimap. The refinement module also produces a hidden latent feature map, and all the outputs from the refinement module are encoded as memory by the trimap memory encoder for trimap propagation. On the next frame, our trimap propagation module predicts the trimap by reading relevant information from the memory. This procedure -alpha prediction, refinement, and trimap propagation -is repeated until the end of the video sequence. Note that recent works [65,52] also generate trimaps and predict alpha mattes, but our approach is completely different from those. In the existing works, the trimap propagation and alpha matting are totally decoupled. They tried to propagate trimap naively (i.e. without consideration of the challenge introduced in Fig. 2), resulting in inaccurate trimap propagation. Therefore, multiple groundtruth (GT) trimaps should be provided to achieve good results. In contrast, our OTVM can extract accurate alpha matte results even with a single humanannotated trimap, thanks to our joint modeling. Alpha Prediction with Trimap Given an RGB image and (either a propagated or user-provided) trimap, we predict an alpha matte. Here, we opt for the state-of-the-art image-based matting network, FBA [14], to simplify the problem. The alpha matting network is an encoder-decoder architecture. The alpha encoder first takes a concatenation of the RGB and trimap along the channel dimension as input. 
Then, the resulting pyramidal features of the alpha encoder are fed into the alpha decoder that produces an alpha matte. To exploit the advantage of the coupled network trained end-to-end, we directly use the soft trimap from the trimap propagation module without binarization when a propagated trimap is given. We can use any alpha matting network, however, we empirically observe that advanced alpha networks (e.g. video alpha networks) make our framework complex and hinder end-to-end training given limited training data for video matting, while the latest image-based alpha networks work surprisingly well as long as a reliable trimap and end-to-end training are provided. Therefore, we take the simple image-based alpha prediction model and focus on developing the joint framework that can reliably propagate the trimaps. Alpha-Trimap Refinement In our video matting setting, which takes only one GT trimap, naively propagating the trimap may result in severe drifting as depicted in Fig. 2. To take the advantage of our coupled framework, we have an additional refinement module following the alpha network to provide refined information to the trimap propagation module afterward. The refinement module is light-weighted as it is composed of two residual blocks. The refinement module takes all available information for the current frame: an input RGB frame, the generated trimap, the predicted alpha matte, and the alpha decoder's latent features. Then, the module produces an updated trimap and a refined alpha matte along with unconstrained hidden features. The hidden features are intended for information that cannot be expressed in the form of trimap and alpha. These features are learnable through end-to-end training. All the outputs of the refinement module will be used for trimap propagation. Trimap Propagation with Alpha To propagate trimaps, we repurpose a state-of-the-art video segmentation network, space-time memory network (STM) [39], with important modifications. In the original STM [39], input images and corresponding masks in the past frames are set to the memory, while the image at the current frame is set to the query. Then, the memory and query are embedded through two independent ResNet50 [20] encoders. The embedded memory and query features are fed to the space-time memory read module. In the module, dense matching is performed and then a value of memory is retrieved based on the matching similarity. The decoder takes the retrieved memory value and query feature and then outputs an object mask. This approach can effectively exploit rich features of the intermediate frames and achieve state-of-the-art performance in video binary segmentation. STM [39] simply can be extended from binary mask to trimap by increasing the input channel dimension of the memory encoder and the output channel dimension of the decoder. However, there is a fundamental limitation to apply STM directly for trimap propagation. In the binary mask, the foreground region and background region can be estimated by propagating from past binary masks. The trimap, however, cannot be estimated only with propagation because the unknown regions are frequently changed by the view of the foreground object (see Fig. 2) and trimap-only supervision does not provide a consistent clue for the changes. To address this problem, we additionally impose rich cues for generating the trimap into the memory encoder. Since the trimap has been determined by the alpha matte, it effectively helps to learn for trimap generation. 
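To make this concrete, a minimal sketch (ours; the function name and shapes are illustrative) of how the memory input could be assembled so that the memory encoder receives the alpha matte alongside the trimap:

```python
import torch

def build_memory_input(frame, trimap, alpha):
    # frame:  (B, 3, H, W) RGB frame
    # trimap: (B, 3, H, W) refined soft trimap
    # alpha:  (B, 1, H, W) refined alpha matte
    # Stacking the alpha matte next to the trimap gives the memory encoder the
    # cue that actually determines where the unknown band should move later.
    return torch.cat([frame, trimap, alpha], dim=1)   # (B, 7, H, W)
```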
We additionally impose a hidden feature extracted from the refinement module. With the hidden features, any errors can be easily propagated backward at training time, resulting in stable training. By imposing those into the memory encoder, we significantly reduce errors that occurred by drifting of the unknown regions. End-to-End Training To make OTVM work, it is critical to train the model end-to-end because each module depends on each other's outputs. However, video matting data are extremely difficult to annotate and existing video supervisions are not sufficient to train the model directly. As a practical solution, we train each module stage-wise then fine-tune the whole network in an end-to-end manner. First, we initialize the trimap propagation and the alpha matting modules with the pretrained weights of off-the-shelf STM [39] and FBA [14], respectively. Specifically, both pretrained models leverage ImageNet [44]. In addition, the STM is trained using image segmentation datasets [13,33,18, , depending on the target evaluation benchmark (Stage 4). Stage 1: Training the alpha matting module and trimap propagation module separately. As two modules depend on each other, if we train the alpha matting module and the trimap propagation module simultaneously from scratch, this can lead the model to either poor convergence or simply memorizing training data (i.e., overfitting). It is because both modules cannot learn meaningful features from almost randomly initialized input data which is as the output of other modules. Therefore, we first separately train two modules without the connections between two. Specifically, we train the alpha matting module with GT trimaps and train the trimap propagation module without taking inputs of an alpha matte and hidden features. Stage 2: Training the alpha matting and refinement modules with propagated trimaps. We train the alpha matting model and refinement modules together while the trimap propagation module is frozen. This stage enables the refinement module to take a soft and noisy trimap as input and learn to predict accurate trimap and alpha matte. Stage 3: Training the trimap propagation module. In the trimap propagation module, we activate all input layers for alpha matte and hidden features. Then, we train the trimap propagation module while the parameters for the remainders -alpha prediction and refinement -are frozen. In this stage, we leverage not only the loss from the predicted trimap but also the losses from the alpha prediction. This enables the trimap propagation module to predict a more reliable trimap for estimating the alpha matte. While we are not updating the alpha network and refinement module in this stage, we can leverage the gradients from their losses for updating the trimap propagation module. Stage 4: End-to-end training. Finally, we train the whole network end-to-end using a video matting dataset. With the stage-wise pretraining, we can effectively leverage both image and video data, and achieve stable performance improvement at the end-to-end training. Training Details Data preparation. During training, we randomly sampled three temporally ordered foreground and background frames from each video sequence. If an image dataset (e.g., AIM [61]) is used, we simulate three video frames from a pair of foreground and background images by applying three different random affine transforms into both foreground and background images. The random affine transforms include horizontal flipping, rotation, shearing, zooming, and translation. 
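A possible sketch of this video-simulation step (ours; the parameter ranges are illustrative guesses, and applying independent per-frame transforms to foreground and background is an assumption, not the paper's exact recipe), using torchvision for the affine warps:

```python
import random
import torchvision.transforms.functional as TF

def rand_affine(img):
    # Illustrative magnitudes; the paper does not specify the exact ranges.
    return TF.affine(
        img,
        angle=random.uniform(-10.0, 10.0),                            # rotation
        translate=[random.randint(-20, 20), random.randint(-20, 20)], # translation
        scale=random.uniform(0.9, 1.1),                               # zooming
        shear=random.uniform(-5.0, 5.0),                              # shearing
    )

def simulate_clip(fg, bg, num_frames=3):
    # fg, bg: (C, H, W) tensors; the ground-truth alpha should be warped with
    # the same parameters as its foreground (e.g. stacked as an extra channel).
    if random.random() < 0.5:                 # horizontal flip, fixed per clip
        fg, bg = TF.hflip(fg), TF.hflip(bg)
    fg_frames = [rand_affine(fg) for _ in range(num_frames)]
    bg_frames = [rand_affine(bg) for _ in range(num_frames)]
    return fg_frames, bg_frames
```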
For each foreground and background frame, we randomly crop patches into 320 × 320, 480 × 480, or 640 × 640, centered on pixels in the unknown regions. And then we resize the cropped patches into 320 × 320. Additionally, we employ several augmentation strategies on both foreground and background frames: histogram matching between foreground and background colors, motion blur, Gaussian noise, and JPEG compression. Then we composite foreground and background on-the-fly to generate an input frame. The GT trimaps are generated by dilating the GT alpha matte with a random kernel size from 1 × 1 to 26 × 26. Loss functions. We set objective functions for all outputs of the models, except for the hidden features. For both initially predicted and refined trimaps, we use the cross-entropy loss to compare with the GT. For the first frame where the GT trimap is provided as the input, we only apply the loss to the refined trimap. Ideally, there should be no change after refinement. We find penalizing any change after the refinement is helpful to prevent it from corrupting already accurate trimap. For the alpha predictions, we leverage the temporal coherence loss [52] and image matting losses used in FBA [14]. Different from some previous methods that only compute alpha losses on unknown regions (e.g. [61]), we compute our losses on every pixel. In addition to trimap and alpha losses, we also employ losses for the foreground and background color predictions. We estimate foreground and background colors from the alpha decoder and refinement module following [14]. We minimize all foreground and background losses used in [14], and additionally employ temporal coherence loss on both foreground and background. For the foreground color, we compute the losses only where an alpha value is greater than 0 because the exact foreground color is available only in those regions. More detailed explanations of loss functions are given in the supplementary material. Other training details. We opt for RAdam optimizer [34] with a learning rate of 1e-5. We drop the learning rate to 1e-6 once at 90% iteration for each training stage. We freeze all the batch normalization layers in the networks. We used a mini-batch size of 4 and trained with four NVIDIA GeForce 1080Ti GPUs. At the first pretraining stage, we trained the alpha matting model about 100,000 iterations and we trained the trimap propagation model about 400,000 iterations. We trained about 50,000 iterations at each of the second and third training stages. Finally, we trained about 80,000 iterations at the last end-to-end training stage. Inference Details We used full-resolution inputs to achieve high-quality alpha matte results. For the memory management in the trimap propagation module, we generally follow STM [39] that stores the first and the previous frames to the memory by default, and additionally saves new memory periodically. We add the intermediate frames to the memory for every 10 frames. To avoid GPU memory overflow, we store only the last three intermediate frames and discard old frames. Evaluation Datasets and Metrics We present experimental results and analysis on two latest benchmarks, Video-Matting108 [65] and DVM [52]. VideoMatting108 dataset [65] includes 28 foreground video sequences paired with background video sequences in the validation set. The evaluation is conducted with three different trimap settings: narrow, medium, and wide. 
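The dilation recipe used to derive trimaps from alpha mattes, both for training (random kernel size up to 26 x 26) and for the benchmark trimap settings described next, can be sketched as follows (our code; the 0.01/0.99 thresholds separating definite background and foreground are an assumption):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def alpha_to_trimap(alpha, kernel=25, lo=0.01, hi=0.99):
    """Discretize an alpha matte into a trimap and dilate the unknown band.

    alpha:  float array (H, W) with values in [0, 1]
    kernel: side of the square structuring element, e.g. a random size in
            [1, 26] during training, or 11 / 25 / 41 for the narrow / medium /
            wide evaluation settings.
    Returns an array with 0 = background, 1 = unknown, 2 = foreground.
    """
    fg = alpha >= hi
    bg = alpha <= lo
    unknown = ~(fg | bg)
    if kernel > 1:
        unknown = binary_dilation(unknown, structure=np.ones((kernel, kernel)))
    trimap = np.full(alpha.shape, 1, dtype=np.uint8)
    trimap[fg & ~unknown] = 2
    trimap[bg & ~unknown] = 0
    return trimap
```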
The groundtruth trimaps are generated by discretizing the groundtruth alpha mattes into the trimaps, followed by dilating the unknown regions with different kernel sizes; 11 × 11 for narrow, 25 × 25 for medium, and 41 × 41 for wide. This benchmark contains long video sequences and the average length of the videos is about 850 frames. Therefore, predicting alpha mattes with only a single trimap at the first frame is challenging in this dataset. Evaluation metrics. For a fair comparison, we follow the evaluation metrics from two large-scale video matting benchmarks [65,52]. To evaluate on VideoMat-ting108 [65], we compute SSDA (average sum of squared difference), MSE (mean squared error), MAD (mean absolute difference), dtSSD (mean squared difference of direct temporal gradients), and MESSDdt (mean squared difference between the warped temporal gradient) [12]. To evaluate on DVM [52], we compute SAD (sum of absolute difference), MSE, Grad (gradient error), Conn (connectivity error), dtSSD, and MESSDdt. For all computed metrics, lower is better. In the original works [65,52], the metrics are computed only on the unknown regions of the GT trimaps. We follow this rule for fair comparisons with the existing methods. However, using only the unknown region cannot capture the errors in the foreground and background regions that occurred by inaccurate trimap propagation, which is important for evaluating the performance of endto-end video matting methods. Therefore, we present the modified versions of the metrics that compute the scores on the full-frames, suffixed with "-V". The modified metrics are used for our analysis and ablation experiments. Analysis Experiments To validate our hypotheses experimentally, we conduct a set of analysis experiments. We use the one-trimap setting, where GT trimap is given only at the first frame. All these analysis experiments are conducted on the VideoMatting108 benchmark with the medium trimap setting. Effectiveness of the joint modeling. We first validate the importance of the joint modeling of trimap propagation and alpha prediction in video matting. For this purpose, we design a simple baseline model for video matting, STM+FBA, that cascades STM [39] for trimap propagation and FBA [14] for alpha prediction from the propagated trimap. Note that we do not use OTVM because our proposals (i.e., trimap and alpha refinement and using hidden features) are only applicable for the joint modeling. To show the effect of joint modeling, we train the baseline model with two different training strategies. One is obtained by training trimap propagation (STM) and alpha prediction (FBA) separately (i.e., decoupled), and the other is by training both modules jointly (i.e., joint). As shown in Table 1(a), the joint modeling greatly improves the alpha matte quality in the practical one-trimap scenario. Efficacy of the stage-wise training. We evaluate the importance of the proposed stage-wise training in OTVM and summarize the result in Table 1 When we activate all input layers for alpha matte and hidden features and end-to-end train OTVM from the pretraining stage (i.e., joint), the model marginally surpasses the STM+FBA (joint). If our stage-wise training is applied, OTVM significantly outperforms STM+FBA (joint). The results demonstrate the superiority of our stage-wise training. Efficacy of each training stage. In this experiment, we use OTVM and validate our training strategy. The result is summarized in Table 1(c). 
To learn the trimap propagation model with the hidden features of the refinement module, we applied the last training stage (i.e., Stage 4: end-to-end training) for all cases. Table 1. Analysis experiments on VideoMatting108 validation set. For all experiments, we use 1-trimap setting where GT trimap is given only at the first frame. "-V" denotes the error has been computed in all regions of the frames (see Sec. 4.1). Model Training method Comparison with state-of-the-art methods on public benchmarks. The trimap setting indicates how many GT trimaps are given as input, i.e., "full-trimap" for all frames, "20/40-frame" for every 20/40th frames, "1-trimap" for only at the first frame. . Ideally, the unknown area in a trimap needs to cover the entire soft matte area in the GT matte while being tight enough not to be trivial. To measure the quality of trimaps, we present two metrics: (1) Precision-T, precision of the estimated unknown area compared with the widely dilated GT unknown (dilation kernel size of 41 × 41) and (2) Recall-T, recall of the estimated unknown area compared with the minimum GT unknown (i.e., no dilation of the GT unknown regions). As shown in Table 1(f), OTVM significantly outperforms the state-of-the-art approach and achieves high precision and high recall. More analysis in the supplementary material. We present additional results on the input of the trimap encoder, a visual analysis of the hidden feature, the effect of the refinement module, an analysis of runtime and GPU memory consumption, an analysis of temporal stability, and quantitative results with image matting metrics (i.e. without "-V") in the supplementary material. Qualitative results on real-world videos. Fig. 4 shows qualitative results on a real-world video. We compare OTVM with the cascaded STM and FBA model, denoted by STM+FBA. As shown in the figure, the STM+FBA model cannot sharply separate foreground and unknown regions on the object boundary. The cascade baseline model fails to predict accurate trimaps in the challenging scenes, resulting in poor alpha mattes. In contrast, OTVM predicts trimaps reliably, resulting in accurate alpha mattes. More qualitative results are provided in the supplementary material. In addition, we provide full-frame results online: https://youtu.be/qkda4fHSyQE. Comparison with Comparison with non-deep learning methods. Fig. 5 shows a comparison with [11,8,29] on the Amira benchmark [9]. The benchmark [9] does not provide the GT alpha matte and we took the results of the previous methods from [29,11]. As shown in the figure, SVM [11] Video Matting [29] fail to predict hair strand details. In contrast, OTVM predicts the precise alpha matte. Limitations Since our framework takes only a single user-annotated trimap, not only the input trimap quality but also the rich cues of the object in the frame are important. In Fig. 6(a), OTVM struggles to generate accurate trimaps if the user-annotated frame contains almost no object information, resulting in failure to predict the alpha matte. In Fig. 6(b), although the object is presented in the annotated frame, we may struggle to predict precise alpha mattes if there is no strong signal for the foreground object in the given trimap. Conclusion In this paper, we present a new video matting framework that only needs a single user-annotated trimap. 
In contrast to the recent decoupled methods that focus on alpha prediction given the trimaps, we propose a coupled framework, OTVM, that performs trimap propagation and alpha prediction jointly. OTVM with one user-annotated trimap significantly outperforms the previous works in the same setting and even achieves comparable performance with the previous works using full-trimaps as input. OTVM is simple yet effective and works robustly in the practical one-trimap scenario. We hope that our research motivates follow-up studies and leads to practical video matting solutions. A Network Structure Details In this section, we describe detailed network structures for trimap propagation, alpha prediction, and alpha-trimap refinement. Trimap propagation network. Fig. S1 shows a detailed architecture of the trimap propagation network. The architecture is based on STM [39]. We employed two independent ResNet50 [20] encoders to embed memory and query. Here, the last layer (res5) is omitted to extract fine-scale features. The extracted memory and query features are embedded into keys and values via four independent 3 × 3 convolutional layers. Using the memory key and query key, the similarity is computed via non-local matching. Then the memory value is retrieved based on the computed similarity. The retrieved memory value and query value are concatenated along the channel dimension, and it is fed to the trimap decoder. In the trimap decoder, several residual blocks [21] and upsampling blocks [40,38] are employed. Finally, the propagated trimap is output from the trimap decoder. Alpha prediction network. A detailed implementation of the alpha prediction network is given in Fig. S2. We follow the architecture of FBA [14]. The ResNet50 [20] with Group Normalization [60] and Weight Standardization [43] is used for the alpha encoder. The alpha encoder takes an RGB frame and (either a generated or user-provided) trimap. The three channels of the trimap are encoded into eight channels that are one channel for softmax probability of the foreground mask, one channel for softmax probability of the background mask, and six channels for three different scales of Gaussian blurs of the foreground and background masks [25]. In the encoder structure, the striding in the last two layers (res4 and res5) is removed and the dilations of 2 and 4 are included, respectively [36]. The alpha decoder takes the resulting pyramidal features of the alpha encoder. In the alpha decoder, Pyramid Pooling Module (PPM) [66] is employed to increase the receptive field of the fine-scale feature. And then, several convolutional layers, leaky ReLU [37], and bilinear upsampling are followed. Finally, one channel of the alpha matte, three channels of the foreground RGB, three channels of the background RGB, and 64 channels of the hidden features are output from the alpha decoder. Alpha-trimap refinement module. We illustrate a detailed implementation of the alpha-trimap refinement module in Fig. S3. The module takes an RGB frame, trimap, predicted alpha matte, and hidden feature which is extracted from the alpha decoder. We employed two light-weight residual blocks with Group Normalization [60] and Weight Standardization [43]. The outputs are one channel of the refined alpha matte, three channels of the trimap, three channels of the foreground RGB, three channels of the background RGB, and 16 channels of the hidden features. All the outputs in the module will be used for the input of the trimap memory encoder. 
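A simplified PyTorch sketch of such a refinement head is given below (ours; only the input/output channel counts follow the description above, while the internal width, the output activations, and the omission of Weight Standardization are simplifications):

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    # Plain pre-activation residual block with GroupNorm; the paper additionally
    # uses Weight Standardization on the convolutions, omitted here for brevity.
    def __init__(self, ch):
        super().__init__()
        self.norm1, self.norm2 = nn.GroupNorm(8, ch), nn.GroupNorm(8, ch)
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        r = self.conv2(self.act(self.norm2(self.conv1(self.act(self.norm1(x))))))
        return x + r

class AlphaTrimapRefiner(nn.Module):
    # Input:  RGB(3) + trimap(3) + alpha(1) + decoder hidden(64) = 71 channels
    # Output: alpha(1) + trimap(3) + F(3) + B(3) + hidden(16)    = 26 channels
    def __init__(self, in_ch=71, mid_ch=32, out_ch=26):
        super().__init__()
        self.inp = nn.Conv2d(in_ch, mid_ch, 3, padding=1)
        self.blocks = nn.Sequential(ResBlock(mid_ch), ResBlock(mid_ch))
        self.out = nn.Conv2d(mid_ch, out_ch, 1)

    def forward(self, rgb, trimap, alpha, hidden):
        x = torch.cat([rgb, trimap, alpha, hidden], dim=1)
        y = self.out(self.blocks(self.inp(x)))
        alpha_r  = torch.sigmoid(y[:, 0:1])
        trimap_r = torch.softmax(y[:, 1:4], dim=1)
        fg, bg   = torch.sigmoid(y[:, 4:7]), torch.sigmoid(y[:, 7:10])
        hidden_r = y[:, 10:26]
        return alpha_r, trimap_r, fg, bg, hidden_r
```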
B Loss Functions For each predicted trimap, alpha matte, foreground RGB, and background RGB, we leverage several loss functions. In summary, we used cross-entropy loss for the predicted trimaps, as used in STM [39], and we used image matting losses used in FBA [14] and temporal coherence loss [52] for predicted alpha mattes, foreground RGB colors, and background RGB colors. In what follows, the specific definition of each loss function is described. Trimap. We use the cross-entropy loss for propagated trimap (L tri ) as follows: where t and i indicate time and spatial pixel index, respectively; y tri and p tri are GT trimap and propagated trimap, respectively. The loss for refined trimap ( L tri ) is computed by simply replacing p tri in Eq. (S1) with refined trimap p tri . The total loss for the propagated and refined trimap is where the reference frame (where the GT trimap is provided as the input) is given at t = 0. Alpha matte. With the GT alpha matte y α , the predicted alpha matte p α extracted from the alpha decoder, input RGB frame I, GT foreground RGB F , and GT background RGB B, we compute the L1 loss (L α L1 ), compositional loss (L α comp ) [61], Laplacian pyramid loss (L α lap ) [22], gradient loss (L α grad ) [54], and temporal coherence loss (L α tc ) [52] as follows: and the losses for the refined alpha matte ( L α L1 , L α comp , L α lap , L α grad , L α tc ) are computed by replacing p α in Eqs. (S3) to (S7) with refined alpha matte p α . The total loss for the predicted and refined alpha matte is defined as follows: Additionally, the losses for the predicted foreground color are not computed where the GT alpha value is 0 because the exact foreground color is not available in those regions. Each loss function is defined as follows: and the losses for the predicted colors extracted from the alpha-trimap refinement module ( L F B L1 , L F B lap , L F B comp , L F B excl , L F B tc ) are computed by replacing p F , p B in Eqs. (S9) to (S13) with p F , p B , respectively. The total loss for the foreground and background RGB colors is defined by Finally, all loss functions are summarized by L total = L tri total + L α total + 0.25L F B total . (S15) C Trimap Input for the Trimap Encoder Ideally, the hidden feature can subsume the trimap information. We study the effect of the trimap input for the trimap encoder, and the results are given in Table S1. We empirically found that explicitly providing the trimap input is helpful. We conjecture that trimap input facilitates the training of the trimap propagation module under the insufficient video training dataset. D Visual Analysis of the Hidden Feature To analyze what information is contained in the hidden features, we visualize the learned hidden feature through k-means clustering (k=8) in Fig. S4. For a fair comparison, we also apply k-means clustering to the predicted trimap. As shown in the figure, the hidden feature embeds more information than trimap: (1) it subdivides the unknown regions into several levels; (2) it includes semantic information in the background regions that would be helpful to estimate accurate background regions of the next frame. RGB input hidden feature (k=8) predicted trimap (k=8) GT trimap Fig. S4. Visualization of the hidden feature and trimap. For a fair comparison, we apply k-mean clustering to both hidden feature and predicted trimap. 
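The clustering used for this visualization can be reproduced along these lines (our sketch; scikit-learn's KMeans stands in for whatever implementation was actually used):

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_feature_map(feat, k=8, seed=0):
    """Cluster a dense feature map for visualization.

    feat: array of shape (C, H, W), e.g. the 16-channel hidden feature or the
          3-channel predicted (soft) trimap.
    Returns an (H, W) label map with values in {0, ..., k-1} that can be
    rendered with any categorical colormap.
    """
    c, h, w = feat.shape
    pixels = feat.reshape(c, -1).T                                  # (H*W, C)
    labels = KMeans(n_clusters=k, random_state=seed, n_init=10).fit_predict(pixels)
    return labels.reshape(h, w)
```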
E Effect of the Refinement Module on Trimap Estimation To clearly show the effects of the refinement module, we measure a trimap performance of the output from the trimap propagation module in OTVM. The result is given in Table S2. We further show the effect of the refinement module qualitatively in Fig. S5. In the figure, the trimap propagation fails in the zoomed region due to motion blur, while the refinement module corrects it. F Runtime and GPU Memory Consumption In Fig. S6, we show the inference time and memory consumption at each frame. We used high-resolution (1920×1080) video and tested with one NVIDIA GeForce 1080 Ti GPU. As shown in the figure, OTVM slows down and consumes more memory for every 10 G Temporal Stability To demonstrate the superiority of OTVM in terms of temporal stability, we show per frame comparison in Fig. S7. In the figure, we did not cherry-pick the results and show results in all sequences of VideoMatting108 validation set. As shown in Fig. S7, decoupled approach, i.e., STM+FBA, is extremely unstable in sequences (4), (10), (15), (26), and (28). In contrast, OTVM achieves temporally stable results in most sequences. H More Quantitative Results To encourage comparison for future works, we present additional results by measuring errors in a different way from the tables in the main paper. Specifically, we computed errors on the full-frame to capture the errors that occurred by inaccurate trimap propagation, and we denoted it with "-V" in Table 1 of the main paper. We re-measure by computing errors only on the unknown regions according to the official metric and report in Table S3. In contrast to those, Table 2 in the main paper is computed errors only on the unknown regions for fair comparisons with previous works. We re-measure of our implemented methods by computing errors on the full-frame and report in Table S4. Furthermore, following [65], we show results with narrow and wide trimap settings in Tables S5 and S6. As shown in the tables, OTVM always outperforms the state-of-the-art methods in any trimap settings. Table S4. Comparison with state-of-the-art methods on public benchmarks. The trimap setting indicates how many GT trimaps are given as input, i.e., "full-trimap" for all frames, "20/40-frame" for every 20/40th frames, "1-trimap" for only at the first frame. (a) Comparison on VideoMatting108 validation set. In this experiment, we use the medium trimap setting. † denotes our reproduced results using our training setup. I More Qualitative Results We present additional qualitative results on real-world videos in Figs. S8 and S9, results on VideoMatting108 [65] with medium width trimap in Figs. S10 to S12, and results on DVM [52] in Figs. S13 and S14. We provided user-annotated (or GT) trimap only at the first frame. For all qualitative results in this section, we further provide full-frame results online: https://youtu.be/qkda4fHSyQE.
A case of bilateral renal oncocytomas in the setting of Birt-Hogg-Dube syndrome Birt-Hogg-Dube syndrome is a rare autosomal dominant disorder characterized by pulmonary cysts, renal tumors, and dermal lesions. This syndrome results from a mutation in the gene folliculin, located on chromosome 17p11.2. Herein, a case is described in which the presence of bilateral renal oncocytomas led to the diagnosis of Birt-Hogg-Dube syndrome via an interdisciplinary effort by radiology, pathology, and primary care medicine. No radiographic features alone are sufficient to confirm the diagnosis of Birt-Hogg-Dube. A high index of suspicion must be maintained by both the pathologist and radiologist in the appropriate clinical setting. Introduction Birt-Hogg-Dube syndrome (BHD) is a rare autosomal dominant genetic disorder characterized by pulmonary cysts, renal tumors, and dermal lesions [1] . First described in 1977 by Birt, Hogg, and Dube in a family with skin lesions, this condition is due to a germline mutation in the folliculin gene located on chromosome 17p11.2 [1] . The actual incidence of BHD is unknown, and this syndrome typically presents in patients over 20 years of age [1 ,2] . Fibrofolliculomas are benign tumors of the ✩ Acknowledgment: This study was not supported by any funding. ✩✩ Competing interests: The authors have no disclosures. ★ Ethical approval: IRB approval was not required by our institution for this case report. ★★ Not previously presented. * Corresponding author. hair follicle and are the most common skin findings in BHD; they appear as small dome-shaped yellowish-tan papules on the face, neck, and extremities [2] . Pulmonary cysts are an additional hallmark, however, normal chest computed tomography (CT) is reported in up to 12% of patients [3] . Cysts vary in shape, size, and location. Pulmonary cysts may rupture and cause spontaneous pneumothorax [3 ,4] . Bilateral renal cancer was first associated with BHD in 1993 [5] has since been established [4] . Renal tumors are seen in 14-34% of BHD cases, and oncocytomas comprise 5% of the renal masses seen in this syndrome [4] . Herein, a case is described in which the presence of bilateral renal oncocytomas led to the diagnosis of Birt-Hogg-Dube syndrome via an interdisciplinary effort by radiology, pathology, and primary care medicine. Case presentation A 72-year-old gentleman with a past medical history of uncontrolled hypertension, hyperlipidemia, glaucoma, and chronic obstructive pulmonary disease presented to his primary care physician (PCP) for back pain and uncontrolled hypertension. The patient's PCP ordered a magnetic resonance angiogram (MRA) abdomen, which revealed multiple heterogeneous T2 hyperintense, T1 hypointense renal masses with heterogeneous arterial enhancement ( Fig.1 A). The largest renal mass in the left upper pole measured 7.8 × 8.2 × 7.5 cm and contained a hypointense central stellate scar indicative of fibrosis (Fig.B). There was one additional mass with similar imaging characteristics in the left kidney. The right kidney contained a 4.7 × 5.2 × 5.1 cm T2 hyperintense, T1 hypointense heterogeneous mass with a hypointense central stellate scar. There were four additional lesions within the mid and upper pole of the right kidney. The patient was referred to interventional radiology for bilateral percutaneous renal biopsy ( Fig.2 ). Biopsy revealed tumors with densely eosinophilic cytoplasm with round, regular nuclei in a fibrotic background ( Fig.3 A). Colloidal iron and CK7 stains were negative. 
CD117 was positive ( Fig. 3 B). Pathology confirmed the diagnosis of bilateral renal oncocytomas. Given the peculiar finding of bilateral renal oncocytomas, pathology initiated discussions with radiology over the possibility of BHD. A retrospective chart review showed that the patient had a prior hospital admission for pneumonia. The patient had no history of pneumothorax. Upon review of prior chest CT, a thin-walled pulmonary cyst was discovered in the left upper lobe ( Fig. 4 A). A follow-up thin slice axial chest CT was ordered for further characterization of pulmonary cysts. A second smaller sub-centimeter pulmonary cyst was found in the left upper lobe ( Fig. 4 B). After further discussion with the patient's PCP, the patient was noted to have dome-shaped tan papules on his neck and chest ( Fig.5 ). Pathology confirmed fibrofolliculomas on microscopy ( Fig. 6 ). The patient was subsequently diagnosed with BHD and referred to an outside center for genetic testing and counseling. The patient was not aware of any relevant family history. Discussion Pulmonary cysts, renal tumors, and skin lesions are the hallmarks of BHD [1] . Given the rarity and wide clinical variability of BHD, this syndrome likely remains underdiagnosed. Skin manifestations and fibrofolliculomas may go initially unnoticed by the clinician, as evidenced by the current case [3] . As described, pathological confirmation of bilateral oncocytomas led to interdisciplinary discussions which allowed for retrospective review of patient images and the discovery of pulmonary cysts. After raising the possibility of BHD with the patient's PCP, the patient's skin findings were clinically identified. The appearance of fibrofolliculomas on the chest, neck, and abdomen, in particular, may be easily missed. Thus, radiologists and pathologists must maintain a high index of suspicion for this syndrome in the appropriate clinical setting. From a radiological perspective, multiple unchanging pulmonary cysts with recurrent spontaneous pneumothorax may be the initial presentation of a patient [3] . Spontaneous pneumothorax is thought to occur secondary to cyst rupture [3] . Pulmonary cysts are found in over 85% of patients with BHD [4] . Several radiology studies have attempted to characterize the lung cysts found in BHD and differentiate them from lymphangioleiomyomatosis (LAM), pulmonary Langerhans's cell histiocytosis (LCH), and lymphocytic interstitial pneumonia (LIP) [2] . In contradistinction to these other entities, the pulmonary cysts of BHD do not progress over time and the lungs lack other interstitial changes [2] . Cysts vary in shape, size, and location, and they show no central or peripheral predominance [3] . In one study, thoracic CT was normal in 2 out of 17 patients [3] . The currently described patient had no history of spontaneous pneumothorax, although two small subcentimeter pulmonary cysts were visualized on chest CT in the left upper lobe. Agarwal et al (2010) reported 47% of their patients had fewer than ten cysts [3] . While pulmonary cysts may aid in the diagnosis, they lack the specificity required for diagnostic confirmation. Small pulmonary cysts may be overlooked or considered benign on initial radiological reports. In the current case, a retrospective review of patient imaging after the diagnosis of bilateral renal oncocytomas revealed the patient's pulmonary cysts. Both malignant and benign renal tumors may occur in BHD, and renal malignancy develops in 15-30% of patients [1 ,4 ,6] . 
Studies show that renal tumors in BHD occurred at an average age of 50.4 years [7 ,8] . Bilateral renal cancer in BHD was first described in 1993 [5] . Various types of RCC, including chromophobe, mixed, clear cell, and papillary subtypes have since been associated with BHD [4] . Oncocytomas account for 5% of the renal tumors in BHD [4] . Bilateral oncocytomas are a peculiar finding in patients, and this finding was the first indication to pathology that an underlying syndrome may be present. BHD is caused by mutations in the follicular gene (FLCN) located on chromosome 17p11.2 [9 ,10] . Folliculin acts as a tumor suppressor via the mTOR signaling pathway [1] . Mutations in folliculin cause a loss of folliculin protein and promote kidney tumorigenesis [1] . All patients with BHD should undergo genetic testing with a DNA panel and receive genetic counseling [1] . Given the autosomal dominant nature of the disease, when a mutation is detected, genetic testing and counseling for at-risk family members is also indicated [1] . Folliculin-mutations may occur in carrier patients without presentation of skin lesions. Genetic testing remains especially important in lieu of this syndromic variability and potential for incomplete penetrance [1] . The lack of family history in the current case may be attributable to patient unawareness, incomplete penetrance of folliculin mutations in family members, or full penetrance in family members who had yet to be diagnosed. The patient was referred for genetic testing and possible testing of his at-risk family members. The prevalence of incomplete penetrance is not currently known, and further genetic prospective studies are needed to investigate folliculin mutation penetrance and variability [1] . Surveillance for renal malignancy is indicated for patients and at-risk relatives, although no standardized guidelines currently exist [1] . Studies have investigated both CT and MRI for surveillance [11] . MRI is thought to be superior to CT, as annual CT surveillance would impart high radiation doses to the patient [1] . Ultrasound is too insensitive for appropriate detection and surveillance [1] . Treatment of BHD depends on the pathology of renal masses present. If renal malignancy is diagnosed, nephronsparing surgery is typically the first-line treatment [1] . Other nephron-sparing techniques such as radiofrequency ablation may be considered given lesion size smaller than 3 cm, and interventional radiology may play a bigger role in the treatment of BHD-related renal cancers in the future [1 ,12] . The current patient required no treatment given the benignity of oncocytomas. In summary, a case of BHD in a 72-year-old male has been presented. Interdisciplinary communication between pathol-ogy and radiology was critical for this diagnosis. While the hallmarks of BHD include skin lesions, pulmonary cysts, and renal tumors, some of these findings may be easily overlooked. This fact places an additional burden upon radiologists and pathologists to have a high degree of suspicion for this entity in the appropriate clinical setting. When a patient presents with recurrent spontaneous pneumothorax with pulmonary cysts on imaging, BHD should be included in the differential. In the described case, the peculiarity of bilateral renal oncocytomas as noted by pathology ultimately led the radiologist to discover pulmonary cysts, the clinician to discover skin lesions, and the interdisciplinary team to arrive at the correct diagnosis. 
Patient consent statement Informed Consent: Written informed consent was obtained from all individual participants in this study. Written consent for publication was obtained for every individual whose data are included in this study.
Stability properties of a projector-splitting scheme for dynamical low rank approximation of random parabolic equations We consider the Dynamical Low Rank (DLR) approximation of random parabolic equations and propose a class of fully discrete numerical schemes. Similarly to the continuous DLR approximation, our schemes are shown to satisfy a discrete variational formulation. By exploiting this property, we establish stability of our schemes: we show that our explicit and semi-implicit versions are conditionally stable under a parabolic type CFL condition which does not depend on the smallest singular value of the DLR solution; whereas our implicit scheme is unconditionally stable. Moreover, we show that, in certain cases, the semi-implicit scheme can be unconditionally stable if the randomness in the system is sufficiently small. Furthermore, we show that these schemes can be interpreted as projector-splitting integrators and are strongly related to the scheme proposed by Lubich et al. [BIT Num. Math., 54:171-188, 2014; SIAM J. on Num. Anal., 53:917-941, 2015], to which our stability analysis applies as well. The analysis is supported by numerical results showing the sharpness of the obtained stability conditions. Introduction Many physical and engineering applications are modeled by time-dependent partial differential equations (PDEs) with input data often subject to uncertainty due to measurement errors or insufficient knowledge. These uncertainties can be often described by means of probability theory by introducing a set of random variables into the system. In the present work, we consider a random evolutionary PDEu with random initial condition, random forcing term and a random linear elliptic operator L. Many of the numerical methods used to approximate such problems, require evaluating the, possibly expensive, model in many random parameters. In this regard, the use of reduced order models (e.g. Proper orthogonal decomposition [5,6] or generalized Polynomial chaos expansion [37,39,26,10,33]) is of a high interest. When the dependence of the solution on the random parameters significantly changes in time, the use of time-varying bases is very appealing. In the present work, we consider the dynamical low rank (DLR) approximation (see [34,32,7,28,4,22,23,16]) which allows both the deterministic and stochastic basis functions to evolve in time while exploiting the structure of the differential equation. An extension to tensor differential equations was proposed in [25,31]. The DLR approximation of the solution is of the form where R is the rank of the approximation and is kept fixed in time,ū(t) = E[u(t)] is the mean value of the DLR solution, {U j (t)} R j=1 is a time dependent set of deterministic basis functions, {Y j (t)} R j=1 is a time dependent set of zero mean stochastic basis functions. By suitably projecting the residual of the differential equation one can derive evolution equations for the mean valueū and the deterministic and stochastic modes {U j } R j=1 , {Y j } R j=1 (see [34,23]), which in practice need to be solved numerically. An efficient and stable discretization scheme is therefore of a high interest. In [34,23], Runge-Kutta methods of different orders were applied directly to the system of evolution equations for the deterministic and stochastic basis functions. In the presence of small singular values in the solution, the system of evolution equations becomes stiff as an inversion of a singular or nearly-singular matrix is required to solve it. 
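A small self-contained illustration of this stiffness (ours, not taken from the cited works): the Gram matrix M_ij = <U_i, U_j>, whose inverse enters the evolution equations for the stochastic modes, becomes arbitrarily ill-conditioned as the smallest singular value of the low-rank factor shrinks.

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((1000, 2)))   # orthonormal columns

for sigma in [1.0, 1e-3, 1e-6, 1e-9]:
    U = Q @ np.diag([1.0, sigma])      # rank-2 basis with a small second mode
    M = U.T @ U                        # Gram matrix that must be inverted
    print(f"sigma = {sigma:8.1e}   cond(M) = {np.linalg.cond(M):.2e}")
# cond(M) grows like 1/sigma**2, so a direct Runge-Kutta discretization of the
# DO equations has to resolve increasingly stiff dynamics.
```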
Applying standard explicit or implicit Runge-Kutta methods leads to instabilities (see [20]). In this respect, the projector-splitting integrators (proposed in [29,30] and applied in e.g. [12,11]) are very appealing. In [20], the authors showed that when applying the projector-splitting method for matrix differential equations one can bound the error independently of the size of the singular values, under the assumption that f − L maps onto the tangent bundle of the manifold of all R-rank functions up to a small error of magnitude ε. A limitation of their theoretical result, as the authors point out, is that it requires a Lipschitz condition on f − L and is applicable to discretized PDEs only under a severe condition tL 1 where t is the step size and L is the Lipschitz constant, even for implicit schemes. Such condition is, however, not observed in numerical experiments. In [21], the authors proposed projected Runge-Kutta methods, where following a Runge-Kutta integration, the solution first leaves the manifold of R-rank functions by increasing its rank, and then is retracted back to the manifold. Analogous error bounds as in [20] are obtained, also for higher order schemes, under the same ε-approximability condition on f − L and under a restrictive parabolic condition on the time step. In this work we propose a class of numerical schemes to approximate the evolution equations for the mean, the deterministic basis and the stochastic basis, which can be of explicit, semi-implicit or implicit type. Although not evident at first sight, we show that the explicit version of our scheme can be reinterpreted as a projector-splitting scheme, whenever the discrete solution is full-rank, and is thus equivalent to the scheme from [29,30]. However, our derivation allows for an easy construction of implicit or semi-implicit versions. The main goal of this work is to prove the stability of the proposed numerical schemes for a parabolic problem (2). We first show that the continuous DLR solution satisfies analogous stability properties as the weak solution of the parabolic problem (1). We then analyze the stability of the fully discrete schemes. Quite surprisingly, the stability properties of both the discrete and the continuous DLR solutions do not depend on the size of their singular values, even without any ε-approximability condition on f − L. The implicit scheme is proven to be unconditionally stable. This improves the stability result which could be drawn from the error estimates derived in [20]. The explicit scheme remains stable under a standard parabolic stability condition between time and space discretization parameters for an explicit propagation of parabolic equations. The semi-implicit scheme is generally only conditionally stable under again a parabolic stability condition, and becomes unconditionally stable under some restrictions on the size of the randomness of the operator. As an application of the general theory developed in this paper, we consider the case of a heat equation with a random diffusion coefficient. We dedicate a section to particularize the numerical schemes and the corresponding stability results to this problem. The semi-implicit scheme turns out to be always unconditionally stable if the diffusion coefficient depends affinely on the random variables. The sharpness of the obtained stability conditions on the time step and spatial discretization is supported by the numerical results provided in the last section. 
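For concreteness, "affine dependence on the random variables" for the diffusion coefficient means, in a typical instance (our illustration; the precise assumptions are stated in Section 6),

a(\omega, x) \;=\; a_0(x) + \sum_{m=1}^{M} y_m(\omega)\, a_m(x), \qquad x \in D,\ \omega \in \Gamma,

with deterministic functions a_0, \dots, a_M and random variables y_1, \dots, y_M, subject to uniform bounds 0 < a_{\min} \le a(\omega, x) \le a_{\max} so that the induced operator remains coercive and bounded.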
A big part of the paper is dedicated to proving a variational formulation of the discretized DLR problem, analogous to the variational formulation of the continuous DLR problem (see [32,Prop. 3.4]). Such formulation is a key for showing the stability properties and, as we believe, might be useful for some further analysis of the proposed discretization schemes. It as well applies to the projector-splitting integrator from [29,30] provided the solution remains full rank at all time steps. However, in the rank-deficient case, our schemes may result in different solutions. We dedicate a subsection to show that a rank-deficient solution obtained by our scheme still satisfies a suitable discrete variational formulation and consequently has the same stability properties as the full-rank case. The outline of the paper is the following: in Section 2 we introduce the problem and basic notation; in Section 3 we describe the DLR approximation and recall its geometrical interpretation with variational formulation. In Section 4 we describe the discretization of the DLR method and propose three types of time integration schemes. We then derive a variational formulation for the discrete DLR solution and show its reinterpretation as a projector-splitting scheme. Section 5 is dedicated to proving the stability properties of both continuous and discrete DLR solution. In Section 6, we analyze the case of a heat equation with random diffusion coefficient and random initial condition. Finally in Section 7 we present several numerical tests that support the derived theory. Section 8 draws some conclusions. Problem statement We start by introducing some notation. Let (Γ, F, ρ) be a probability space. Consider the Hilbert space L 2 ρ = L 2 ρ (Γ) of real valued random variables on Γ with bounded second moments, with associated scalar product v, w L 2 ρ = Γ vw dρ and norm v L 2 ρ = v, v L 2 ρ . Consider as well two separable Hilbert spaces H and V with scalar products ·, · H , ·, · V , respectively. Suppose that H and V form a Gelfand triple (V, H, V ), i.e. V is a dense subspace of H and the embedding V → H is continuous with a continuity constant C P > 0. Let ) is a Gelfand triple as well (see e.g. [27,Th. 8.17]), and we have We define the mean value of a random variable v as E[v] = Γ v dρ, where the integral here denotes the Bochner integral in a suitable sense, depending on the co-domain of the random variable considered. In what follows, we will use the notationv = E[v] and v * := v −v. Moreover, we let (·, ·) V V,L 2 ρ denote the dual pairing between L 2 ρ (Γ; V ) and L 2 ρ (Γ; V ): With this notation at hand, we now consider a random operator L with values in the space of linear bounded operators from V to V that is uniformly bounded and coercive, i.e. a Borel measurable function Associated to the random operator L, we introduce the operator L, defined as Notice that for any strongly measurable u : Γ → V the map ω ∈ Γ → L(ω)u(ω) ∈ V is strongly measurable, V being separable, see Proposition A in the appendix. From the uniform boundedness of L it follows immediately that, if u is square integrable, then L(u) is square integrable as well and L(u) which is coercive and bounded with coercivity and continuity constant C L and Then, given a final time T > 0, a random forcing term f ∈ L 2 (0, T ; L 2 ρ (Γ; H)) and a random initial condition u 0 ∈ L 2 ρ (Γ; V ), we consider now the following parabolic problem: Find a solution u true ∈ L 2 (0, T ; The general theory of parabolic equations (see e.g. 
[38]) can be applied to problem (6), at least in the case of L 2 ρ (Γ; V ), L 2 ρ (Γ; H), L 2 ρ (Γ; V ) being separable, e.g. when Γ is a Polish space and F is the corresponding Borel σ-algebra. We conclude then that problem (6) has a unique solution u true which depends continuously on f and u 0 . We note that the theory of parabolic equations would allow for less regular data f ∈ L 2 (0, T ; L 2 ρ (Γ; V )) and u 0 ∈ L 2 ρ (Γ; H). However, in this work we restrict our attention to the case f ∈ L 2 (0, T ; L 2 ρ (Γ; H)), u 0 ∈ L 2 ρ (Γ; V ). Dynamical low rank approximation and its variational formulation Dynamical low rank (DLR) approximation, or dynamically orthogonal (DO) approximation (see e.g. [23,34,24]) seeks an approximation of the solution u true of problem (6) in the form whereū(t) ∈ V , {U j (t)} R j=1 ⊂ V is a time dependent set of linearly independent deterministic basis functions, {Y j (t)} R j=1 ⊂ L 2 ρ is a time dependent set of linearly independent stochastic basis functions. In what follows, we focus on the so called Dual DO formulation (see e.g. [32]), in which the stochastic basis are only required to be linearly independent at all times. We call R the rank of a function u of the form (7). To ensure the uniqueness of the expansion (7) for a given initialization u(0) =ū(0) + R j=1 U j (0)Y j (0), we consider the following conditions: and the gauge condition (also called DO condition) (see [19]). Plugging the DLR expansion (7) into the equation (6) and following analogous steps as proposed in [34] leads to the DLR system of equations presented next. Definition 3.1 (DLR solution). We define the DLR solution of problem (6) as are solutions of the following system of equations: with the initial conditionsū(0), is a good approximation of u 0 . In (12), the matrix M ∈ R R×R is defined as M ij := U i , U j H , 1 ≤ i, j ≤ R and P ⊥ Y denotes the orthogonal projection operator in the space L 2 ρ (Γ) on the orthogonal complement of the R-dimensional subspace Y = span{Y 1 , . . . , Y R }, i.e. For the initial condition one can use for instance a truncated Karhunen- are the first R (rescaled) eigenfunctions of the covariance operator C u0 : H → H defined as In what follows we will use the notation U = (U 1 , . . . , U R ) and Y = (Y 1 , . . . , Y R ). Then, the approximation (7) reads u =ū + U Y . The rest of the section gives a geometrical interpretation of the DLR method and derives a variational formulation, following to a large extent derivations from [32]. Such geometrical interpretation and consequent variational formulation will be key to derive the stability results of the numerical schemes, discussed in Section 5.2. We first introduce the notion of a manifold of R-rank functions, characterize its tangent space in a point as well as the orthogonal projection onto the tangent space. The vector space consisting of all square integrable random variables with zero mean value will be denoted by L 2 ρ,0 = L 2 ρ,0 (Γ) ⊂ L 2 ρ (Γ). Definition 3.2 (Manifold of R-rank functions). By M R ⊂ L 2 ρ,0 (Γ; V ) we denote the manifold consisting of all rank R random functions with zero mean It is well known that M R admits an infinite dimensional Riemannian manifold structure ( [15]). where U = span{U 1 , . . . , U R } and P U [·] is the H-orthogonal projection onto the subspace U. For more details, see e.g. [32]. Note that Π U Y [·] can be equivalently written . In the following we will extend the domain of the projection operator Π U Y . 
Further, we will state two lemmas used to establish Theorem 3.7, which presents the variational formulation of the DLR approximation. The operator Π U Y can be extended to an operator from L 2 . The extended operator satisfies the following. Proof. First, we show that Indeed, where in the forth step we applied Theorem 8.13 from [27]. Now we proceed with proving (17) We are now in the position to state the first variational formulation of the DLR equations. Lemma 3.6. Let U, Y be the solution of the system (11)- (12). Then the zeromean part of the DLR solution u * = U Y satisfies Proof. First, we multiply equation (11) by Y j and take its weak formulation in L 2 ρ . Summing over j results in Analogously, we multiply (12) by U j and take its weak formulation in V Summing over j, this leads to Summing the derived equations we obtain In particular, this holds for any z being a Bochner integrable simple function, the collection of which is dense in L 2 ρ (Γ; V ) (see [27,Th. 8.15]). We can finally state the variational formulation corresponding to the DLR equations (10)- (12). Theorem 3.7 (DLR variational formulation). Letū, U, Y be the solution of the system (10)- (12). Then the DLR solution u =ū + U Y satisfies Proof. Based on Lemma 3.6 and Lemma 3.5 we can write which can be equivalently written as exploiting the fact that u * + L * (u) − f * , w V V,L 2 ρ = 0, ∀w ∈ V . Likewise, equation (10) can be equivalently written as Summing (20) and (21) leads to the sought equation (19). Recently, the existence and uniqueness of the dynamical low rank approximation for a class of random semi-linear evolutionary equations was established in [19] and for linear parabolic equations in two space dimensions with a symmetric operator L in [3]. Discretization of DLR equations In this section we describe the discretization of the DLR equations that we consider in this work. In particular, we focus on the time discretization of (10)- (12) and propose a staggered time marching scheme that decouples the update of the spatial and stochastic modes. Afterwards, we will show that the proposed scheme can be formulated as a projector-splitting scheme for the Dual DO formulation and comment on its connection to the projector-splitting scheme from [29]. As a last result we state and prove a variational formulation of the discretized problem. Stochastic discretization We consider a discrete measure given by {ω k , λ k }N k=1 , i.e. a set of sample points {ω k }N k=1 ⊂ Γ with R <N < ∞ and a set of positive weights {λ k }N k=1 , λ k > 0, N k=1 λ k = 1, which approximates the probability measure ρρ The discrete probability space (Γ = {ω k }N k=1 , 2Γ,ρ) will replace the original one (Γ, F, ρ) in the discretization of the DLR equations. Notice, in particular, that a random variable Z :Γ → R measurable on (Γ, 2Γ,ρ) can be represented as a vector z ∈ RN with z k = Z(ω k ), k = 1, . . . ,N . The sample points {ω k }N k=1 can be taken as iid samples from ρ (e.g. Monte Carlo samples) or chosen deterministically (e.g. deterministic quadrature points with positive quadrature weights). The mean value of a random variable Z with respect to the measurê ρ is computed as We introduce also the semi-discrete scalar products ·, · ,L 2 ρ with = V, H and their corresponding induced norms · ,L 2 ρ . Note that the semi-discrete bilinear form ·, · L,ρ defined as is coercive and bounded, with the same coercivity and continuity constants C L , C B , defined in (4), (5), respectively. 
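As a toy illustration of these semi-discrete quantities (ours; Gauss-Legendre collocation on [-1, 1] with normalized weights is just one admissible choice of the points and weights):

```python
import numpy as np

def discrete_mean(z, w):
    """E_rho_hat[Z] = sum_k lambda_k * Z(omega_k) for the discrete measure."""
    return np.sum(w * z)

def discrete_inner(y, z, w):
    """<Y, Z>_{L^2_rho_hat} = sum_k lambda_k * Y(omega_k) * Z(omega_k)."""
    return np.sum(w * y * z)

# Gauss-Legendre points/weights on [-1, 1], rescaled so the weights sum to one
# (i.e. the uniform probability measure on [-1, 1]).
pts, wts = np.polynomial.legendre.leggauss(16)
wts = wts / wts.sum()
z = pts**2                                          # samples of Z(omega) = omega^2
print(discrete_mean(z, wts))                        # approx E[omega^2] = 1/3
print(discrete_inner(z - discrete_mean(z, wts), z, wts))   # Var(omega^2) = 4/45
```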
Space discretization We consider a general finite-dimensional subspace V h ⊂ V whose dimension is larger than R and is determined by the discretization parameter h. Eventually, we will perform a Galerkin projection of the DLR equations onto the subspace V h . We further assume that an inverse inequality of the type holds for some p ∈ N and C I > 0. Time discretization For the time discretization we divide the time interval into N equally spaced subintervals 0 = t 0 < t 1 < · · · < t N = T and denote the time step by t := t n+1 − t n . Note that the DLR solution u =ū + U Y appears in the right hand side of the system of equations (10)- (12) both in the operator L and in the projector operator onto the tangent space to the manifold. We will treat these two terms differently. Concerning the projection operator, we adopt a staggered strategy, where, given the approximate solution u n =ū n + U n Y n , we first update the meanū n+1 , then we update the deterministic basis U n+1 projecting on the subspace span{Y n }; finally, we update the stochastic basis Y n+1 projecting on the orthogonal complement of span{Y n } and on the updated subspace span{U n+1 }. This staggered strategy resembles the projection splitting operator proposed in [29]. We will show later in Section 4.4 that it does actually coincide with the algorithm in [29]. Concerning the operator L, we will discuss hereafter different discretization choices leading to explicit, semi-implicit or fully implicit algorithms. Fully discrete problem We give in the next algorithm the general form of the discretization schemes that we consider in this work. 1. Compute the mean valueū n+1 such that is the analogue of the projector defined in (13) but in the discrete space L 2 ρ . Reorthonormalize the stochastic basis: find 5. Form the approximated solution at time step t n+1 as The expressions L(u n h,ρ , u n+1 h,ρ ) and f n,n+1 stand for an unspecified time integration of the operator L(u(t)) and right hand side f (t), t ∈ [t n , t n+1 ] and v * denotes the 0-mean part of a random variable v ∈ L 2 ρ with respect to the discrete The newly computed solution u n+1 h,ρ belongs to the tensor product space (25) can be rewritten as a deterministic linear system of R×N equations with R×N unknowns. This system can be decoupled into a linear system of size R × R for each collocation point. If the deterministic modesŨ n+1 are linearly independent, the system matrix is invertible. Otherwise we interpret (25) in a minimal-norm least squares sense, choosing a solutioñ Y n+1 , if it exists, that minimizes the norm Ỹ n+1 − Y n L 2 ρ . This is discussed in more details in Section 4.3. The following lemma shows that the scheme (23)-(25) satisfies some important properties that will be essential in the stability analysis presented in Section 5. Lemma 4.2 (Discretization properties). Assuming that a solution (Ỹ n+1 ,Ũ n+1 ,ū n+1 ) exists, the following properties hold for the discretization (23)-(25): 1. Discrete DO condition: 1. In the following proof we assume that the matrixM n+1 = Ũ n+1 ,Ũ n+1 H is full rank. For the rank-deficient case, we refer the reader to the proof of Lemma 4.12. Let us multiply equation (25) by Y n from the left and take the L 2 ρ -scalar product. Since the second term involves P ⊥ ρ,Y n , the scalar product of Y n with the second term vanishes which, under the assumption thatM n+1 is full rank, gives us the discrete DO condition 2. This is a consequence of the fact that we have Eρ[Y n ] = 0 and 3. 
This is immediate from the discrete DO property and Y n , Y n To complete the discretization scheme (23)-(25) we need to specify the terms L(u n h,ρ , u n+1 h,ρ ) and f n,n+1 . The DLR system stated in (10)-(12) is coupled. Therefore, an important feature we would like to attain is to decouple the equations for the mean value, the deterministic and the stochastic modes as much as possible. We describe hereafter 3 strategies for the discretization of the operator evaluation term L(u n h,ρ , u n+1 h,ρ ), and the right hand side f n,n+1 . Explicit Euler scheme The explicit Euler scheme performs the discretization It decouples the system (23)-(25) since, for the computation of the new modes, we require only the knowledge of the already-computed modes. The equations for the stochastic modes {Ỹ n+1 j } R j=1 are coupled together through the matrix M n+1 = Ũ n+1 ,Ũ n+1 H ∈ R R×R but are otherwise decoupled between collocation points (i.e.N linear systems of size R have to be solved). Implicit Euler scheme The implicit Euler scheme performs the discretization This method couples the system (23)-(25) in a non-trivial way, which is why we do not focus on this method in our numerical results. We mention it in the stability estimates section (Section 5.2) for its interesting stability properties. Semi-implicit scheme Assume that our operator L can be decomposed into two parts where L det : V → V is a linear deterministic operator such that it induces a bounded and coercive bilinear form ·, and that its action on a function Then, L det is also a linear operator L det : and induces a bounded coercive bilinear form on We propose a semi-implicit time integration of the operator evaluation term whereas for f n,n+1 we can either take f n,n+1 = f (t n+1 ) or f n,n+1 = f (t n ) or any convex combination of both. The resulting scheme is detailed in the next lemma. Proof. The equation for the mean (23) using the semi-implicit scheme (30) can be written as Noticing that gives us equation (31). Concerning the equation for the deterministic modes we derive The term T 3 vanishes since Eρ[Y n ] = 0 and the term T 4 can be further expressed as where we used the discrete DO condition (28). Finally, the stochastic equation (25) can be written as The term T 5 vanishes since L * det (ū n+1 ) = 0. As for T 6 , we derive which leads us to the sought equation (33). We see from (31)-(33) that, similarly to the explicit Euler scheme, the equations for the mean, deterministic modes and stochastic modes are decoupled. If the spatial discretization of the PDEs (31) and (32) is performed by the Galerkin approximation, the final linear system involves the inversion of the matrix where {ϕ i } is the basis of V h in which the solution is represented. Both the mass matrix ϕ j , ϕ i H and the stiffness matrix ϕ j , ϕ i L det are positive definite and do not evolve with time, so that an LU factorization can be computed once and for all at the beginning of the simulation. Concerning the stochastic equation (33), we need to solve a linear system with the matrixM n+1 + t Ũ n+1 ,Ũ n+1 L det for each collocation point ω k , unlike the explicit Euler method, where the system involves only the matrixM n+1 . The matrixM n+1 + t Ũ n+1 ,Ũ n+1 L det is symmetric and positive definite with the smallest singular value bigger than that ofM n+1 . Notice, however, that ifM n+1 is rank deficient, also the matrix M n+1 + t Ũ n+1 ,Ũ n+1 L det will be so. 
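To make the computational remarks above concrete, the following Python sketch spells out the linear-algebra pattern of one staggered step with the semi-implicit operator evaluation: a single LU factorization of the time-independent mass-plus-stiffness matrix, reused for the mean and for every deterministic mode, followed by one R x R symmetric positive-definite solve shared by all collocation points for the stochastic modes. The callables `rhs_mean`, `rhs_U` and `rhs_Y` are placeholders for the projected right-hand sides of equations (31)-(33), which are not reproduced here; the re-orthonormalization shown is one standard Cholesky-based choice, and the sketch assumes the deterministic modes stay full rank (the rank-deficient case is discussed below). All names and shapes are illustrative assumptions rather than the exact notation of the scheme.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, cho_factor, cho_solve

def semi_implicit_step(M, A_det, u_mean, U, Y, weights, dt, rhs_mean, rhs_U, rhs_Y):
    """One staggered, semi-implicit DLR step (schematic).

    M, A_det : (n_h, n_h) mass and deterministic stiffness matrices (constant in time)
    u_mean   : (n_h,) mean at t_n;  U : (n_h, R) deterministic modes;  Y : (R, N_hat) stochastic modes
    rhs_*    : callables returning the projected right-hand sides of the mean, deterministic-mode
               and stochastic-mode equations; placeholders for the terms in (31)-(33).
    """
    # The factorization of M + dt*A_det can be computed once and reused at every time step.
    lu = lu_factor(M + dt * A_det)

    # 1) mean update: deterministic part of the operator treated implicitly.
    u_mean_new = lu_solve(lu, M @ u_mean + dt * rhs_mean(u_mean, U, Y))

    # 2) deterministic modes: same factorized matrix, one back-substitution per mode.
    U_new = lu_solve(lu, M @ U + dt * rhs_U(u_mean, U, Y))                  # (n_h, R)

    # 3) stochastic modes: system matrix M_hat + dt*<U_new, U_new>_{L_det}; it is the same
    #    for all collocation points, so all N_hat columns are handled in one solve.
    M_hat = U_new.T @ (M @ U_new)
    K_hat = U_new.T @ (A_det @ U_new)
    chol = cho_factor(M_hat + dt * K_hat)
    Y_new = cho_solve(chol, M_hat @ Y + dt * rhs_Y(u_mean_new, U_new, Y))   # (R, N_hat)

    # 4) re-orthonormalize the stochastic basis in the weighted inner product
    #    while keeping the product U_new @ Y_new unchanged.
    G = np.linalg.cholesky(Y_new @ np.diag(weights) @ Y_new.T)              # Y_new W Y_new^T = G G^T
    Y_new = np.linalg.solve(G, Y_new)
    U_new = U_new @ G
    return u_mean_new, U_new, Y_new
```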
Note that there exists a unique discrete DLR solution for the explicit and semi-implicit version of Algorithm 4.1 also in the rank-deficient case (see Lemma 4.13 below). The existence of solutions for the implicit version remains still an open question. Discrete variational formulation for the full-rank case This subsection will closely follow the geometrical interpretation introduced in Section 3. We will introduce analogous geometrical concepts for the discrete setting, i.e. manifold of R-rank functions, tangent space and orthogonal projection, and will show in Theorem 4.9 that the scheme from Algorithm 4.1 can be written in a (discrete) variational formulation, assuming that the matrixM n+1 stays full-rank. we denote the manifold of all rank R functions with zero mean that belong to the (possibly finite dimensional) space The projection Π h,ρ U Y is defined in the discrete space V h ⊗ L 2 ρ analogously to its continuous version (16). It holds A discrete analogue of Lemma 3.5 holds, i.e. The solution of the proposed numerical scheme (23)-(26) satisfies a discrete variational formulation analogous to the variational formulation (19). To show this, we first present a technical lemma which will be important in deriving the variational formulation as well as in the stability analysis presented in Section 5. Lemma 4.6. Let u n h,ρ , u n+1 h,ρ be the discrete DLR solution at t n , t n+1 , respectively, from the scheme in Algorithm 4.1. Then the zero-mean parts u n, * h,ρ , u n+1, * Proof. 1. The solution u n, * h,ρ can be written as Since 0 , Y n L 2 ρ = 0, using the definition (35) we have 2. The newly computed solution u n+1, * h,ρ can be expressed as Based on (28) Remark 4.7. Note that for any function of the form R is a vector space, it includes any linear combination of u n, * h,ρ and u n+1, * h,ρ . The following lemma is an analogue of Lemma 3.6 and will become useful when we derive the discrete variational formulation. Lemma 4.8. Let u n h,ρ , u n+1 h,ρ be the discrete DLR solutions at times t n , t n+1 as defined in Algorithm 4.1. Then the zero-mean parts u n+1 * h,ρ , u n * h,ρ satisfy Proof. Multiplying (24) by Y n j and summing over j, we obtain Noticing that and taking the weak formulation of (38) Similarly, multiplying (25) byŨ n+1 , and further writing (25) in a weak form in L 2 ρ , we obtain Since taking the weak formulation of (40) Finally, summing equations (39) and (41) results in (37). We now proceed with the discrete variational formulation. Theorem 4.9 (Discrete variational formulation). Let u n h,ρ and u n+1 h,ρ be the discrete DLR solution at times t n , t n+1 , respectively, n = 0, . . . , N − 1, as defined in Algorithm 4.1. Then it holds Proof. Thanks to Lemma 4.6 we have (u n+1 h,ρ − u n h,ρ ) * ∈ TŨ n+1 Y n M h,ρ R , and we can derive and formula (36) gives us Summing (43), (44) and applying Lemma 4.8 results in which is equivalent to the final result (42). In (45) we have employed The preceding theorem applies to a discretization of any kind of the operator L ∈ L 2 ρ (Γ; V ), not necessarily elliptic or linear, as assumed in Section 2, as long as Lemma 3.5 holds. Discrete variational formulation for the rank-deficient case The discrete variational formulation established in the previous section is valid only in the case of the deterministic basisŨ n+1 being linearly independent, since the proof of Theorem 4.9 implicitly involves the inverse ofM n+1 = Ũ n+1 ,Ũ n+1 H . 
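At the matrix level, the projection onto the tangent space at a rank-R point has an explicit, easily implementable form. The sketch below uses the standard dynamical low-rank formula, projection onto the row space of Y plus the H-orthogonal projection of the remainder onto the column space of U, under the simplifying assumptions that the columns of U are orthonormal in the M-weighted (H) inner product and the rows of Y are orthonormal with respect to the quadrature weights. It is meant only to illustrate the structure of the discrete projection used above, not to reproduce its precise definition verbatim.

```python
import numpy as np

def tangent_projection(Z, U, Y, M, weights):
    """Project Z (n_h x N_hat) onto the tangent space of the rank-R manifold at U @ Y.

    Assumes U.T @ M @ U = I_R (H-orthonormal spatial modes) and
            Y @ np.diag(weights) @ Y.T = I_R (orthonormal stochastic modes w.r.t. the weights).
    """
    W = np.diag(weights)

    def proj_U(X):                       # M-orthogonal projection onto the column space of U
        return U @ (U.T @ (M @ X))

    PY_Z = (Z @ W @ Y.T) @ Y             # projection onto the row space of Y (stochastic part)
    return PY_Z + proj_U(Z - PY_Z)       # = Pi_Y Z + Pi_U Z - Pi_U Pi_Y Z

# Consistency check: any tangent element of the form dU @ Y + U @ dY is reproduced
# exactly (up to round-off) by this projection.
```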
In this subsection, we show that a discrete variational formulation can be generalized for the rank-deficient case. When applying the discretization scheme proposed in step 3 of Algorithm 4.1 with a rank-deficient matrixM n+1 , we recall that the solutionỸ n+1 is defined as the solution of (25) minimizing Ỹ n+1 − Y n L 2 ρ . Note that minimizing Ỹ n+1 − Y n L 2 ρ is equivalent to minimizing the norm Ỹ n+1 (ω k ) − Y n (ω k ) R R for every sample point ω k , k = 1, . . . ,N , where · 2 R R = ·, · R R denotes the Euclidean scalar product in R R . In what follows we will exploit the fact that the vector space L 2 ρ is isomorphic to RN . In particular, it holds that (Ỹ n+1 − Y n ) ∈ R R×N , where each column of (Ỹ n+1 − Y n ) is given by (Ỹ n+1 − Y n )(ω k ), k = 1, . . . ,N . With a little abuse of notation, we useŨ n+1 : R R → V h to denote a linear operator which takes real coefficients and returns the corresponding linear combination of the basis functionsŨ n+1 . ByŨ n+1 : V h → R R we denote its dual. Proof. Seeking a contradiction, let us suppose that where P ker(M n+1 ) [v] ∈ R R×N for v ∈ R R×N denotes the column-wise application of ·, · R R -orthogonal projection onto the kernel ofM n+1 . Then, such constructed Z satisfies and solves (25): where in the last step we used that ker(M n+1 ) = ker(Ũ n+1 ). This leads to a contradiction thatỸ n+1 was the solution minimizing Ỹ n+1 − Y n L 2 ρ . When showing the equivalence between the DLR variational formulation (19) and the DLR system of equations (10)- (12) in the continuous setting, the DO condition (9) plays an important role. In an analogous way, the discrete DO condition (property 1 from Lemma 4.2 for the full-rank case) plays an important role when showing the equivalence between the discrete DLR system of equations and the discrete DLR variational formulation. Proof. LetỸ n+1 be a solution of (25) minimizing Ỹ n+1 − Y n L 2 ρ . Thanks to Lemma 4.11 we know that for any v ∈ ker(M n+1 ) ⊥ , the solutionỸ n+1 of equation (25) satisfies then the statement will follow. But for the column space of ⊂ RN being the orthogonal complement to Y n in the scalar product ·, · L 2 ρ . Now the proof is complete. In the following lemma we address the question of existence of a unique solution when applying the explicit and semi-implicit scheme. Proof. We will start with the semi-implicit scheme. By virtue of Lemma 4.3, under the discrete DO condition (47), applying the semi-implicit scheme to equation (25) is equivalent to solving equation (33). We will first focus our attention to equation (33) and show that there exists a unique solution minimizing Ỹ n+1 − Y n L 2 ρ . This solution will satisfy the discrete DO and consequently is a unique minimizing solution of (25). Equation (33) can be rewritten as where Since RHS above lies in the range ofŨ n+1 , which is the same as the range of B, a solution of (49) exists. Moreover, since the matrix B is positive definite on the space ker(B) ⊥ , any solution can be expressed as (Ỹ n+1 − Y n + W ) with W ∈ ker(B) N and a uniqueỸ n+1 ∈ R R×N such that (Ỹ n+1 − Y n ) ∈ ker(B) ⊥ N . The solutionỸ n+1 minimizes each column (Ỹ n+1 − Y n )(ω k ) R R , k = 1, . . . ,N and thus it is the unique solution of (49) that minimizes norm Ỹ n+1 − Y n L 2 ρ . We observe that the established solutionỸ n+1 of equation (49) satisfies the discrete DO condition (47). The argument is analogous to the proof of Lemma 4.12, but instead ofM n+1 here we take B. Therefore, the statement for the semi-implicit scheme follows. 
The explicit case can be shown by following analogous steps with Now we can proceed with showing the discrete variational formulation. It is not generally easy to deal with the notion of a tangent space at a certain point on the manifold in the rank-deficient case. In the following theorem we will, however, show that an analogous discrete variational formulation holds. Given U ∈ (V h ) R and Y ∈ (L 2 ρ,0 ) R , we define the vector space T U Y as It is easy to verify that, analogously to Lemma 4.6, the (possibly rank-deficient) discrete DLR solutions u n h,ρ and u n+1 h,ρ at times t n , t n+1 , as defined in Algorithm 4.1 satisfy Theorem 4.14. Let u n h,ρ and u n+1 h,ρ be the (possibly rank-deficient) discrete DLR solution at times t n , t n+1 , respectively, n = 0, . . . , N − 1, as defined in Algorithm 4.1. Then the following variational formulation holds Proof. First, consider equation (24) with v h =Ũ n+1 j . Summing over j results in (52) Let us proceed with the equation (25): Taking a weak formulation in L 2 ρ,0 results in Concerning equation (24), we proceed as follows: where in the second step we applied Eρ[Ỹ n+1 Y n ] = Id which holds thanks to the discrete DO condition from Lemma 4.12. Summing equation (53) and (54) we obtain The rest of the proof follows the same steps as in the proof of Theorem 4.9, i.e. summing the mean value equation (23) and noting that some terms vanish. Remark 4.15. Thanks to the observation that ker(M n+1 ) = ker(Ũ n+1 ), we can easily see that any discrete solutionỸ n+1 of equation (25) leads to the same discrete DLR solution u n+1 h,ρ =ū n+1 h,ρ +Ũ n+1Ỹ n+1 . Therefore, the result of the preceding theorem as well as the stability properties shown in Section 5 hold for the discrete DLR solution obtained by any of the solutions of equation (25). Reinterpretation as a projector-splitting scheme The proposed Algorithm 4.1 was derived from the DLR system of equations (10)- (12). This subsection is dedicated to showing that this scheme can in fact be formulated as a projector-splitting scheme for the time discretization of the Dual DO approximation of (6). Afterwards, we will continue by showing its connection to the projector-splitting scheme of the first order proposed in [29,30] and further analyzed in [20]. In what follows, we will focus on the evolution of u n, * h,ρ , i.e. the 0-mean part of the discrete DLR solution u n h,ρ . Lemma 4.16. The discretized system of equations (24)-(25) can be equivalently reformulated as Proof. These equations are essentially equations (39) and (41), which are shown to hold in the proof of Lemma 4.8. We recall that from Lemma 3.6, the zero-mean part of the continuous DLR approximation u * = U Y satisfies Comparison to the projection scheme in [29] There are several equivalent DLR formulations. The DO formulation, proposed and applied in [34,35,36], seeks for an approximation of the form u R = U Y with {U j } R j=1 ⊂ V h orthonormal in ·, · H and {Y j } R j=1 ⊂ L 2 ρ linearly independent. The dual DO formulation, on the contrary, keeps the stochastic basis {Y j } R j=1 orthonormal in ·, · L 2 ρ and {U j } R j=1 linearly independent. The double dynamically orthogonal (DDO) or bi-orthogonal formulation searches for an approximation in the form orthonormal in ·, · H , ·, · L 2 ρ , respectively, and S ∈ R R×R a full rank matrix (see e.g. [7,8,23]). In [9,32] it was shown that these formulations are equivalent. 
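The equivalence of the DO, dual DO and DDO parametrizations can also be seen directly at the level of linear algebra: all three factor the same rank-R coefficient matrix, and one passes between them with a QR (or SVD) factorization of the modes. The short sketch below illustrates this with plain Euclidean orthonormality for readability; in the present setting orthonormality is measured in the H and weighted L2 inner products, which amounts to inserting the corresponding weight matrices, and all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n_h, R, N_hat = 40, 3, 25

# Dual DO data: spatial modes U linearly independent, stochastic modes Y with orthonormal rows.
U = rng.standard_normal((n_h, R))
Q_y, _ = np.linalg.qr(rng.standard_normal((N_hat, R)))
Y = Q_y.T                                  # (R, N_hat), Y @ Y.T = I_R

X = U @ Y                                  # the rank-R field that every formulation represents

# Dual DO -> DDO (bi-orthogonal): factor U = Q @ S with Q orthonormal, so X = Q @ S @ Y.
Q, S = np.linalg.qr(U)

# Dual DO -> DO: spatial modes orthonormal, stochastic modes merely linearly independent.
U_do, Y_do = Q, S @ Y                      # X = U_do @ Y_do

assert np.allclose(Q @ S @ Y, X) and np.allclose(U_do @ Y_do, X)
```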
In our work we consider the dual DO formulation with an isolated mean so that the stochastic basis functions are centered. A first order projector-splitting scheme introduced in [29,30] and further analyzed in [20] is a time integration scheme successfully used for the integration of dynamical low rank approximation in the DDO formulation. This subsection provides a detailed look into the comparison of the Algorithm 4.1 and the discretization scheme from [29,30]. We will see that, if the solution is full rank, these schemes are in fact equivalent. We will adapt the algorithm from [29] to approximate the DLR solution in the DDO form with an isolated mean, i.e. Having an R-rank solution u n h,ρ , the basic first-order scheme from [29] requires the knowledge of the solution u n+1 h,ρ , which is used in evaluating the term A = u n+1 h,ρ − u n h,ρ . To deal with differential equations where u n+1 h,ρ is a-priori unknown, we will consider a general scheme where where f n,n+1 and L(u n h,ρ , u n+1 h,ρ ) can be any of the explicit, implicit or semiimplicit discretizations detailed in Section 4.1. Adopting the notation from [29], the splitting scheme from [29,30] for a DDO approximation of (6) results in the following 6-step algorithm. 1. Compute the mean valueû n+1 such that 2. Solve for K 1 such that SetS The new solutionû n+1 h,ρ is then defined aŝ Now, let us compare the previous steps to Algorithm 4.1. We can easily observe thatû n+1 =ū n+1 . Since Y n = V 0 , we can see that equation (24) is equivalent to step 1 with U n = U 0 S 0 , i.e. K 1 =Ũ n+1 . Further, we havẽ Equation (25) can be reformulated as Note that the expression in brackets in the first term on the right hand side is exactly the transpose ofS 0 from step 3: We conclude that the scheme in Algorithm 4.1 and the scheme in Algorithm 4.17 coincide in exact arithmetic, provided the matrix S 1 is invertible. However, the numerical behavior of the two schemes differs when S 1 is singular or close to singular. ForM n+1 close to singular, solving equation (25) might lead to numerical instabilities. This problem seems to be avoided in the projector-splitting scheme from [29,30], as no matrix inversion is involved. Such ill conditioning is however hidden in performing step 3 of Algorithm 4.17, since the QR or SVD decomposition can become unstable for ill-conditioned matrices (see [17, chap. 5]). In the case of a rank deficient basis {Ũ n+1 }, Algorithm 4.1 updates the stochastic basis by solving equation (25) in a least square sense while minimizing the norm Ỹ n+1 − Y n L 2 ρ . The previous subsection showed that such solution satisfies the discrete variational formulation which plays a crucial role in stability estimation (see Section 5.3). On the other hand, Algorithm 4.17 relies on the somehow arbitrary completion of the basis {U 1 } in the step 3. In presence of rank deficiency, the two algorithms can deliver different solutions (see Section 7.3 for a numerical comparison). Stability estimates The stability of the solution of problems similar to (6) are well analyzed (see e.g. [14]). A natural question is to what extent constraining the dynamics to the low rank manifold influences the stability properties. In Section 5.1, we will first recall some stability properties of the true solution u true of problem (6). Then, in Section 5.2 we will see that these properties hold for the continuous DLR solution as well. It turns out that our discretization schemes satisfy analogous stability properties, as we will see in Section 5.3. 
In particular, we will show that the implicit and semi-implicit version are unconditionally stable under some mild conditions on the size of the randomness in the operator. We will state two types of estimates: the first one holds for an operator L as described in Section 2 and a second one additionally assuming the operator L to be symmetric. Note that in the second case the bilinear coercive form ·, · L,ρ is a scalar product on L 2 ρ (Γ; V ). In the rest of this section we will assume that a solution of problem (6), a continuous DLR solution and a discrete DLR solution exist. Stability of the continuous problem We state here some standard stability estimates concerning the solution u true of problem (6). Stability of the continuous DLR solution Constraining the dynamics to the R-rank manifold does not destroy the stability properties from Proposition 5.1. Proof. Part 1: with 0, Y i L 2 ρ = 0, we can take u as a test function in the variational formulation (19). The rest of the proof follows the same steps as in the proof of Proposition 5.1. . Asu ∈ V we can consideru as a test function in the variational formulation (19) and arrive at the sought result. Part 3 and 4 are obtained analogously. Stabilty of the discrete DLR solution Now we proceed with showing stability properties of the fully discretized DLR system from Algorithm 4.1 for the three different operator evaluation terms corresponding to implicit Euler, explicit Euler and semi-implicit scheme. For each of them we will establish boundedness of norms and a decrease of norms for the case of zero forcing term f . The following simple lemma will be repeatedly used throughout. Implicit Euler scheme Applying an implicit operator evaluation, i.e. L(u n h,ρ , u n+1 h,ρ ) = L(u n+1 h,ρ ) results in a discretization scheme with the following stability properties. for any time and space discretization parameters t, h > 0 with C L , C P > 0 the coercivity and continuous embedding constant defined in (4), (3), respectively. In particular, for f = 0 and n = 0, . . . , N − 1 it holds: h,ρ L,ρ ≤ u n h,ρ L,ρ . Proof. Thanks to Theorem 4.9, we know that the discretized DLR system of equations with implicit operator evaluation can be written in a variational formulation as n = 0, . . . , N − 1. 1. Based on Lemma 4.6 we take v h = u n+1 h,ρ as a test function in the variational formulation (62). Using Lemma 5.3 results in Using the coercivity condition (4) and summing over n = 0, . . . , N − 1 gives us the sought result. Explicit Euler scheme Concerning the explicit Euler scheme (see subsection 4.1), which applies the time discretization L(u n h,ρ , u n+1 h,ρ ) = L(u n h,ρ ), the following stability result holds. Theorem 5.5. Let {u n h,ρ } N n=0 be the discrete DLR solution as defined in Algorithm 4.1 with L(u n h,ρ , u n+1 h,ρ ) = L(u n h,ρ ). Then the following estimates hold: 2. If L is a symmetric operator we have Here C L , C B , C P > 0 are the coercivity, continuity and continuous embedding constants defined in (4), (5), (3), respectively and C I is the inverse inequality constant introduced in (22). For f = 0 and n = 0, . . . , N − 1 it holds: Proof. Thanks to the Theorem 4.9 we can rewrite the system of equations in the variational formulation 1. Based on Lemma 4.6 we take v h = u n+1 h,ρ as a test function in the variational formulation (65) and using Lemma 5.3 results in We further proceed by estimating where, in the third step, we used the inequality which holds based on assumption (22). 
Combining the terms, using the condition (63) and summing over n = 0, . . . , N − 1 finishes the proof. 2. Lemma 4.6 enables us to take u n+1 h,ρ − u n h,ρ as a test function in (65). This results in Using Lemma 5.3 we obtain where, in the second step, we used the assumption (22) 4. The proof of the forth property follows the same steps as the proof of part 2. Since there is no need to use the Young's inequality in (67), the condition on t/h 2p is weakened: As for the estimate in the · H,L 2 ρ -norm we can derive where in the last inequality we applied u n+1 h,ρ L,ρ ≤ u n h,ρ L,ρ for t h 2p ≤ 2 C 2 I C B . Semi-implicit scheme This subsection is dedicated to analyzing the semi-implicit scheme introduced in subsection 4.1 which applies the discretization L(u n h,ρ , u n+1 h,ρ ) = L det (u n+1 h,ρ ) + L stoch (u n h,ρ ). Apart from the inverse inequality (22) we will be using two additional inequalities. Let us assume there exists a constant C det > 0 such that This constant plays an important role in the stability estimation as it quantifies the extent to which the operator is evaluated implicitly. Its significance is summarized in Theorem 5.6. In addition we introduce a constant C stoch that bounds the stochasticity of the operator Theorem 5.6. Let {u n h,ρ } N n=0 be the discrete DLR solution as defined in Algorithm 4.1 with L(u n h,ρ , u n+1 h,ρ ) = L det (u n+1 h,ρ ) + L stoch (u n h,ρ ) with L det and L stoch satisfying (68) and (69), respectively. Then it holds 1. 2. If L is a symmetric operator we have Here C L , C B , C P , C I > 0 are the coercivity, continuity, continuous embedding and inverse inequality constants defined in (4), with t, h satisfying a weakened condition Proof. The variational formulation of the discrete DLR problem from Algorithm 4.1 reads in this case 1. We will consider v h = u n+1 h,ρ as a test function in (74) and we derive Combining the terms and summing over n = 0, . . . , N − 1 finishes the proof. Explicit Euler scheme Applying the explicit Euler scheme in the operator evaluation for a random heat equation, i.e. L(u n h,ρ , u n+1 h,ρ ) = −∇ · (a∇u n h,ρ ), results in the following system of equations The stability properties stated in Theorem 5.5 part 2 and 4 hold under the condition Note that the condition (68) is automatically satisfied for a random heat equation, since we have and inf x∈D,ξ∈Γā a ≥ amin amax > 0. The system of equations (31)- (33) can be rewritten as For a further specified diffusion coefficient we can state the following stability properties. we have the stability properties (71) and (72) for any t, h. Proof. The conditionā(x) ≥ a stoch (x, ξ) for every x ∈ D, ξ ∈ Γ implies i.e. C det ≥ inf x∈D,ξ∈Γā a ≥ 1 2 . Together with Theorem 5.6 we conclude the result. Proposition 6.1 tells us that applying a semi-implicit scheme to solve a heat equation with diffusion coefficient as described in (82) results in an unconditionally stable scheme. This result as well as some of the previous estimates will be numerically verified in the following section. Numerical results This section is dedicated to numerically study the stability estimates derived for a discrete DLR approximation in Section 5. In particular, we will be concerned with a random heat equation, as introduced in (78), with zero forcing term and diffusion coefficient of the form (82). We will look at the behavior of suitable norms of the solutions of the discretization schemes introduced in Section 4.1. 
We will as well look at a discretization scheme in which the projection is performed explicitly to see how important it is to project on the new computed basisŨ n+1 in (25). As a last result we provide a comparison with the projector-splitting scheme from [29]. with λ the Lebesgue measure restricted to the Borel σ-algebra B([−1, 1]). In this case the conditions (77), (29) and (68) are satisfied with a min > 0.04, C det > 1 2 . The initial condition is chosen as u 0 (x, ξ) = 10 sin(πx 1 ) sin(πx 2 ) + 2 sin(2πx 1 ) sin(2πx 2 )ξ 1 + 2 sin(4πx 1 ) sin(4πx 2 )ξ 2 + 2 sin(6πx 1 ) sin(6πx 2 )ξ 2 1 . = 10 sin(πx 1 ) sin(πx 2 ) + 4 3 sin(6πx 1 ) sin(6πx 2 ) + 2 sin(2πx 1 ) sin(2πx 2 )ξ 1 The spatial discretization is performed by the finite element (FE) method with P 1 finite elements over a uniform mesh. The dimension of the corresponding FE space is determined by h-the element size. For this type of spatial discretization we have the inverse inequality (79): Concerning the stochastic discretization we will consider a tensor grid quadrature with Gauss-Legendre points for the case of a low-dimensional stochastic space M = 2 and a Monte-Carlo quadrature for the case M = 10. The time integration implements the explicit scheme and the semi-implicit scheme described in subsection 4.1. We will consider the forcing term f = 0, i.e. a dissipative problem and time T such that the energy norm ( · L,ρ ) of the solution attains a value smaller than 10 −10 . Our simulations were performed using the Fenics library [2]. Explicit scheme Since f = 0, the result in Theorem 5.5 predicts a decay of the norm of the solution Figure 1 shows the behavior of the energy norm ( · L,ρ ) and the L 2 norm ( · H,L 2 ρ ) in 3 different scenarios: in the first scenario we set h 1 = 0.142, t 1 = 0.0018, i.e. the condition t 1 /h 2 1 ≤ K is satisfied and observe that both the energy norm and the L 2 norm of the solution decrease in time (see Figure 1(a)); in the second scenario, we halved the element size h 2 = h 1 /2 and divided by 4 the time step t 2 = t 1 /4 so that the condition (85) is still satisfied. The norms again decreased in time (Figure 1(b)); in the third scenario we violated the condition (85) by setting h 3 = h 1 /2 and t 3 = t 1 /3. After a certain time the norms exploded ( Figure 1(c)). To numerically demonstrate the sharpness of the condition (85), we ran the simulation with 72 different pairs of discretization parameters h, t. The results are shown in Figure 2, where we depict whether the energy norm at time T is bellow 10 −10 , in which case the norm was consistently decreasing; or more than 10 4 , in which case the solution blew up. We observe that a stable t has to be chosen to satisfy t ≤ Kh 2 , which confirms the sharpness of our theoretical derivations. M = 10 In our second example we will consider a higher-dimensional problem: M = 10 for which we use a standard Monte-Carlo technique with 50 points. We observe a very similar behavior as in the small dimensional case. Figure 3 shows that satisfying the condition t 1 /h 2 1 ≤ K with K = 0.085 results in a stable scheme Semi-implicit scheme We proceed with the same test-case with M = 10, same spatial and stochastic discretization, i.e. Monte-Carlo method with 50 samples and employ a semiimplicit scheme in the operator evaluation. Since the diffusion coefficient considered is of the form (82) and f = 0, Theorem 5.6 predicts u n+1 h,ρ L,ρ ≤ u n h,ρ L,ρ ∀h, t, ∀n = 0, . . . , N − 1. We set the spatial discretization h = 0.142 and vary the time step t. 
We observe a stable behavior no matter what t is used, which confirms the theoretical result (see Figure 4). We report that the results for M = 2 with 81 Gauss-Legendre collocation points exhibited a similar unconditionally-stable behavior. Explicit projection The following results give an insight into the importance of performing the projection in a 'Gauss-Seidel' way, i.e. projection on the stochastic basis is done explicitly, Y n kept from the previous time step, while the projection on the deterministic basis is done implicitly, i.e. we use the new computedŨ n+1 (see Algorithm 4.1 for more details). For comparison we consider a fully explicit projection, i.e. Y n as the stochastic basis and U n as the deterministic basis. We use a semi-implicit scheme to treat the operator evaluation term as described in subsection 4.1. As shown in Figure 5, in all 3 cases the solution reaches the zero steady state, however, not in a monotonous way. Figure 5: Behavior of the energy norm ( · L,ρ ) for 3 different time steps when treating the projection in an explicit way (orange) and in a semi-implicit way (blue). We used the semi-implicit scheme for the operator evaluation term. We see that, as opposed to a semi-implicit projection, with an explicit projection we do not obtain an unconditional norm decrease Conclusions In this work we proposed and analyzed three types of discretization schemes, namely explicit, implicit and semi-implicit, to obtain a numerical solution of the DLR system of evolution equations for the deterministic and stochastic modes. Such discrete DLR solution was obtained by projecting the discretized dynamics on the tangent space of the low-rank manifold at an intermediate point. This point was built using the new-computed deterministic modes and old stochastic modes. We found this projection property to be useful when investigating stability of the DLR solution. The solution obtained by the implicit scheme remains unconditionally bounded by the data in suitable norms. Concerning the explicit and semi-implicit schemes, we derived stability conditions on the time step, independent of the smallest singular value, under which the solution remains bounded. Remarkably, applying the proposed semi-implicit scheme to a random heat equation with diffusion coefficient affine with respect to random variables results in a scheme unconditionally stable, with the same computational complexity as the explicit scheme. Our theoretical derivations are supported by numerical tests applied to a random heat equation with zero forcing term. In the semi-implicit case, we observed that the norm of the solution consistently decreases for every time-step considered. In the explicit case, our numerical results suggest that our theoretical stability condition on the time step is in fact sharp. Our future work includes investigating if the proposed approach can be extended to higher-order projector-splitting integrators, or used to show stability properties for other types of equations.
Future directions in precognition research: more research can bridge the gap between skeptics and proponents
INTRODUCTION Although claims of precognition have been prevalent across human history, it is no surprise that these assertions have been met with strong skepticism. Precognition, the ability to obtain information about a future event, unknowable through inference alone, before the event actually occurs, conflicts with the fundamental subjective experience of time asymmetrically flowing from past to future, brings into question the notion of free will, and contends with steadfast notions of cause and effect. Despite these reasons for skepticism, researchers have pursued this topic, and a large database of studies conducted under controlled laboratory conditions now exist. This work roughly spans from the 1930's (e.g., Rhine, 1938) up to this day (Bem, 2011;Mossbridge et al., 2014;Rabeyron, 2014). The accumulated evidence includes significant meta-analyses of forced-choice guessing experiments (Honorton and Ferrari, 1989), presentiment experiments (Mossbridge et al., 2012), and recent replications from Bem (2011, discussed below;Bem et al., 2014). Perhaps most central to the recent debate regarding the existence of precognition is work by Bem (2011).
Bem (2011) time-reversed several classic psychology effects (e.g., studying after instead of before a test; being primed after, instead of before responding) and found evidence across nine experiments supporting precognition. Given the sound methodology and publication at a high-impact mainstream psychology journal, Journal of Personality and Social Psychology, this work has prompted the attention of psychologists; and, not surprisingly, the response has been skeptical (Rouder and Morey, 2011;Wagenmakers et al., 2011). While we acknowledge skepticism and close scrutiny is vital in reaching consensus on this topic, given the equivocation surrounding the results, we propose that more research is needed. In particular, we suggest that applied research designs that allow for the prediction of meaningful events ahead of time can move this debate forward. Since it is not obvious how experiments that do not require explicit "guessing" of future events could be used for this goal, we give a general overview of two methodologies designed toward this aim. PHYSICAL IMPLAUSIBILITY It is not unexpected that psychologists are most skeptical of precognition (Wagner and Monnet, 1979). This is likely due to their knowledge of the many illusions and biases that influence perception and memory. However, putting these cognitive biases aside, this work is often dismissed out of hand under the assumption that precognition would require overturning basic and essential physical and psychological tenets. Schwarzkopf (2014) illustrates this position: ". . . the seismic nature of these claims cannot be overstated: future events influencing the past breaks the second law of thermodynamics. . . It also completely undermines over a century of experimental research based on the assumption that causes precede effects" Some clarification is needed here. From a physics perspective, except for several processes studied in high-energy physics (such as B meson decay), non-thermal physics is time-symmetric, perhaps allowing the possibility of precognitive effects. The formalism of time symmetric physics has been used, for example, in the Wheeler-Feynman absorber theory of radiation (Wheeler and Feynman, 1945) as well as in the transactional interpretation of quantum mechanics (Cramer, 1986), in which quantum wavefunction collapse is described as being due to an interaction between advanced waves (traveling backwards-in-time) and retarded waves (traveling forwards-in-time). With regards to precognition, Bierman (2008) has proposed that coherent conditions present in the human brain allow the fundamental time symmetry of physics to manifest itself. Some quantum mechanical experiments can be interpreted as showing retrocausal influence where a decision at a future time seems to affect a past time. One example is Wheeler's delayedchoice experiment in which the way a photon travels through an interferometer (wave-like or particle-like) appears to be affected by a measurement decision made at a later time (Wheeler, 1984;Jacques et al., 2007). However, information transfer into the past (retrocausal signaling), as opposed to influence without information transfer, remains controversial since it has not yet been demonstrated experimentally. That said, there is no physical law which precludes retrocausal information transfer. There has been some effort put into experimental realization of retrocausal signaling. 
Cramer proposed that standard quantum mechanics allows the construction of a retrocausal signaling machine using quantum optical interferometry (Cramer, 2007). Though Cramer's work has reached an impasse (Cramer, 2014), an approach of using entangled systems for retrocausal communication may reveal a physical explanation for precognition. Lastly, it is worth noting, that ultimately whether any given theory can accommodate precognition or not is irrelevant; what is relevant are the data. RELIABILITY CONCERNS Although it appears premature to rule out precognition from a physics standpoint, there have been concerns regarding the reliability of precognitive effects. In essence, the question boils down to whether there are in fact small, yet real, precognitive effects that are hard to pin down and require further study to isolate, or, whether the evidence for precognition is based on false-positives emerging due to biases in the research process. For a recent overview of these issues in psychology see the November, 2012 issue of Perspectives on Psychological Science. Interestingly, a recent commentary (Jolij, 2014) notes the similarity between precognitive effects and those in social priming research. Indeed, both research areas report small effect sizes, replication difficulty, and specific "boundary" conditions (covariates) that moderate the effect (Wilson, 2013). Although researchers point toward metaanalyses to bolster their position, metaanalyses are also susceptible to bias and rarely lead to headway in controversial areas (Ferguson, 2014). The resemblance between precognitive effects and those seen in the mainstream psychological literature has been used to leverage support for precognition (e.g., Cardeña, 2014); however, the difficulties of replicating other paradigms in psychology seems a dubious source of solace for the challenge of replicating precognition findings. Moreover, even if precognition results were robustly replicated as some meta-analyses have suggested, there is always the concern that there is some artifact driving the effect. As such, we suggest new directions for future research in precognition; one that can simultaneously address concerns about the robustness of the effects and the possibility that they are driven by unrecognized artifacts. FUTURE DIRECTIONS IN PRECOGNITION RESEARCH What would provide the most compelling evidence for skeptics? Ultimately, we realize that the most convincing demonstration would be to show tangible effects applied in real-world settings. If a paradigm can make accurate predictions about events that people consider important and are incapable of predicting using standard means, then the significance of the paradigm becomes selfevident. Perhaps most compelling would be if an experiment could be devised to predict games of chance and/or the whether it will be a good or bad day on the stock market. Although a few reports exist in the literature of precognitive applications, in particular those that utilize associative remote viewing (predicting silver future: Puthoff, 1984;stock market;Smith et al., 2014), there has not been a single replicable methdology that has translated into consistent winnings in games of chance. Below we give a brief overview of two experiments designed to predict the outcome of random 1 binary events in realtime (specifically, the outcome of a roulette spin, black vs. red, excluding green; see Figure 1). The left side of Figure 1 presents a general overview of one approach. 
This experiment is based on work designed to examine whether extended future practice in some domain can extend backwards in time to influence prior performance. The original experiment designed toward this aim used a novel 2-phase Go-NoGo experiment (Franklin, 2007). In phase 1 of the experiment, all participants complete an identical Go-NoGo task in which individual shapes are presented for a second, one at a time, on a computer screen. Each stimulus either requires a response ("Go") or not ("NoGo"). Participants are told to respond (using the spacebar) to shapes A and B and withhold responses to 1 Although there is an important distinction between truly random vs. pseudorandom selection, since any genuine precognitive effect of future stimuli on past behavior/physiology should be independent of selection method, we do not distinguish between these for the purposes of this overview. shapes C and D. In phase 2, participants are randomly divided into 2 groups with each group responding exclusively to a single shape (A or B). The rationale is akin to the subtraction method/additive factors methodology (Sternberg, 1969). If phase 1 performance is influenced by only past experience, then there should be no difference in reaction times or accuracy based on future condition assignment. If, however, phase 1 performance is influenced not only by past experience, but future experience as well, systematic differences in performance based on phase 2 condition assignment should emerge. As seen in Figure 1B, by mapping shapes A and B to outcomes of the roulette spin (RED and BLACK), it should be possible (assuming a genuine precognitive effect) to use phase 1 performance to predict the roulette spin outcome before the wheel is spun. Next we describe an experiment using EEG to detect predictive anticipatory activity (PAA; Mossbridge et al., 2014); also known as presentiment, the finding that various physiological measures of arousal are higher preceding the onset of emotionally charged vs. neutral pictures that are randomly presented (Bierman and Radin, 1997;Radin, 1997;Bierman and Scholte, 2002;Spottiswoode and May, 2003;Mossbridge et al., 2012). The specific methodology below extends work reported in Radin (2011), in which the pre-stimulus EEG activity of experienced meditators was found to differ significantly in response to light flashes and auditory tones. As seen in Figure 1, by mapping the light flash and auditory tone to a binary target (RED vs. BLACK roulette spin) and by evaluating baseline and pre-stimulus EEG potentials in realtime, it should be possible to predict the state of a future random target, allowing above-chance retrocausal communication. Similar to the first experiment design, the results of the prediction can be compared against chance (50%) with an exact binomial test. Currently, pilot testing with this basic design is underway, along with additional testing to assess whether a stimulus (flash vs. tone) triggered by the appropriate symmetric pre-stimulus response (a "neurofeedback" condition; e.g., flash delivered when occipital EEG increases) can condition response patterns in anticipation to random stimuli determined by FIGURE 1 | The left side displays the experimental design of two-phase Go-NoGo precognition task: (A) 4 random polygons are displayed individually on screen for 1 s at a time. Shape A is (arbitrarilly) associated with RED, and Shape B is associated with BLACK. 
During phase 1 all participants are told to press the spacebar only when shape A and B appear (the "Go" shapes, colored green), and withhold responses to shapes C and D while these responses and reaction times are recorded. In phase 2, particpants only respond to one "Go" shape. As seen in (B) the phase 2 shape is determined by a roulette spin outcome 2 . As such, the precognitive influence of phase 2 practice on phase 1 performance (e.g., improved detection of the shape practiced in the future) would allow for a real-time prediction of the future practice shape, and hence the future roulette spin outcome. On the right, is an overview of the experimental design of the "applied" EEG presentiment experiment: (C) Short duration visual or auditory stimuli are randomly presented to participants (equal probability). For the purposes of roulette spin prediction, each stimulus type is arbitrally associated with an outcome (Visual-RED, Auditory-BLACK) (D) EEG is continuously recorded from occipital electrodes (O1/O2). Prior to assigning a stimulus, a prediction is made based on a comparsion of the pre-stimlus interval to the baseline. Specfically, if voltage is positive relative to baseline, predict VISUAL (bet RED); if voltage is negative relative to baseline, predict AUDITORY (bet BLACK). roulette spin; allowing for a retrocausal Brain Computer Interface (BCI). The design presented in Figure 1 has the benefit of more protection against anticipation/learning strategies (there is only one future event). Also, extended exposure to the future stimulus may strengthen the effect and allow for more time between the prediction, bet and outcome. Although the EEG experiment relies on fewer data points for each prediction, this method could lead to BCI applications and be more powerful due to the large number of trials collected within and across participants. Altogether, there appears to be no inherent confound in either design given sufficient sample sizei.e., we know of no conventional confound that could lead to consistent above chance prediction in real time of a roulette spin. As such, both designs are worth exploring in future research. FINAL THOUGHTS Despite the accumulated data, and recent positive findings in the literature, significant controversy remains regarding the interpretation of the evidence for the existence of precognition. Proponents find the combined results as compelling evidence in support of precognition, with similar (small) effect sizes to those reported throughout the psychological literature. Skeptics, however, question potential methodological and/or analytical confounds in those studies, as well as the physical plausibility of precognition. Both, however, agree regarding the profound implications if these bold claims are true. We suggest that although the current state of evidence does not quite merit proponents' strong claim of having demonstrated replicable precognition in the laboratory, the accumulated experimental evidence, combined with advances in theoretical physics, warrant further research. We believe the most effective way forward is through the development of paradigms that use software in real-time to predict meaningful future outcomes before they occur. As others have noted (Mossbridge et al., 2014) a new technology that uses behavior and/or physiology to consistently predict random future events above chance would certainly be a "game-changer."
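Whichever front end is used (Go/NoGo response times or pre-stimulus EEG), both proposed designs ultimately reduce to a series of binary predictions scored against a 50% chance baseline with an exact binomial test, as described above. A minimal sketch of that evaluation step is given below; the trial counts are invented purely for illustration, and a two-sided test would simply change the `alternative` argument.

```python
from scipy.stats import binomtest   # SciPy >= 1.7 (older versions expose scipy.stats.binom_test)

n_trials = 200    # number of roulette-spin predictions (hypothetical)
n_correct = 117   # number of correct RED/BLACK calls (hypothetical)

# Exact binomial test of the hit rate against chance performance p = 0.5.
result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"hit rate = {n_correct / n_trials:.3f}, one-sided p = {result.pvalue:.4f}")
```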
Options for Online Undergraduate Courses in Biology at American Colleges and Universities This analysis of online offerings in biology indicates that offerings at 2-year public colleges are common, while 4-year public and private institutions are lagging. Biology courses commonly offered online are general education and healthcare-related courses with limited options for biology majors. Ideas to increase biology online offerings are provided. INTRODUCTION Online education is transforming the landscape of American higher education. From 2000 to 2008, the percentage of students taking at least one online course increased from 8 to 20% (Radford, 2011), and by 2012, this statistic had increased to 33.5% (Allen and Seaman, 2013). College and university faculty and administrators project continued growth in online offerings into the future (Kim and Bonk, 2006;Allen and Seaman, 2014). Several factors appear to be driving this growth. Online courses are popular with college and university administrators, because they can decrease operating costs. For example, an institution lacking funds to build more classrooms and/or parking facilities to accommodate growing enrollments can meet the demand with online courses (Howell et al., 2003;Dziuban et al., 2004;Mayadas et al., 2009;Brown, 2011). Online learning can also reduce the cost of running a course (Twigg, 2003;Vaughan, 2007). Additionally, since the current supply of online courses is a diverse market, and students are shopping around for courses offered by thousands of institutions (Howell et al., 2003), increasing online course and program offerings can open an institution up to new student populations and increase enrollments (Moloney and Oakley, 2010;Brown, 2011). Conversely, not offering online courses could drive students into other institutions' online courses, thereby decreasing an institution's enrollment (Howell et al., 2003). Finally, strong student demand appears to be driving growth for online education. According to Garrett (2007), 53% of people who claimed they were interested in pursuing postsecondary education in the next 3 years indicated their preferred mode of delivery would be totally online or an equal balance between online and on-campus instruction. It is important to meet student demands for online courses. Online learning is more convenient for some students and makes accessing higher education a possibility for others, as it creates educational opportunities that are free of time and geographic constraints (Geith and Vignare, 2008). Additionally, the typical online student population includes higher than average percentages of nontraditional students, women, and minorities (Radford et al., 2015). Thus, online offerings help institutions attract and educate a more diverse student population. Not all types of academic institutions are embracing online education equally. Currently, students attending public institutions are more than twice as likely as students at private colleges to be taking some of their courses online (Ginder and Stearns, 2014). Among the private colleges, online enrollments increased dramatically at nonprofit institutions while declining sharply at for-profit institutions from 2012 to 2013 (Allen and Seaman, 2015). This suggests that future demand for online courses will be met by public institutions and, increasingly, by private nonprofit institutions. 
There also appears to be a positive relationship between the size of an institution and the likelihood that it offers online courses (Parasad and Lewis, 2008). While nearly all institutions with more than 15,000 students offer online courses, approximately one-third of those with fewer than 1500 students do not offer any online courses (Allen and Seaman, 2014). Growth of online offerings has not been equal in all academic disciplines. Science, technology, engineering, and mathematics (STEM) students appear to have fewer online course and degree options than those studying other disciplines. During the 2007-2008 academic year, students enrolled in natural science, mathematics, and agriculture programs of study were 30% less likely to be taking an online course and 75% less likely to be enrolled in an online degree program than their peers in other disciplines (Radford, 2011). In 2007, online programs in engineering were only offered at 16% of American colleges and universities, whereas programs in psychology and business were nearly twice as common (Allen and Seaman, 2008). Given high student demand for online learning and unique demographics of online students, national efforts to increase the number of STEM graduates (National Academies of Science, Engineering, and Medicine, 2007) and promote diversity within STEM disciplines (National Academies of Science, Engineering, and Medicine, 2011) may benefit from increased online STEM offerings. Laboratories are a common barrier to offering STEM classes online (Kennepohl and Shaw, 2010;Jeschofnig and Jeschofnig, 2011). Several alternatives exist to overcome this obstacle, including: online virtual or simulated laboratories; laboratory videos; and hands-on distance activities such as kitchen laboratories, in which students set up and conduct experiments at home using household items, or laboratory kit activities, which are mailed to students and include equipment, materials, and protocols that more closely mirror traditional laboratory experiences. Each of these options has drawbacks. Although virtual laboratories can be useful in many educational contexts, they do not provide students with the same tactile experiences and opportunities to learn discipline-specific techniques and operate related equipment. Furthermore, organizations such as the American Chemical Society (ACS) and the National Science Teacher's Association (NSTA) do not consider virtual laborato-ries equivalent to traditional laboratory experiences (NSTA, 2007;ACS, 2009ACS, , 2015. Kitchen laboratories and laboratory kits may cost students more than the laboratory fees they would pay in face-to-face courses and introduce safety and liability issues (Jeschofnig and Jeschofnig, 2011). Additionally, some argue that, due to equipment and materials cost and safety limitations, many of the hands-on distance laboratory activities lack the rigor of traditional experiments and can be useful only for illustrating certain processes (Reeves and Kimbrough, 2004;Lyall and Patti, 2010;Jeschofnig and Jeschofnig, 2011). Students have noted that distance hands-on activities take more time to complete (Lyall and Patti, 2010), which may be due to the lack of student-to-student interaction and the absence of an instructor for guidance (Kennepohl, 2007). Hybrid courses offer a solution in which students can fulfill the laboratory requirement of science courses face-to-face while engaging with other course content online. 
It is unclear, however, how frequently this option is available to science students. Hybrid courses, those that include between 30 and 79% online delivery, are not as common as fully online courses, in which more than 80% of the content is delivered online (Allen and Seaman, 2008); while 45.9% of undergraduate institutions offered at least one hybrid course, 55.3% offered at least one fully online course in 2004 (Allen et al., 2007). Hybrid courses may be less appealing to administrators and students, because their face-to-face class requirements may limit student enrollment. One of the major goals of my research was to document online course options in biology, as this information is not available. A search of Peterson's Online Schools database revealed that only 15 American institutions offered online undergraduate degrees in biology (Peterson's, 2015). Although online biology program options are limited, a higher than average percent of students studying in healthcare fields take online courses (Radford, 2011). Thus, healthcare program prerequisites or requirements may be a significant portion of the online offerings in biology, although this has not been documented. A distinction among biology courses offered at many colleges and universities is whether those courses are geared toward nonmajors who need to fulfill general education requirements or students majoring in biology. Nonmajors biology courses may be the only academic exposure a student has to a scientific discipline and, as a result, are typically more focused on promoting general scientific literacy and highlighting the social relevance and application of scientific knowledge (Sundberg and Dini, 1993;Wright, 2005). Appealing courses offered primarily for nonmajors can also help biology departments increase their student contact hours and can inspire students to declare the major (Klymkowsky, 2005). In contrast, courses offered for biology majors are generally part of an extended sequence of study within the discipline and include greater depth of coverage of discipline-specific content that prepares students for advanced study in biology (Sundberg and Dini, 1993;Klymkowsky, 2005). As a result of the differing emphases and goals of biology courses geared toward majors and nonmajors, their likelihoods of being offered online may differ, although this too has yet to be studied. Knowing the current landscape of online offerings in biology can illustrate 1) how biology online offerings differ from the rates of online offerings in general; 2) which types of courses biology departments are having success offering; and 3) deficiencies in biology online offerings that may be barriers to student access. My goal was to create a baseline understanding of our current online offerings in biology at American colleges and universities. Specifically, I aimed to 1. Document how the diversity and availability of online biology courses differ at different kinds of American colleges and universities, including 2-year public, 4-year public, and 4-year private institutions; 2. Document which kinds (fully online vs. hybrid, majors vs. nonmajors, laboratory vs. nonlaboratory) of undergraduate biology courses American colleges and universities are offering online; and 3. Make recommendations for growth in future online offerings in biology. 
When searching the course schedule for an institution, I noted the total number of unique undergraduate biology courses and number and titles of each unique undergraduate course offered online or hybrid in Fall of 2015 or Spring of 2016. When an institution listed lectures and corequisite laboratories separately, I counted these as one course. Aligned with Allen and Seaman's (2008) definitions of online versus hybrid courses described in the Introduction, I denoted online courses as those that were described as such and did not have regular class times and rooms. Hybrid courses were those that were noted as such or that had one component of the course listed as online but other components (e.g., laboratory sections) only offered face-to-face. I also denoted as hybrid courses those that were listed as online courses but had reduced regular meetings times compared with face-to-face courses. Additionally, I counted the total number of undergraduate biology sections offered and the number of biology undergraduate sections that were offered fully online or hybrid online, counting laboratory and lecture sections separately. Data Collection I documented additional information about the undergraduate courses that were being offered online or as hybrid-online courses. By reading the descriptions in the course catalogue, I determined whether the courses included laboratory components or were lecture-only credit hours. Additionally, I determined whether the courses were intended for biology majors or nonmajors. If the catalogue listing indicated the course was intended for biology majors, if it was listed as part of the institution's biology major, or if it required the institution's introductory biology course(s) for biology majors as a prerequisite, I counted the course as one for biology majors. Because they are often required as prerequisites for nursing programs and as part of healthcare-related degrees and certificates, I also deemed courses with variations on the following names as healthcare-related courses: human anatomy, human physiology, human anatomy and physiology, medical terminology, introduction to pharmacology, and microbiology. Data Analysis I calculated descriptive statistics about the enrollment size of the institutions in my sample, including the mean and range of FTE students. I examined the relationship between institutional size and online offerings for Fall 2015 and Spring 2016 separately by grouping the institutions sampled during each time period into four groups based on their 2012-2013 FTE student enrollment data (<1500; 1500-4999; 5000-9999; and >10,000 FTE undergraduate students) and then conducted 2 × 4 chi-square analyses to determine whether the proportions of institutions in each group, with and without online offerings, were the same. I then examined institutional size trends more closely for all of the institutions sampled that had online offerings in biology in Fall of 2015 and Spring of 2016. To determine whether there is a relationship between institution size and the number of all biology courses and sections offered (face-to-face, hybrid, and online), I conducted two simple linear regressions with size of the institution as the independent variable and number of biology courses and sections as dependent variables. 
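Purely as an illustration, the 2 × 4 chi-square test and the simple linear regressions just described can be sketched in a few lines of Python with scipy (the study itself used R); the counts and enrollment figures below are made up and do not reproduce the study data.

```python
# Illustrative sketch, not the study's original analysis code.
import numpy as np
from scipy import stats

# Rows: institutions with / without at least one online biology offering;
# columns: enrollment size classes (<1500, 1500-4999, 5000-9999, >10,000 FTE).
# The counts are hypothetical placeholders.
observed = np.array([
    [5, 8, 6, 8],   # offered at least one online biology course
    [8, 6, 3, 4],   # offered none
])
chi2, p, df, expected = stats.chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p:.2f}, df = {df}")

# Simple linear regression: FTE enrollment vs. total number of biology courses offered.
fte = np.array([900, 2400, 5200, 7600, 12000, 31000])
n_courses = np.array([12, 18, 25, 31, 44, 80])
result = stats.linregress(fte, n_courses)
print(f"R^2 = {result.rvalue**2:.2f}, p = {result.pvalue:.3f}")
```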
I also conducted three simple linear regression analyses using institutional enrollment numbers as the independent variable and number of unique biology courses offered online (as an indication of online course variety), percent of total biology courses offered online (as an indication of the extent of online course offerings), and percent of total biology sections offered online (as an indication of online course availability) as dependent variables. To examine relationships between the type of academic institution and online offerings, I counted the number of institutions in each of my three categories that offered at least one online or hybrid course and the number that offered only faceto-face biology courses. I then conducted chi-square tests on the Fall 2015 and Spring 2016 data separately to compare the proportions of 2-year, 4-year public, and 4-year private institutions that offered at least one online or hybrid biology course. To compare difference in online course variety, extent of online course offerings, and availability of online sections, I counted the number of online courses and calculated the percent of online courses and percent of online sections offered at 2-year, 4-year public, and 4-year private institutions. Because the Fall 2015 and Spring 2016 data failed two of the assumptions of the analysis of variance test-homogeneity of variance and normal distribution of data-I used the Kruskal-Wallis test to compare their distributions between institution types. When the Kruskal-Wallis generated a p value <0.05, I conducted pairwise comparisons between all groups, using Wilcoxon rank-sum tests with Bonferroni-corrected p values as my critical value. I also conducted Wilcoxon rank-sum tests to determine whether there were differences in the number of courses, percent of courses, and percent of sections offered online at 2-year public institutions in Fall 2015 versus Spring 2016 and repeated these analyses separately for 4-year public and private institutions. To analyze the course data, I pooled all of the course-level data collected from the three types of institutions over the academic year. I then counted the number of courses offered fully online, only as hybrids, for biology majors, for nonmajors, for healthcare professionals, with laboratories, and without laboratories and calculated the associated proportions. Finally, I calculated the proportion of courses that fit into each of the following categories: hybrid lab courses for majors, hybrid lab courses for nonmajors, hybrid nonlaboratory course for majors, hybrid nonlaboratory course for nonmajors, online lab courses for majors, online lab courses for nonmajors, online nonlaboratory courses for majors, and online nonlaboratory courses for nonmajors. I used R (R Core Team, 2015) to calculate the summary and inferential statistics and Microsoft Excel to generate figures. I used alpha = 0.05 as the critical p value for all statistical tests and have reported means ± 1 SE. RESULTS I surveyed the course schedules of 96 American institutions of higher education, 48 from Fall 2015 and 48 from Spring 2016. There were many more 4-year private institutions that did not meet my sampling criteria. For example, I had to examine the Spring 2016 offerings of 33 4-year private institutions to find 16 that had offerings in biology or biological sciences and displayed their course schedule online. In contrast, I was able to find 16 suitable 2-and 4-year public institutions by sampling 17 and 19 institutions, respectively. 
Of 69 institutions I randomly selected for sampling in Spring 2016, 5.8% were excluded, because their schedules were not available online, while closer to 25% were excluded, because they did not offer biology courses (e.g., seminaries and schools of art and design). The institutions sampled ranged in size from 66 to more than 32,000 FTE students in 2012-13, with the mean size of institutions sampled being 5271 ± 886 and 5115 ± 738 FTE students in the Fall and Spring, respectively. I found no evidence of relationships between an institution's enrollment size and its online offerings. Enrollment data were unavailable for one of the institutions I sampled, thus the chi-square analyses described below include 47 institutions sampled in Fall and 48 sampled in Spring. The percentage of institutions with less than 1500, between 1500 and 4999, between 5000 and 9999, and more than 10,000 FTE students offering at least one online biology course varied between 38.46 and 66.67% in the Fall semester and 38.46 and 80.00% in the Spring semester (Figure 1). Chi-square analyses showed, however, that there was no relationship between the size class of an institution and whether that institution offered any online courses in biology in the Fall (χ 2 = 1.73; p = 0.63; df = 3) or Spring (χ 2 = 3.90; p = 0.27; df = 3). Of the 50 institutions in my combined academic year sample that offered at least one online or hybrid course in biology, I found the expected positive relationships between the number of FTE students and total number of biology courses (R 2 = 0.26; p < 0.001) and sections (R 2 = 0.44; p < 0.001) offered. However, among the same 50 institutions, there was no statistically significant relationship between the number of FTE students and 1) the number of online or hybrid biology courses (R 2 = 0.001; p = 0.80) and sections (R 2 = 0.03; p = 0.27); 2) the percent of their biology courses offered fully online or hybrid (R 2 = 0.02; p = 0.27); and 3) the percent of their biology sections offered online or hybrid (R 2 = 0.05; p = 0.11). I did, however, find strong differences between the online offerings at 2-year public, 4-year public, and 4-year private schools. In Fall of 2015, a high proportion of the 2-year public institutions surveyed, nearly 0.9, offered at least one fully online or hybrid class in biology. The same was true of fewer than half of 4-year public and one-quarter of 4-year private institutions (Figure 2). In the Spring of 2016, the results were similar, with more than 0.8 of the 2-year public institutions offering online biology courses, while the same was true of fewer than half of the 4-year institutions surveyed (Figure 2). Chi-square analysis showed a significant difference between these proportions in the Fall (χ 2 = 13.19; p = 0.001; df = 2) and Spring (χ 2 = 8.68; p = 0.01; df = 2), indicating strong differences in the availability of online biology course offerings by type of institution. My results comparing the numbers and percentages of biology courses offered online or hybrid indicate strong differences between institution types. Similarly, I found significant differences between the distributions of the percentages of online and hybrid courses offered at the three types of institutions surveyed (Fall: Kruskal-Wallis χ 2 = 21.91; df = 2; p < 0.001; Spring: Kruskal-Wallis χ 2 = 15.93; df = 2; p < 0.001). In the Fall and Spring, 43 ± 8.08% and 29.01 ± 6.22%, respectively, of the courses offered at 2-year institutions were offered in an online or hybrid format.
These distributions were significantly different from the distributions of percentages of online and hybrid courses offered at 4-year public (Fall: W = 222; p < 0.001; Spring: W = 46.5; p = 0.002) and 4-year private institutions (Fall: W = 233; p < 0.001; Spring: W = 38; p <0.001), which only offered an average of 5.69 ± 1.88% and 2.78 ± 0.97%, respectively, of their courses online over the academic year ( Figure 3). There was no significant difference between the distributions of the percentage of courses offered at 4-year public and private institutions in the Fall (W = 152; p = 0.296) or Spring (W = 119; p = 0.71). My results comparing the percent of sections offered online or hybrid give an indication of how the availability of online options differ at the types of academic institutions surveyed. I found significant differences between the distributions of percentages of sections that were offered online and hybrid at the three types of institutions surveyed in the Fall (Kruskal-Wallis χ 2 = 19.861; df = 2; p < 0.001) and Spring (Kruskal-Wallis χ 2 = 10.57; df = 2; p = 0.005). In the Fall, only 3.35 ± 1.48% and 1.37 ± 0.81% of the sections offered at 4-year public and private institutions were online or hybrid, and the distributions did not differ significantly (W = 152; p = 0.296; Figure 3), with similar results in the Spring (W = 118.5; p = 0.70). However, over the academic year, 16.32 ± 2.91% of sections were offered as online or hybrid at 2-year institutions, and the distribution of data differed significantly from that of 4-year public (Fall: W = 216; p < 0.001; Spring: W = 61; p = 0.01) and private institutions (Fall: W = 229; p < 0.001; Spring: W = 55.5; p = 0.005). To determine which kinds of biology courses are being offered online in an academic year, I examined the 149 online and hybrid courses offered in the Fall 2015 and Spring of 2016 in my random sample. A larger percentage of the courses in my sample were offered in a fully online format, 59.06%, compared with 40.94% that were only offered as hybrid courses (Figure 4). While 22.82% of the courses surveyed served the institution's biology majors, 77.18% were not part of the biology major and are therefore referred to as nonmajors courses (Figure 4). More than 35% of the courses I surveyed were healthcare-related courses, and more than 90% of these were nonmajors courses. A high percentage of the hybrid and online courses surveyed were laboratory courses, 68.46%, while the remaining 31.54% did not include a laboratory. Finally, I quantified all three of the course attributes noted above: whether the course was fully online or hybrid; laboratory or nonlaboratory; and intended for biology majors or nonmajors. The three largest categories of courses were all intended for nonmajors. These include fully online laboratory courses and hybrid laboratory courses, which were each 26.17% of my sample, followed by fully online nonlaboratory courses, which were 23.49% of my sample. Hybrid, laboratory courses for majors were 12.08% of the courses surveyed over the academic year, and they were nearly three times more common than fully online courses with laboratories for majors, which were 4.08% of my sample. Fully online, nonlaboratory courses for majors were slightly more common, 5.37%, than the fully online laboratory courses for majors. There were few hybrid, nonlaboratory courses for biology majors (1.34%) and nonmajors (1.34%) in my sample (Figure 4). 
Institution Size and Online Biology Offerings Despite strong positive relationships between the size of an institution and the total numbers of biology courses and sections offered, indicating that larger institutions are serving more students with more courses and sections, I found no evidence of positive relationships between the size of an institution and the diversity and availability of its online offerings in biology. These findings differ from what others have found about online offerings and institution size in general. Typically, there is a positive relationship between institutional size and the likelihood of offering at least one online course in any discipline (Parasad and Lewis, 2008). This pattern has been attributed to resource discrepancies (Allen and Seaman, 2014), with large institutions having more resources to provide and support online courses than small institutions. This pattern may not hold true for biology courses, because other obstacles are preventing biology departments from moving forward with online courses. Faculty resistance to teaching online is a commonly cited obstacle. Faculty report resistance to teaching online for a variety of reasons, including their perceptions that teaching online is more work than teaching face-toface courses (Berge, 1998;Bolliger and Wasilik, 2009;Seaman, 2009) and their lack of technological expertise (Berge, 1998;Bower, 2001;Kennepohl and Shaw, 2010). Biology faculty may be particularly resistant to developing online laboratory courses, because such courses require testing a new suite of laboratory activities and related technologies that would further increase the initial workload. Additionally, it is possible that online laboratories may cause administrators to fear liability issues, thus prompting institutions to direct their efforts toward pursuing online courses in other disciplines. The random variation in online offerings relative to institution size may also indicate that biology departments have different motivations for offering online courses than other departments. Insufficient classroom space has compelled some institutions to pursue online offerings (Howell et al., 2003;Picciano, 2006;Brown, 2011). This factor may play a larger role in the decision to offer science courses online, because laboratory classrooms have more specialized safety and equipment requirements (National Research Council, 2006), making them less interchangeable with other spaces. Additionally, laboratory classrooms are more costly to build and maintain (National Research Council, 2006). When classroom shortages are limiting enrollment growth, the addition of other kinds of classrooms is likely more efficient. Furthermore, traditional laboratory sections are expensive to run, and institutions can cut costs by offering laboratories online (Powell et al., 2002). Thus, laboratory space and limiting course budgets may be motivating biology departments with fewer resources to offer courses online, thereby obscuring the expected positive relationship between the size of the institution and availability of online offerings. This could mean many biology faculty are teaching online without adequate technological support and training. This is a common complaint among online instructors (Berge, 1998;Seaman, 2009) that could hinder online course and program success (Howell et al., 2004). 
Institution Type and Online Biology Offerings I found significantly higher availability and diversity of online courses in biology at 2-year public compared with 4-year public and private institutions (Figures 2 and 3). There was, however, no difference between the availability and diversity of biology online offerings at 4-year public and 4-year private institutions. These findings differ from Allen and Seaman's (2015) findings, which showed that, while only 65% of 4-year private colleges have any online offerings, more than 90% of 4-year public and 2-year public colleges offer online offerings. Surprisingly, given the reduced involvement of STEM students in online education, I found the percentage of 2-year public colleges offering online courses in biology was similar to the percentage of 2-year public colleges with at least one online course offering in general (Allen and Seaman, 2015). I found, however, that ∼50% fewer 4-year public and 35% fewer 4-year private institutions offer online biology courses compared with the percentages of those types of colleges with online courses in general. The large amount of online biology course diversity and availability at 2-year public colleges is likely a response to high demand from students. Two-year public colleges have five times the number of students age 24 and older than 4-year colleges do (Alderman, 2005). Older students, ages 35-55, are known to prefer online learning (Garrett, 2007). Two-year public college students are also more likely to come from low-income families, have dependent children, and declare themselves financially independent for financial aid purposes. Thus, 79% of 2-year public college students work an average of 32 hours per week while enrolled, and 41% work full-time (Horn and Nevill, 2006). The flexibility of asynchronous learning in an online format appeals to students who also face the demands of raising children and/or working. Furthermore, 2-year public college students are less likely to be enrolled fulltime (Horn and Nevill, 2006) and in residence on campus. Thus, the online format may save them time and money associated with commuting. Sixty-four percent of the 4-year institutions I sampled had no online biology offerings. The reduced number of biology online offerings at 4-year institutions is aligned with other reports of fewer offerings in the sciences compared with other disciplines (Allen and Seaman, 2008;Radford 2011). Nonetheless, it indicates a possible barrier for students preferring or requiring online education who are pursuing bachelor's degrees at American colleges and universities. I failed to find any significant differences between the diversity and extent of online biology offerings at 2-year public, 4-year public, and 4-year private institutions in Fall of 2015 versus Spring of 2016, indicating consistency in online biology options over the academic year. It is possible, however, that institutions offer more online courses in the Summer term to serve their students who are only in residence on campus in the Fall and Spring; I discovered several 4-year private institutions that only offer online courses in the Summer term, although none of the courses were biology courses. The lack of data about Summer offerings is a limitation of this research. Types of Courses I found more online biology courses for nonmajors (Figure 4). Compared with developing an online biology program for majors, offering one or two nonmajors courses requires fewer resources. 
Furthermore, because of added convenience and flexibility, online nonmajors courses may better compete with courses from other science departments for general education students. A comparison of the supply of online biology general education offerings to those of other science disciplines remains a research gap. The high number of nonmajors courses may also relate to faculty members' commonly held perception of the reduced quality of online courses (Berge 1998;Bower, 2001;Seaman, 2009). This perception, which is more common among faculty and administrators with little experience in online education (Seaman, 2009;Allen and Seaman, 2013), exists despite strong evidence to the contrary, including the results of two large meta-analyses that reviewed 51 (Means et al., 2009) and 125 studies (Shachar and Neumann, 2010). These studies compared outcomes in face-to-face versus hybrid or online courses and found that, on average, student performance was better in online or hybrid compared with face-to-face courses. Evidence also indicates that online course quality has improved through time (Shachar and Neumann, 2010;Brinson, 2015), likely as a result of the adoption of effective online teaching practices and improved technology. This disjunction between documented perceptions of online course quality and the related scientific research highlights the need for more faculty training and professional development in online education. Yet because general education biology courses are not laying a foundation of discipline-specific knowledge and skills for future biologists, biology faculty who still question the quality of online courses may be more comfortable if online courses are for general education students. Additionally, a large portion of the online biology nonmajors courses in my survey were courses that serve students completing healthcare program prerequisites or degree requirements indicating that these students have more online options than other populations. This finding also helps explain the higher proportion of courses for nonmajors in my sample and is aligned with high interest in online learning among students pursuing postsecondary degrees in healthcare-related fields (Garrett, 2007). Furthermore, since nearly 90% of the online healthcare-related courses in my sample were offered at 2-year public colleges, the high number of health-care related courses also helps explain the high number of online offerings in biology at 2-year public institutions. In 2012-2013, 21% of all associate's degrees awarded in the United States were healthcare and related degrees; the number of students earning these degrees was 50 times higher than the number of students earning degrees in biology (NCES, 2014). Thus, students pursuing these degrees make up a large portion of biology enrollments at 2-year colleges, and the faculty and administrators at these institutions seem to be responding to their desire to complete their courses online. I found equal proportions of online and hybrid laboratory courses offered for nonmajors, but the proportion of fully online laboratory courses compared with hybrid laboratory courses was not quite one-third for biology majors (Figure 4). This may indicate that some biology departments are hesitant to offer fully online laboratory courses for biology majors despite evidence of high demand for fully online laboratory science courses from science majors (Kennepohl, 2007), further demonstrating limited online course opportunities for biology majors. 
Online laboratories may be offered more frequently for nonmajors, because instructors are more willing to use online or at-home hands-on activities in nonmajors courses. In some cases, these activities could have added benefits for nonmajors. For example, kitchen laboratories may help nonmajors students find relevance in the content (Reeves and Kimbrough, 2004). On the other hand, online laboratory courses could pose transferability issues that would be heightened for biology majors due to the number of laboratory science courses they must complete. When transferring in courses, some institutions will not count laboratory courses lacking traditional, face-to-face laboratories as equivalent to their face-to-face courses (Brewer et al., 2013). Although this issue may be resolved in time as online laboratory courses become more common, transferability of laboratory courses is currently a serious concern that online science course instructors, counselors, and potential students should carefully consider. Additionally, some authors believe that simulated or at-home alternatives to the traditional laboratories are not sophisticated enough to train future scientists (Lyall and Patti, 2010). Thus, reduced numbers of fully online laboratory courses for biology majors may also reflect fears about the quality of online laboratory courses. However, a growing body of evidence to the contrary should alleviate these concerns. For example, those studying learning outcomes in face-to-face versus online biology laboratory activities have found that online students can achieve similar learning outcomes (Johnson, 2002;Gilman, 2006;Lunsford and Bolton, 2006). Even more convincing, a recent meta-analysis reported that 89% of studies comparing learning outcomes in traditional versus virtual and/or remote laboratories found equal learning outcomes, while 65% reported higher learning outcomes in nontraditional laboratories (Brinson, 2015). Other researchers have documented additional educational benefits of online laboratory science courses, including 1) higher final course grades (Reeves and Kimbrough, 2004;Lyall and Patti, 2010); 2) deeper learning, because students are forced to work independently through issues they encounter doing laboratory activities at home (Jeschofnig and Jeschofnig, 2011); and 3) the ability to transcend time and space limitations imposed by traditional laboratories (Forinash and Wiseman, 2001). Carefully designed online laboratory courses can be effective, and instructors interested in developing an online laboratory course have many successful models to reference (e.g., Reeves and Kimbrough, 2004;Mickle and Aune, 2008;Reuter, 2009;Brown, 2011;Barbeau et al., 2013). Thus, online laboratory courses should be considered, even for biology majors. Hybrid courses, which were more commonly offered to biology majors than fully online laboratory courses, require students to complete face-to-face laboratories and are a suitable compromise in some cases. Some departments have tried to make hybrid science courses more convenient for their students by limiting laboratory time to a small number of extended laboratory periods that often meet on weekends (Lyall and Patti, 2010;Jeschofnig and Jeschofnig, 2011;Brewer et al., 2013). This arrangement makes completing laboratory courses possible for many students who are place-bound, working, and/or balancing education with family life. 
However, concentrating laboratory time can mean laboratory topics are not synced with the rest of the course, and extended laboratory periods may be exhausting for students (Jeschofnig and Jeschofnig, 2011). This may limit their effectiveness, although a comparison of this specific style of hybrid laboratory course with others has not been documented. Furthermore, even with a reduced number of face-to-face meetings, some students will still be unable to attend. Thus, hybrid courses with face-to-face laboratories can be useful to some but not all students. Finally, I was surprised to find such a small number of nonlaboratory courses offered in the hybrid form, despite evidence that blending online and face-to-face instruction can be beneficial. Advantages of hybrid courses include higher learning outcomes (Dziuban et al., 2004;Vaughan, 2007;Means et al., 2009) and lower withdrawal rates than fully online courses (Dziuban et al., 2004). Additionally, student demand for hybrid courses is high (Dziuban et al., 2004), and many attribute this to their desire for face-to-face interaction and more flexible scheduling (Vaughan, 2007). Faculty who teach hybrid courses report having enhanced student-teacher interactions (Riffell and Sibley, 2004b) and improved student engagement in the learning process (Vaughan, 2007). Studies focused specifically on comparing course outcomes in hybrid versus face-to-face science courses have found that 1) online assignments in hybrid courses are equivalent to or more effective than passive lectures (Riffell and Sibley, 2004b); 2) video lectures can be as effective at teaching complicated concepts as face-to-face lectures (Lents and Cifuentes, 2009); 3) participation and attendance can be higher in hybrid courses (Riffell and Sibley, 2004a); and 4) learning outcomes can be higher than or equal to those in face-to-face courses (Riffell and Sibley, 2004b;White and Sykes, 2012). Hybrid courses can have institutional benefits as well. In addition to reducing institutional operating costs (Dziuban et al. 2004), hybrid courses have been described as an effective and low-risk strategy that positions institutions for future technological developments (Garrison and Kanuka, 2004). Thus, institutions, faculty, and students have much to gain by increasing offerings of hybrid biology courses. CONCLUSIONS This research has described the current landscape of online offerings in biology. This baseline illustrates that, while some populations, including nonmajors completing prerequisites for healthcare-related programs or completing their science general education requirements at 2-year public colleges, are well served by the current online offerings, others are not. Online options at 4-year institutions were limited, and students majoring in biology had few online course options, especially online laboratory courses, at all types of institutions studied. Addressing these deficiencies would create more opportunities for students requiring or preferring online education to study biology while possibly promoting student diversity and boosting departmental enrollment. This research also identifies some potential barriers that are limiting the online offerings in biology. 
The following recommendations are for biology departments with few or no online offerings and are intended to help overcome barriers and increase student access to online learning opportunities in biology: • Provide faculty with professional development training focused on successful course redesign models (e.g., Twigg, 2003), best practices in online instruction (e.g., Newlin and
20984770
s2orc/train
v2
2016-05-16T05:50:40.347Z
2015-04-22T00:00:00.000Z
Introducing the PRIDE Archive RESTful web services The PRIDE (PRoteomics IDEntifications) database is one of the world-leading public repositories of mass spectrometry (MS)-based proteomics data and it is a founding member of the ProteomeXchange Consortium of proteomics resources. In the original PRIDE database system, users could access data programmatically by accessing the web services provided by the PRIDE BioMart interface. New REST (REpresentational State Transfer) web services have been developed to serve the most popular functionality provided by BioMart (now discontinued due to data scalability issues) and address the data access requirements of the newly developed PRIDE Archive. Using the API (Application Programming Interface) it is now possible to programmatically query for and retrieve peptide and protein identifications, project and assay metadata and the originally submitted files. Searching and filtering is also possible by metadata information, such as sample details (e.g. species and tissues), instrumentation (mass spectrometer), keywords and other provided annotations. The PRIDE Archive web services were first made available in April 2014. The API has already been adopted by a few applications and standalone tools such as PeptideShaker, PRIDE Inspector, the Unipept web application and the Python-based BioServices package. This application is free and open to all users with no login requirement and can be accessed at http://www.ebi.ac.uk/pride/ws/archive/. INTRODUCTION Mass spectrometry (MS)-based proteomics analysis techniques are increasingly used in the life sciences. The PRIDE (PRoteomics IDEntifications) database (1) (http://www.ebi. ac.uk/pride) at the European Bioinformatics Institute (EBI) is one of the world-leading public repositories for storing MS-based proteomics data. PRIDE stores, among other data types, peptide and protein identifications and related quantification values, the corresponding mass spectra (both as processed peak lists and raw data) and any other technical and/or biological metadata provided by the submitters. PRIDE is leading the ProteomeXchange Consortium (2) (http://www.proteomexchange.org) of MS proteomics resources, which aims to standardize data submission and dissemination of this type of data worldwide. Within the Consortium, PRIDE fully supports the storage of tandem MS data (by far the main approach used in the field today), although data coming from other proteomics workflows can be also stored (e.g. top down proteomics or data independent acquisition approaches). It is important to note that, unlike other resources that reanalyse the data using their own data analysis pipelines, PRIDE stores all result data types as originally analysed by the authors. The implementation of ProteomeXchange has resulted in a big increase in public data deposition in the field. By March 2015, around 1900 datasets had been submitted to the ProteomeXchange resources since mid 2012, when the data workflow within the Consortium was formalised. Of those, more than 90% were stored in PRIDE. The PRIDE project started around 10 years ago (3). However, the original system was built to support smallerscale experiments, and its infrastructure could no longer be maintained with the rise of new high-throughput workflows, producing ever-growing file volumes and new data types. 
The new PRIDE Archive system has now been developed from scratch following the ProteomeXchange guidelines and supporting the Proteomics Standard Initiative (PSI) community data standards mzML (4), mzIdentML (5) and mzTab (6), although other data formats (e.g. mgf, mzXML, raw files, etc.) are also supported (7). At present there are different ways to access this plethora of data: the PRIDE web interface, the file repository (which supports the FTP and Aspera (http://asperasoft.com/) file transfer protocols) and the stand-alone PRIDE Inspector tool (8). Until December 2014, it was possible to access PRIDE data programmatically by accessing the RESTful (service based on REpresentational State Transfer protocol) web services provided by the PRIDE BioMart interface (9). However, the BioMart interface was recently discontinued due to the lack of product support and increasing scalability issues. New RESTful web services have been developed to replace the most popular functionality available in the BioMart and to serve the data access requirements of the newly developed PRIDE Archive. Among the other major public proteomics data resources, GPMDB (10) also has REST-style web services available (11) (http://rest.thegpm.org/1) whereas PeptideAtlas (12) does not provide this functionality. In this manuscript we describe the main features of these new web services, which can be freely accessed at http://www.ebi.ac.uk/pride/ws/archive/. ARCHITECTURE, DESIGN AND IMPLEMENTATION To ensure maintainability and adequate support, the web services have been developed using technologies that are also used across other PRIDE projects. The services are implemented in Java, building on top of the Spring framework (http://projects.spring.io/spring-framework/). Data queries are powered by optimized Apache Solr servers (http://lucene.apache.org/solr/). Data can be accessed over HTTP (HyperText Transfer Protocol) via REST-like 'GET' requests, which ensures that the services are easy to use and are supported by all major platforms. JSON (JavaScript Object Notation) was chosen as the output format since it is widely used as a data serialization format and is the current de facto data exchange standard for web technologies. Cross Origin request When the web service is used from within other web applications, the Same Origin Policy (SOP), which is implemented in all browsers for security reasons, is in effect. This may prevent the application from directly accessing the data in PRIDE. Traditionally such sites had to implement a local proxy to get around the restrictions of the SOP. However, in recent years other ways to make Cross Origin Requests possible have emerged. The web service currently supports two of the most common ones; for further details, consult the documentation pages (http://www.ebi.ac.uk/pride/help/archive/access/webservice). DATA SEARCH AND RETRIEVAL The PRIDE Archive Web Service API is split into several specific resources, which currently are projects, assays, files, protein identifications and peptide identifications. This separation is also reflected in the service URLs (Uniform Resource Locator), where the first level after the web service root determines the resource or data type. Data retrieval options depend on the information available at each level.
Specific project and assay records can be retrieved using their respective accession numbers as available via the PRIDE Archive web interface and the ProteomeXchange portal ProteomeCentral (http://proteomecentral. proteomexchange.org/cgi/GetDataset). Projects (which correspond to individual dataset submissions) can also be queried by their associated metadata, identified proteins (using their accession numbers) and peptides (using their amino acid sequence). Among others, metadata includes annotations for the samples (such as species, tissues, detected post-translational modifications, diseases, etc.) and other experimental details that are gathered during the data submission. Once a project of interest has been identified, other web service methods enable the retrieval of specific records, such as protein and peptide identifications and a list of URLs to download the originally submitted files. Then, users can combine web service functionalities to achieve more complex queries/results, for example the retrieval across projects of peptide/protein identifications, as described in the online documentation (see http://www.ebi.ac.uk/pride/ help/archive/access/webservice). The detailed list of current methods is available at Table 1. In an attempt to make the service more intuitive and easier to use, several conventions were introduced ( Figure 1): (i) as already mentioned, the first level after the web service root (http://www.ebi.ac.uk/pride/ws/archive) denotes the type of resource requested (project, assay, file, protein or peptide). (ii) if the end-point URL contains the keyword 'list', then a list of entities is returned, rather than a single entity. (iii) similarly, if the URL contains the keyword 'count' an integer number is returned, showing the total of entities that would be returned by the equivalent 'list' method. This is particularly useful when dealing with paged results (see below). Paging and sorting Certain end-points make use of paging and sorting to enable a more efficient access to the data (see Figure 1 and the online documentation for details). Methods that make use of paging have a corresponding count method, so users can check the total number of results before deciding how or if paging is necessary. Paging is essential to guarantee reasonable response times when result sets grow very large, which is often the case for data coming from high-throughput proteomics experiments. It is also crucial to improve the responsiveness in client applications and free the client from having to deal with large data volumes if perhaps only a preview is desired, or if the client is too small to process the full data load at once (e.g. for mobile devices). The service allows results to be ordered according to certain criteria, which is often needed along with the paging. For instance, sorting criteria for project lists include the publication date, its accession number or title, and the relevance score assigned to each result. Querying and filtering The web service therefore offers query functionality that is closely modelled on the search of the PRIDE Archive web interface. For a list of the available filters, their descriptions and examples, see Table 2. Users can search the repository using generic query terms, restrict results by applying designated filters or use a combination of both. To illustrate the difference, consider the following example. If a diseaseFilter with the value 'cancer' is used, only projects that carry 'cancer' as disease annotation are retrieved. 
However, if 'cancer' is used as a generic query term, the result of the query will also contain projects with any annotation that mentions this keyword, not necessarily in the disease-type annotation. Example use case A user of the web services might be interested in retrieving identification data for projects related to 'biomarkers for human cancer'. Since metadata annotations, like biomarkers, tissue or disease related information, are available at the project level and not at the individual protein or peptide identification level, the first step would be to find relevant projects. (i) A request to /pride/ws/archive/project/list?query=biomarkers will produce a list of projects that contain annotations including the word 'biomarkers'. By applying additional disease and species filters, ...?query=biomarkers&diseaseFilter=cancer&speciesFilter=9606, the result can be further restricted. (ii) Examining the details of the result, a project of interest can be identified. To retrieve the full record for a specific project, for example PXD001034, a user would send a request to /pride/ws/archive/project/PXD001034. (iii) To then retrieve all protein identifications for that project, a user could first use /pride/ws/archive/protein/count/project/PXD001034 to find out how many protein identifications are expected (see the section devoted to paging above). (iv) With requests to /pride/ws/archive/protein/list/project/PXD001034 a user would then retrieve the lists of the protein identification records. (v) Similarly, in order to retrieve the original project files, the web services can be queried to provide a list of file records containing the URLs for all the files included in a given project: /pride/ws/archive/file/list/project/PXD001034. A user can then inspect the file records and decide to download all the files or only those of a particular type. Other example use cases can be found in the web services online documentation (see http://www.ebi.ac.uk/pride/help/archive/access/webservice). API DOCUMENTATION The available documentation is divided into two parts. First of all, using the popular documentation framework Swagger™ (http://swagger.io/), an interactive auto-generated documentation is available on the home page (http://www.ebi.ac.uk/pride/ws/archive/). The goal of Swagger™ is to define a standard, language-agnostic interface to REST APIs which allows both humans and computers to discover and understand the capabilities of the service without access to source code, documentation or through network traffic inspection (https://github.com/swagger-api/swagger-ui). It lists all the available end-points and provides definitions and descriptions of the methods, parameters and the data model (Figure 2). The interactive part allows the execution of example queries using simple input forms. The results are rendered on the same page to allow a quick examination. The interface also shows the URL that would have to be used by a client in order to perform the same request. As this documentation is auto-generated from the source code, it is always up-to-date with the latest release. In addition to the auto-generated documentation, further information is available in the general PRIDE 'Help' pages. These contain, among others, descriptions of general concepts, example use cases and other content that is not covered by the Swagger™ framework.
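Purely as an illustration, the example use case described earlier can also be chained programmatically. The sketch below follows steps (i) to (v) using the endpoint paths quoted in the text; the choice of the requests library and the handling of the returned JSON are illustrative assumptions, not part of the published service description.

```python
# Minimal sketch of the documented use case, under the assumptions stated above.
import requests

BASE = "http://www.ebi.ac.uk/pride/ws/archive"
ACCESSION = "PXD001034"

# (i) find candidate projects with a generic query term plus disease and species filters
hits = requests.get(f"{BASE}/project/list",
                    params={"query": "biomarkers",
                            "diseaseFilter": "cancer",
                            "speciesFilter": "9606"}).json()

# (ii) retrieve the full record for one project of interest
project = requests.get(f"{BASE}/project/{ACCESSION}").json()

# (iii) check how many protein identifications to expect before paging through them
n_proteins = requests.get(f"{BASE}/protein/count/project/{ACCESSION}").json()

# (iv) retrieve the protein identification records
proteins = requests.get(f"{BASE}/protein/list/project/{ACCESSION}").json()

# (v) list the originally submitted files, with their download URLs
files = requests.get(f"{BASE}/file/list/project/{ACCESSION}").json()

print(n_proteins, "protein identifications reported for", ACCESSION)
```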
The documentation also contains links to example clients in Python and Java that can be used as starting points or templates for developers wishing to use the web services (see http://www.ebi.ac.uk/pride/help/archive/access/webservice). DISCUSSION The PRIDE Archive RESTful web services have been developed to enable programmatic access to PRIDE Archive data. This is a key development for users due to the ever-increasing data volume that PRIDE is experiencing and the fact that data mining, data reanalysis and reinterpretation are currently flourishing in the proteomics field (13,14). The API has been available since April 2014 and a few internal and external applications are already making use of the new functionality: (i) the stand-alone tool PeptideShaker (https://code.google.com/p/peptide-shaker/) (15), which is an analysis tool that, among other functionality, enables the reanalysis of PRIDE data accessing PRIDE files and metadata using the 'PRIDE Reshake' option; (ii) the PRIDE Inspector, a visualization and validation tool developed by the PRIDE team. Using the API, PRIDE Inspector enables a generic project search and access to any project in PRIDE Archive, including private datasets (protected by a username and password). This way, PRIDE Inspector can be used by journal reviewers and editors during the manuscript review process; (iii) the UniPept web application (http://unipept.ugent.be/) (16), a metaproteomics resource, which makes use of the API to access peptide sequence information; and (iv) the Python-based BioServices package (17), which can be used to access several major bioinformatics resources, now including PRIDE (http://pythonhosted.org/bioservices/references.html#module-bioservices.pride). The PRIDE REST web services functionality will continue to develop in parallel with the PRIDE Archive. Should users wish to discuss requests for new functionality, the authors encourage them to contact the PRIDE helpdesk (pride-support@ebi.ac.uk) with their suggestions.
234960430
s2orc/train
v2
2021-05-22T00:04:43.442Z
2020-01-01T00:00:00.000Z
IMPLEMENTATION OF 6R STRATEGY IN FDM PRINTING PROCESS: CASE-SMALL ELECTRONIC ENCLOSURE BOX University of Belgrade, Institute of Chemistry, Technology and Metallurgy Centre for Microelectronic Technology, 11000 Belgrade, Njegoševa 12, Republic of Serbia E-mail: worcky@nanosys.ihtm.bg.ac.rs PhD Student from the People's Republic of Bangladesh, The University of Belgrade, Faculty of Mechanical Engineering, 11000 Belgrade, Kraljice Marije 16, Republic of Serbia University of Belgrade, Faculty of Mechanical Engineering, 11000 Belgrade, Kraljice Marije 16, Republic of Serbia 4 Bombardier Aerospace, Toronto, Canada. INTRODUCTION The short product life cycle and the increasing complexity of the product are becoming the main imperative in the sustainable development of the company. That is why companies need to increase their innovative activities and shorten the time to enter markets, and that brings with it a new manufacturing technology -Additive Technology (AT). When making a model/prototype, its geometry is first considered, and then it is examined how the complexity of the product can improve the company's position. The implementation of AT creates additional value in the realization of complex geometry, which is not easy to realize with conventional processing. When it comes to sustainable business growth, the implementation of AT implies: improving the use of resources, implementing more effective manufacturing methods, applying new manufacturing processes, applying new materials, and adopting new business models (Despeisse, & Ford, 2015). The application of additive manufacturing (AM) produces waste that is negligible compared to conventional manufacturing (CM) (Wang, Zanni, & Kobbelt, 2016). AM works on the principle of adding materials in layers (Ciubară et al., 2018), and the obtained waste mainly includes imperfect or unfinished models (Peng, & Sun, 2017). AM technology effects on reducing weight, use of water, energy, and material, all of which have a positive impact on sustainability. CM creates material waste or absorbs extra material, while AM uses less toxic materials, additives, and cutting fluids. By implementing AM, designers are granted greater freedom to design and define optimal product design (Brackett, Ashcroft, & Hague, 2011). It is possible to create a product with different materials that have different mechanical properties to satisfy different specifications at different locations within the model (Sossou, Demoly, Montavon, & Gomes, 2018). Functional product analysis Functional product analysis (Sossou et al., 2018) considers the design of the product to be an input of the process. In this context, three important steps in the product analysis are highlighted: 1) external appearance (the relation between the product and the environment is required); 2) the principle of disassembly (the breakdown of the product by logical sequence is considered) and 3) the product architecture; (components and their connections with the elements of the assembly itself, as well as with independent design elements are analyzed). Product design can be re-created using any 3D CAD software package based on existing documentation or using advanced tools/software that take over the configuration of the model itself. Here, customers have a major role to play in determining the concept, while AM is an auxiliary tool for very quick product realization. 
It is necessary to consider the following activities (Eyers, & Potter, 2017) before embarking on the implementation of the design: 1) preparatory activities (pre-processing), 2) manufacturing and 3) additional or final processing. Some of the key design factors to be kept in mind when designing for AM include: closed gaps, surface treatment, strength, and flexibility, as well as materials and equipment costs (Diegel, Singamneni, Reay, & Withell, 2010). FDM printing and materials AM parts are mostly made of plastics, but other materials such as metal, ceramics, and various composites can also be used (Nannan, 2013). FDM systems use a number of thermoplastics. The benefits of using FDM printing are reduction of the unnecessary use of materials, shortening the time for design and development of usable elements, manufacturing of components with complex geometry, use of new materials with good characteristics, and reducing the amount of realized parts/elements (Wang, Zanni, & Kobbelt, 2016). There are several drawbacks in the use of FDM printers: line visibility on the surface of the model, very low strength after making the piece, the need for further processing and the need to manufacture a supporting structure, long realization time, and quite expensive material (Galantucci, Lavecchia, & Percoco, 2009). Non-toxic thermoplastic materials such as PLA and ABS are primarily used in FDM printing. These materials have a low melting point, produce lower noise levels during building, and need less energy when heating the nozzle and the working surface (Peng, & Sun, 2017). PLA (Polylactic Acid) is a biodegradable, thermoplastic material and is extracted 100% from sustainable sources such as beets, potatoes, and maize (Jordá-Vilaplana et al., 2014). PLA does not release harmful gases into the atmosphere (King, Babasola, Rozario, & Pearce, 2014). ABS (Acrylonitrile Butadiene Styrene) is a polymer of excellent mechanical properties, high-temperature tolerance, and impact resistance (Lim et al., 2010). Vapors emitted during the melting of ABS are hazardous to human health and the environment as a whole, hence it is important to provide filtration and an isolated (closed) system of work with as little human presence as possible (Stephens, Azimi, El Orch, & Ramos, 2013). FDM printing technology The methodology of model/prototype development using the FDM method takes place in the following steps (Zivanovic et al., 2020): 1. The process starts with the 3D CAD model. The model is realized using one of the CAD software packages. 2. The 3D model is imported as an *.stl file into a specialized program (e.g., the open-source Ultimaker Cura), which adjusts the operating parameters of the system. The STL file is a standard format for 3D printing and provides good readability in many 3D programs (Wong, & Hernandez, 2012). Disadvantages in the manipulation of *.stl files are reflected in the fact that there is a loss of the desired printer resolution, which mainly refers to the thickness of each layer. 3. After appropriate settings and print simulation, a G-code file is generated with the *.gcode extension that the 3D printer recognizes. The G-code is a standard format that describes the tool path (the nozzle path in the FDM procedure). The described steps are partially automated, but this last step requires an operator. 4. In the end, the 3D model is realized, and after manufacturing, the excess material is removed and the model is cleaned and further processed (Zeltmann et al., 2016).
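As an aside, the G-code produced in step 3 is plain text and can be inspected before printing. The sketch below is not part of the cited workflow; it simply estimates filament consumption by summing the extrusion (E) moves in G1 commands, under the assumption of absolute extrusion coordinates, and the file name used in the example is hypothetical.

```python
# Illustrative sketch: estimate filament length from a sliced G-code file.
import re

def estimate_filament_mm(gcode_path: str) -> float:
    """Return the estimated filament length (mm) extruded by a G-code file,
    assuming absolute E-coordinates."""
    e_pattern = re.compile(r"\bE(-?\d+\.?\d*)")
    last_e, total = 0.0, 0.0
    with open(gcode_path) as gcode:
        for line in gcode:
            line = line.split(";")[0].strip()   # strip G-code comments
            if not line.startswith(("G0", "G1")):
                continue
            match = e_pattern.search(line)
            if match:
                e_value = float(match.group(1))
                if e_value > last_e:            # ignore retractions and E resets (G92)
                    total += e_value - last_e
                last_e = e_value
    return total

# Example (hypothetical file name):
# print(estimate_filament_mm("enclosure_cover.gcode"))
```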
Therefore, the application of FDM printing requires advanced technical knowledge in the preparation, setting parameters, and the process of making a model/prototype (Cupar, Pogačar, & Stjepanovič, 2015). The block diagram where the 3D model is transformed into a finished model/prototype (Junk, & Schröder, 2016) is shown in Figure 1. 6R strategy The 6R strategy is important for the sustainable development of the company, which includes: 1.) the reduction of waste to a minimum; 2.) the reuse of waste or used product; 3.) the processing of waste for the purposes of environmental protection -recycling; 4.) the regeneration of raw products, materials, and resources from waste that cannot be reduced, reused or recycled -recovered; 5) the redesign of a product, business area, or complete business process (Van Ackere, Larsen, & Morecroft 1993), and 6) remanufacturing, which includes disassembly, cleaning, measuring and testing of parts, as well as disposal of correct/repaired parts in the warehouse (Sarkis, 2001). Product redesign is divided into parametric and adaptive redesigns (Otto, & Wood, 1998). Parametric redesign involves the basic product model or the latest product configuration model after redesign (changes in geometry, materials, minor changes in the product). Many of these improvements are determined by consumers. The adaptive redesign allows designers the ability to come up with appropriate solutions for product assemblies and subassemblies. The aim of recycling is to reuse the material or elements of the used product. In general, the product can be made up of old and new components. The material cost of replication is just 40 % of the overall cost compared to 70 % of the total cost of development of new goods (Hindo, & Arndt 2006). However, designers are increasingly wary of recycled materials, as these components may have variable quality characteristics (Barker, & King, 2006). According to Gehin, Zwolinski, and Brissaud (2008), the following advantages of remanufacturing are emphasized: 1) companies which use recycled goods minimize their costs; 2) the application of remanufacturing in marketing terms is the basis for increasing profits; 3) the remanufacturing process uses specialized equipment; 4) optimizing tools due to disassembly and assembly; and 5) the remanufacturing provides stability in investment. METHODOLOGY The goal of this paper is to illustrate the value of the 6R strategy for the sustainable development of companies. As a solution, an algorithm for implementing the 6R strategy for the sustainable development of the organization is used, with AM employed for the realization of new or updated existing elements/parts (Maxwell, & Van der Vorst, 2003). The algorithm with the implementation of AM (Cafolla, Ceccarelli, Wang, & Carbone, 2016;Sanchez, Boudaoud, Hoppe, & Camargo, 2017) involves the manufacturing preparation process, the manufacturing (or remanufacturing) process, and the completion of the manufacturing process with the subsequent processing and recycling process, see Figure 2. Remanufacturing potentially helps many developing economies. Product reuse and recycling are becoming more intensive, though processed/modified or repaired goods are cheaper than newly produced products (Matsumoto, Yang, Martinsen, & Kainuma, 2016). Reuse ensures that such pieces are used again without any previous renovation or finishing.
In order to validate the algorithm, it is necessary to design a product that allows: usability, easy replacement of parts/assemblies, disassembly of assemblies, the possibility of finishing, and reuse. Before FDM printing starts, the parameters are specified and the material is selected. Once the system is put into operation, the printing is continuously monitored by the operator. At the end of the manufacturing process, additional geometry and surface control of the model is carried out. The approved model is further refined mechanically and/or chemically, see Figure 2. An error made during preparation should affect only a single layer and should not propagate to other layers; however, a change in the speed of the nozzle or an initial error in the positioning of the system can cause irregularities that affect the overall error (Oropallo & Piegl, 2016). Input errors in the preparation stage often arise from the selection of printing materials, whose characteristics affect the efficiency of the printing process. The parameters set during 3D printing have a major effect on the surface roughness of the produced pieces; such parts typically have high surface roughness and need additional surface treatment to achieve smooth surfaces (Qattawi & Ablat, 2017). All of this is acceptable if the model meets the mentioned criteria; if it does not, it is treated as waste. In the algorithm, waste is not seen as permanently lost material but as a resource that can be recycled and reused. Recycled material is later stored in a material/semi-finished product warehouse.

EXPERIMENTAL WORK

The use of the 6R algorithm in the sustainable development of an enterprise is illustrated by the realization of a small electronics enclosure for a pressure transmitter. The example concerns IHTM-CMT, a domestic Serbian manufacturer of sensors and of electronic pressure, level, and temperature transmitters. The electronics enclosure realized using AM is shown in Figure 3. The product has a modular architecture, consisting of the following modules: 1) pressure transducer, 2) electronics enclosure, and 3) electronics block. According to Ulrich and Tung (1991), the benefits of a modular product are reflected in: 1) economies of scale in components, 2) fast finishing of products, 3) increased product variety, 4) reduced order lead time, and 5) simplified design and testing. The electronics enclosure is made based on the existing technical documentation. In this stage, it is possible to correct existing errors in the design and modify the enclosure according to the requirements of end-users. A 3D representation of the elements of the electronics box, made on the basis of the 2D documentation, is shown in Figure 4. PLA material was used to make the enclosure.
Figure 4: Elements of the electronics box: a) electronics carrier, b) cover.
In this paper, a Wanhao Duplicator i3 Plus 3D printer, manufactured in the People's Republic of China, was used to describe and implement the algorithm, see Figure 5. PLA filament, 1.75 mm in diameter (manufacturer Wanhao), was used for the model realization. The sample was printed with the following temperatures and speeds: 1) 215 °C for the nozzle (printing temperature), 2) 60 °C for the plate (plate temperature), 3) 50 mm/s print speed, 4) 70 mm/s travel speed of the nozzle when idle (travel speed), and 5) +45°/-45° infill orientation (a sketch of how these parameters appear in the generated G-code follows below). An illustration of the realized elements of the electronics enclosure, on the 3D printer, is shown in Figure 6.
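As a rough illustration of how the slicer turns the parameters listed above into machine instructions, the following Python sketch emits a minimal G-code start block using the stated nozzle and bed temperatures and the print and travel speeds. It is a simplified assumption of what a slicer such as Cura generates; real G-code also encodes layer height, extrusion amounts, infill paths, and retraction, which are omitted here.

```python
# Minimal sketch: map the stated FDM parameters to a Marlin-style G-code preamble.
# Speeds in G-code are given in mm/min, so mm/s values are multiplied by 60.

PARAMS = {
    "nozzle_temp_c": 215,   # printing temperature
    "bed_temp_c": 60,       # plate temperature
    "print_speed_mm_s": 50,
    "travel_speed_mm_s": 70,
}

def gcode_preamble(p):
    return "\n".join([
        f"M140 S{p['bed_temp_c']}",        # start heating the bed
        f"M104 S{p['nozzle_temp_c']}",     # start heating the nozzle
        f"M190 S{p['bed_temp_c']}",        # wait for bed temperature
        f"M109 S{p['nozzle_temp_c']}",     # wait for nozzle temperature
        "G28",                             # home all axes
        "G92 E0",                          # reset the extruder position
        f"G1 F{p['travel_speed_mm_s'] * 60} X10 Y10 Z0.2",  # travel move
        f"G1 F{p['print_speed_mm_s'] * 60} X60 E5",         # first extrusion move
    ])

if __name__ == "__main__":
    print(gcode_preamble(PARAMS))
```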
Many irregularities can be seen in the pictures. The model cannot be used immediately; additional surface machining is necessary. Mechanical post-processing methods include manual grinding of parts, traditional machining, and finish polishing, while chemical processing methods include painting, coating, heating, and treatment with a chemical agent. In order to save plastic, 3D-printed parts are made with a solid shell that surrounds a porous (partially hollow) interior; to obtain better mechanical properties, chemical or thermal processing is needed. AM alone does not give the desired dimensional accuracy of the product, so it is sometimes necessary to machine the part to certain tolerances using conventional machining (CM), see Figure 7. Mechanical processing of the samples was performed with four sandpapers of different grits (P100, P180, P320, P400) with constant water cooling in order to prevent heating and melting of the plastic parts.
Figure 7: Additional machining on the lathe: thread (A) and contact surface (B).
The following was used in the process of mechanical processing: cooling water; compressed air for cleaning and drying surfaces; a fine grinding and polishing apparatus; and a device for dimensional control before and after processing. The methodology of additional surface treatment and finalization of the model is shown in Table 1. Manual polishing significantly increases the quality of the treated surfaces, but it can also reduce the designed dimensions. The process of sanding and polishing the electronics enclosure is shown in Figure 8. Finally, the electronics enclosure is painted, giving the finished model, see Figure 9.

DISCUSSION AND CONCLUSION

FDM printing allows users to be both designers and manufacturers for their own needs. Because conventional technologies (CT) often cannot meet expectations regarding the speed and cost of realizing models/prototypes, additive technologies (AT) are increasingly used instead. The paper presents an algorithm and procedures for introducing the 6R strategy into sustainable enterprise development. The algorithm introduces AM in order to improve manufacturing through the fast realization of models/prototypes, quality communication with customers, and quick reaction to the market. Waste (and its reuse) is also presented as an important resource in sustainable development. The reuse of obsolete products through redesign and remanufacturing gives a special focus to the business excellence of the enterprise and introduces a new business practice of designing products for multiple life cycles. AM allows designers to choose the technology that exactly suits their needs and meets the requirements of the prepared sample; however, the quality depends entirely on the selected parameters and materials. In the realization of the small electronics enclosure, care was taken to keep all elements of the previous design and to examine the cost-effectiveness of realizing the product using AM. This concerns the possibility of finishing, repairing, and changing the design of the existing product, as well as recycling it, all in order to protect the environment and minimize waste.
By detailed analysis and application of the algorithm in practice, enterprises would be able to:
- quickly manufacture new products or models with minimal losses of energy and materials and the least possible accumulation of waste;
- perform tests on the realized product regarding the connection of elements made of the same or different materials, with suggestions for improvements given in order to strengthen/stiffen the structure;
- launch the product faster, i.e., extend the life cycle of the product through the application of new (biodegradable) materials and technologies.
Glycaemic Response to Quality Protein Maize Grits

Background. Carbohydrates have varied rates of digestion and absorption that induce different hormonal and metabolic responses in the body. Given the abundance of carbohydrate sources in the Philippines, the determination of the glycaemic index (GI) of local foods may prove beneficial in promoting health and decreasing the risk of diabetes in the country. Methods. The GI of Quality Protein Maize (QPM) grits, milled rice, and the mixture of these two food items was determined in ten female subjects. Using a randomized crossover design, the control bread and three test foods were given on separate occasions after an overnight fast. Blood samples were collected through finger prick at time intervals of 0, 15, 30, 45, 60, 90, and 120 min and analyzed for glucose concentrations. Results. The computed incremental area under the glucose response curve (IAUC) varies significantly across test foods (P < .0379), with the pure QPM grits yielding the lowest IAUC relative to the control by 46.38. Resulting GI values of the test foods (bootstrapped) were 80.36 (SEM 14.24), 119.78 (SEM 18.81), and 93.17 (SEM 27.27) for pure QPM grits, milled rice, and rice-QPM grits mixture, respectively. Conclusion. Pure QPM corn grits has a lower glycaemic response compared to milled rice and the rice-corn grits mixture, which may be related in part to differences in their dietary fibre composition and physicochemical characteristics. Pure QPM corn grits may be a more health beneficial food for diabetic and hyperlipidemic individuals.

Introduction

Carbohydrates are the main source of energy for the human body. However, the rate of digestion and absorption of carbohydrates varies with the chemical components of the food source, the processing and storage conditions it was subjected to, and the other foods that were consumed in conjunction with the carbohydrate-rich food. As shown previously, even with a constant amount of available carbohydrates, significant variations may still be observed in the glucose response to different carbohydrate foods [1]. Thus, the glycaemic index (GI) has been developed to classify carbohydrate foods based on the rate of carbohydrate absorption. Carbohydrates that exhibit a low glucose response after ingestion have been shown to be beneficial in the management of diabetes and hyperlipidemia [2][3][4]. Given the abundance of carbohydrate-rich foods in the Philippines, knowledge of the GI may prove to be beneficial in the prevention and management of prevalent metabolic diseases, such as diabetes, in the country. However, only a few local studies have been conducted to determine the GI of local food items [5][6][7][8][9][10][11]. Corn is considered a secondary staple to rice in the Philippines. According to the National Nutrition Survey conducted by FNRI-DOST in 2003, corn-eating regions in the country usually consume this cereal in the form of grits, which are produced by milling white corn in a manner similar to rice. Both rice and corn are rich in carbohydrates, although their functional and physicochemical properties differ. The preference towards consumption of rice stems from various cultural, economic, and nutritional factors, one of which is the inferior protein quality of common corn varieties compared to rice. The development of Quality Protein Maize (QPM), a hybrid flint corn variety with improved content of the amino acids lysine and tryptophan, changes this "inadequacy" of maize.
The leveling off in the protein components of rice and corn may just be the solution to the search for a better alternative to importation given the limitation in the country's rice supply. Investigating possible benefits of consuming QPM corn may give the necessary push to promote the consumption and production of this indigenous food crop. Materials and Methods 2.1. Subjects. Ten apparently healthy female subjects from the College of Home Economics, University of the Philippines, Diliman, Quezon City, Philippines were selected based on the following criteria: age 18-30 years, no intake of metabolic drugs, and nonsmokers. Potential participants were contacted either through mobile phone or approached personally. Each individual was given a Subjects' Brochure, which enumerates the research objective, procedures, schedule, and other details of the study. A research staff also explained these information before each potential subject was asked for their comments and concerns on the study. Each potential subject was also interviewed for assessment of physical activity and was asked to fill in a three-day food intake recall form. Subjects with normal food intake and physical activity were included in the study. The subjects signed voluntary consent forms approved by the Department of Ethics Review Committee of the University of Santo Thomas, Manila, Philippines. Test Foods. QPM corn grits samples were obtained from the Institute of Plant Breeding (UP College of Agriculture, Los Baños, Philippines). Corn used was harvested after 65 to 70 days before it was dehusked and then sun dried for 2 to 3 days. After drying the corn samples, the dried corn kernels were mechanically removed from the cob and then sun dried for another one to two days to ensure that moisture content was less than 12 percent to inhibit fungal growth and aflatoxin contamination. Dried corn kernels were milled using a standard milling machine so that resulting total quantity of particles would amount to 30% of total weight of corn kernels processed. Rice samples of PSB RC72H (Mestizo1 rice) variety were obtained from the Philippine Rice Research Institute (Nueva Ecija, Philippines). After aging for 123 days, samples were harvested and then dehulled with a mechanical dehusker. Afterwards it was milled in a one-pass mill to produce the white rice. Twenty-five grams available carbohydrate portions of pure QPM grits, milled rice, and QPM grits mixture were used in the in vivo testing. These were prepared through boiling one hundred fifty grams of raw samples in water. For the pure QPM grits, the sample was soaked for 60 minutes in 325 grams of water and then boiled in the water used for soaking for a total of 35 minutes on a La Germania electric stove on medium setting for 12 minutes, then on low setting for 13 minutes, and lastly on simmer setting for 10 minutes. The 359 grams yield was divided into 117.4-gram pure QPM grits portions. On the other hand, milled rice was boiled in 240 grams of water for a total of 25 minutes on a La Germania electric stove on medium setting for 5 minutes, then on low setting for 10 minutes, and lastly on simmer setting for 10 minutes. The 350 grams yield was divided into 119.7-gram milled rice portions. 
Lastly, the mixture of 87 grams QPM corn grits and 58 grams milled rice variety was soaked for 30 minutes using 325 grams of water and was then boiled in the water used for soaking for a total of 35 minutes on a La Germania electric stove on medium setting for 11 minutes, on low setting for 14 minutes, and on simmer setting for 10 minutes. The 373 grams yield was divided into 85.9-gram portions of the mixture. The electric stove used was preheated on high setting for 2 minutes before cooking both test foods. The white bread, which was used as the standard for the glycaemic index testing, was prepared based on the formulation of Panlasigui and Thompson [11] which consists of all-purpose white flour (250 grams bleached, enriched, Pilsbury brand, Pilsbury Co., Philippines), lukewarm water (150 mL), refined white sugar (7 grams), iodized salt (1.25 grams), and active dry yeast (8 grams). The bread was baked using a standard method of mixing and then kneading, fermentation (30 minutes at 40 • C for the first rising of the dough and 1 hour and 340 minutes at room temperature for the second rising of the dough), and finally baking at 375 • C for 20 minutes. Cooked bread is divided into 50-gram portions. Protocol. Each subject was instructed to fast for 10-12 hours and refrain from any strenuous physical activity a day prior to the in vivo testing. During the test proper, the subjects were given a 10-15 minutes rest after their arrival before the fasting blood samples were obtained. The food sample assigned for the given day was taken within a 15-minute period and the subject's exact eating time was recorded. Each meal occasion was accompanied by 220-250 mL water which is made constant for each subject throughout the feeding sessions. Finger prick blood samples were obtained by gentle pressure at the finger tip then puncturing the skin with an autolancet (MediSense, Abbott Laboratories, Illinois, USA) at time intervals of 0 (FBG), 15, 30, 45, 60, 90, and 120 minutes through the assistance of a registered medical technologist from the UP Health Service in Diliman, Quezon City, Philippines. Approximately three to five drops of whole blood samples were collected and placed into 80 IU/ml soda lime glass microtubes that were sodiumheparinized (Vitrex, Modulohm A/S, Vasekaer 6-8, DK 2730 Herlev, Denmark). Samples were centrifuged using a Microtube Centrifuge (Vernitron Medical products, Inc., Carlsladt, New Jersey, USA) to isolate the plasma component of the blood. Ten microliters (10 µL) of isolated blood plasma samples were then pipetted into previously prepared and labeled test tubes containing 1.5 mL of blank (distilled water) and standard glucose oxidase (Mega diagnostics, LA, CA, USA 90012) reagents that were incubated in a water bath incubator (Chicago surgical and electrical Co, Melrose Park, Illinois) for 5 minutes at 37 • C. After the isolated blood plasma samples were pipetted, the respective tubes were mixed and again incubated for ten (10) minutes at 37 • C. Blood plasma parameter was analyzed for its glucose concentration using a Dialab DTN 410 Photometer (Boehringer Mannheim GmbH, Germany) with absorbance set at 500 nm. The area under the glucose response curve for each food was calculated geometrically [12]. 
The GI of each food was expressed as the mean glucose response to the test food as a percentage of that to the standard food taken by the same subject, and was determined using the following formula:

GI = (IAUC of the test food / IAUC of the standard food) × 100,   (1)

where IAUC is the incremental area under the glucose response curve. Proximate Analysis. The test foods were analyzed for total available carbohydrates (TACs), ash, moisture, crude fat, crude protein, and total dietary fibre (TDF). TAC was determined using the modified Clegg Anthrone method [13].

Results

Table 1 shows the characteristics of the subjects. The ten subjects were female, aged 19-22 years, and had a mean BMI of 19.4 (SEM 0.6) kg/m² at baseline. There were no significant changes in the subjects' anthropometric measurements between baseline and post-test. Table 2 shows the proximate composition of the test foods. Boiled rice-QPM grits (28.92%) had the highest TAC, followed by the boiled QPM corn grits (20.94%) and boiled rice (20.88%), respectively. On the other hand, milled rice had the highest crude fat (3.26%), crude protein (3.14%), and moisture (72.52%). Mean blood glucose concentration peaked at 30 minutes postprandial after the ingestion of the pure QPM grits and the QPM grits mixture. On the other hand, peak mean blood glucose concentration was achieved at 45 minutes postprandial after ingestion of milled rice. The computed incremental area under the glucose response curve (IAUC) varied significantly across test foods (P < .0379) (refer to Table 3). The IAUC of boiled rice was higher by 6.46 than that of white bread (control). The boiled rice-QPM mixture yielded a lower IAUC than the control by 44.89. The pure QPM grits, however, yielded the lowest IAUC relative to the control, by 46.38. The average glycaemic index for milled rice (119.89 (SEM 22.65)) was higher, while that of the pure QPM grits (80.29 (SEM 17.11)) was lower, than that of the control food. The mixed rice-QPM grits had a higher GI (91.29 (SEM 33.61)) than the pure QPM grits, but its GI value was still lower than that of the control food. The different subjects, however, exhibited varying glycaemic responses to the different test foods, resulting in high standard errors. To address these high standard errors of the glycaemic response, two alternative robust statistical methods (regression analysis and bootstrap) were applied. Since the initial blood glucose among the subjects varied each time they consumed the different test foods, a regression analysis was made with glycaemic index as the dependent variable. The initial blood glucose was considered as a regressor in addition to the dummy variables accounting for the different test foods (with the control food as the baseline). The test food-initial blood glucose interaction was also included. The resulting regression model was significant (P < .0177) with a coefficient of determination of 32%. The average glycaemic index was computed from the regression on the initial blood glucose, taking into account its interaction with the test foods (see Table 4). A resampling method (bootstrap) was also applied to analyze the possible bias introduced into the average glycaemic index by a few extreme glycaemic responses. For each test food, 500 replications were made, the bias was computed, and the average glycaemic index was adjusted for the bias. The estimates are summarized in Table 4.
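To make equation (1), the geometric IAUC calculation, and the bootstrap adjustment concrete, the following is a minimal Python sketch. It assumes the standard incremental convention in which only the area above the fasting (time-0) glucose level is counted using trapezoids, and a simple nonparametric resampling of subjects for the bias correction; the glucose and GI values shown are invented for illustration and are not data from this study.

```python
import random

def iauc(times, glucose):
    """Incremental area under the glucose curve: trapezoidal area above the
    fasting (time-0) level; portions below the fasting level are ignored
    (assumed convention for the 'geometric' calculation)."""
    base = glucose[0]
    area = 0.0
    for i in range(1, len(times)):
        h1 = max(glucose[i - 1] - base, 0.0)
        h2 = max(glucose[i] - base, 0.0)
        area += (h1 + h2) / 2.0 * (times[i] - times[i - 1])
    return area

def glycaemic_index(test, standard, times=(0, 15, 30, 45, 60, 90, 120)):
    """Equation (1): GI = IAUC(test food) / IAUC(standard food) x 100, per subject."""
    return iauc(times, test) / iauc(times, standard) * 100.0

def bootstrap_mean_gi(per_subject_gi, replications=500, seed=1):
    """Bias-corrected bootstrap estimate of the average GI: resample subjects,
    estimate the bias of the mean, and subtract it (sketch of the described approach)."""
    rng = random.Random(seed)
    n = len(per_subject_gi)
    observed = sum(per_subject_gi) / n
    boot_means = []
    for _ in range(replications):
        sample = [rng.choice(per_subject_gi) for _ in range(n)]
        boot_means.append(sum(sample) / n)
    bias = sum(boot_means) / replications - observed
    return observed - bias

# Invented example curves for one subject (mmol/L) and an invented GI list:
standard_curve = [4.5, 5.8, 7.0, 6.5, 6.0, 5.2, 4.8]   # white bread
test_curve     = [4.5, 5.3, 6.2, 6.0, 5.6, 5.0, 4.7]   # e.g. pure QPM grits
print(round(glycaemic_index(test_curve, standard_curve), 1))
print(round(bootstrap_mean_gi([80, 95, 70, 110, 85, 75, 90, 65, 100, 88]), 1))
```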
While the estimated average GI for the different test foods did not vary significantly across different estimation methods, the bootstrapped estimates yielded the lowest standard errors. Discussion This study showed that ingestion of pure QPM grits resulted in lower blood glucose response in healthy subjects compared to milled rice and the rice-corn grits mixture. Differences in the chemical composition and physicochemical properties of the test foods may have contributed to the differences in the glucose response observed. QPM grits have thick vitreous endosperms [14] and undergo rigorous drying in the conversion of kernels to grits that renders it difficult to gelatinize. Comparing the cooking time of the test foods, it can be seen that pure QPM grits and the rice-corn grits mixture took longer to cook than milled rice. As shown previously, milled rice had a shorter cooking time and higher volume expansion compared with brown rice. Milled rice has also been shown to have low amylograph viscosity peak and consistency, an indication that it can be easily hydrated and gelatinized during food processing [11]. Amylose analysis of the test foods showed that pure QPM grits and milled rice have comparable amylose contents-25.04 and 23.95 for milled rice and QPM grits, respectively. This supports the study of Panlasigui et al. [5] that foods with similar amylose may still exhibit varying rates of starch digestibility and blood glucose response. Although the fats and proteins may lower the glucose response to a food item, the negligible amounts of these nutrients present in each test food investigated would not have strongly affected the observed glucose responses. As shown by a previous study, about 23 grams of fat is needed for fat content to significantly affect the glucose response to a food item [15]. On the other hand, 20-30 grams of protein is needed to sufficiently affect the glycaemic responses [16][17][18]. Pure QPM grits have the highest dietary fiber content (6.0 grams/100.0 grams of QPM grits) among the test foods (see Table 2). Dietary fiber may have contributed to the lower glycaemic response in the pure QPM grits. As previously investigated, varying fibre content of foods may cause fluctuations in the absorption of dietary carbohydrate and, therefore, affect the GI [15,19]. Dietary fibre, depending on its type, acts either as a physical barrier or increases the viscosity of the mixture in the digestive tract so that digestion and absorption is slowed down [20]. Given that most foods contain more insoluble fibre, insoluble fibre was related more strongly to the GI than soluble fibre content [21]. Pure QPM grits have higher insoluble fiber content than soluble fiber [22]. The GI of the rice-QPM mixture was compared to its theoretical GI value computed using the GI values of the pure QPM grits and milled rice. The theoretical GI value of the rice-QPM grits mixture is 95.94 similar to the GI value (93.17) obtained in the in vivo testing, supporting the postulate that GI of mixed meals may be computed by determining the amount of total carbohydrates contributed by each food component and its corresponding GI values [20,23]. In conclusion, pure QPM corn grits have a lower glycaemic response compared to milled rice and the rice-corn grits mixture, which may be related in part to differences in their dietary fibre composition and physicochemical characteristics. Pure QPM corn grits may, therefore, be a more health beneficial food for diabetic and hyperlipidemic individuals. 
Nonstandard Abbreviations QPM: Quality protein maize GI: Glycaemic index IAUC: Incremental area under the glucose response curve TDF: Total dietary fibre TAC: Total available carbohydrates.
Cudratricusxanthone A Inhibits Lipid Accumulation and Expression of Inducible Nitric Oxide Synthase in 3T3-L1 Preadipocytes Cudratricusxanthone A (CTXA) is a natural bioactive compound extracted from the roots of Cudrania tricuspidata Bureau and has been shown to possess anti-inflammatory, anti-proliferative, and hepatoprotective activities. However, at present, anti-adipogenic and anti-inflammatory effects of CTXA on adipocytes remain unclear. In this study, we investigated the effects of CTXA on lipid accumulation and expression of inducible nitric oxide synthase (iNOS) and cyclooxygenase (COX)-2, two known inflammatory enzymes, in 3T3-L1 preadipocytes. Strikingly, CTXA at 10 µM markedly inhibited lipid accumulation and reduced triglyceride (TG) content during 3T3-L1 preadipocyte differentiation with no cytotoxicity. On mechanistic levels, CTXA at 10 µM suppressed not only expression levels of CCAAT/enhancer-binding protein-α (C/EBP-α), peroxisome proliferator-activated receptor-γ (PPAR-γ), fatty acid synthase (FAS), and perilipin A, but also phosphorylation levels of signal transducer and activator of transcription-3 (STAT-3) and STAT-5 during 3T3-L1 preadipocyte differentiation. In addition, CTXA at 10 µM up-regulated phosphorylation levels of cAMP-activated protein kinase (AMPK) while down-regulating expression and phosphorylation levels of acetyl-CoA carboxylase (ACC) during 3T3-L1 preadipocyte differentiation. Moreover, CTXA at 10 µM greatly attenuated tumor necrosis factor (TNF)-α-induced expression of iNOS, but not COX-2, in 3T3-L1 preadipocytes. These results collectively demonstrate that CTXA has strong anti-adipogenic and anti-inflammatory effects on 3T3-L1 cells through control of the expression and phosphorylation levels of C/EBP-α, PPAR-γ, FAS, ACC, perilipin A, STAT-3/5, AMPK, and iNOS. Introduction Obesity is a major health concern often deteriorating life expectancy and increasing risks of many human diseases, such as type 2 diabetes mellitus, cardiovascular diseases, hypertension, non-alcoholic fatty liver disease, osteoarthritis, and cancer [1]. The alarming fact that the incidence of obesity has elevated steadily in the last decades, and almost half of the world's population is obese or overweight [2] has prompted the need for identifying novel cost-effective interventions that are capable of controlling obesity with minimal side effects. Given that obesity is defined as an increase in body mass fat resulting from excessive preadipocyte differentiation in the human body [3,4], and also as a low or systemic chronic inflammation [5], any substance that inhibits excessive preadipocyte differentiation and inflammatory responses in (pre)adipocytes can be therefore considered as a potential anti-obesity agent. Preadipocyte differentiation, also called adipogenesis, is a multi-step process that occurs in the form of cellular, morphological, genetic, and biochemical changes. Through this process, fibroblast-like preadipocytes are converted into mature adipocytes that are filled with lipid droplets (LDs) [6][7][8]. A wealth of information indicates that many intracellular Figure 1A is the experimental protocol of 3T3-L1 preadipocyte differentiation used in this study. Initially, we investigated the effect of different concentrations of CTXA on cellular lipid accumulation during the differentiation of 3T3-L1 preadipocytes into adipocytes using Oil Red O staining. 
Of note, as shown in Figure 1B (Upper panels), CTXA concentration-dependently suppressed lipid accumulation in 3T3-L1 cells on D8 of differentiation. The CTXA's lipid-lowering effect on D8 of 3T3-L1 preadipocyte differentiation was also confirmed by microscopic observation (Low panels). Given that cellular lipids are mainly stored in the form of triglyceride (TG) in differentiated adipocytes [31], we next analyzed the effect of different concentrations of CTXA on cellular TG content on D8 of 3T3-L1 preadipocyte differentiation using an Adipo-red assay. As shown in Figure 1C, there was a dose-dependent reduction of cellular TG content in 3T3-L1 cells on D8 of differentiation. Next, we examined whether CTXA at the doses tested affects growth (survival) of 3T3-L1 cells on D8 of 3T3-L1 preadipocyte differentiation using cell count analysis. As shown in Figure 1D, while CTXA at 5 or 10 µM did not affect survival of 3T3-L1 cells, CTXA at 20 µM reduced these cells' survival by approximately 95%. Thus, the maximal lipid-lowering effect by CTXA at 20 µM on D8 of 3T3-L1 preadipocyte differentiation seemed to be attributable to its cytotoxicity. The chemical structure of CTXA is depicted in Figure 1E. Because of the strong lipid-lowering effect with no cytotoxicity on D8 of 3T3-L1 preadipocyte differentiation, we selected the 10 µM concentration of CTXA in further studies.

Next, we analyzed whether CTXA at 10 µM interferes with expression and phosphorylation of C/EBP-α, PPAR-γ, STAT-3, and STAT-5, key adipogenic transcription factors, during 3T3-L1 preadipocyte differentiation using Western blotting. As depicted in Figure 2A, CTXA at 10 µM almost completely inhibited expression of C/EBP-α and PPAR-γ at the protein level on D5 and D8 of 3T3-L1 preadipocyte differentiation. Moreover, CTXA at 10 µM could largely inhibit phosphorylation of STAT-3 and STAT-5 on D2 and D8 of 3T3-L1 preadipocyte differentiation. Triplicate experiments, as shown in Figure 2B, further demonstrated the ability of CTXA at 10 µM to significantly inhibit not only expression of C/EBP-α and PPAR-γ but also phosphorylation of STAT-3 and STAT-5 on D8 of 3T3-L1 preadipocyte differentiation. Densitometric data of Figure 2B for the expression level of C/EBP-α or PPAR-γ normalized to that of control actin, and phosphorylation level of STAT-3 or STAT-5 normalized to the expression level of total STAT-3 or STAT-5 on D8 of 3T3-L1 preadipocyte differentiation are shown in Figure 2C. As further shown in Figure 2D, data of real-time qPCR analysis from triplicate experiments for measurement of mRNA expression level of C/EBP-α or PPAR-γ normalized to that of control 18S rRNA revealed that CTXA at 10 µM significantly reduced transcripts of C/EBP-α and PPAR-γ on D8 of 3T3-L1 preadipocyte differentiation.
CTXA at 10 µM Downregulates Expression Level of FAS and Perilipin A during 3T3-L1 Preadipocyte Differentiation

Next, we investigated the effect of CTXA at 10 µM on expression of FAS, an enzyme responsible for fatty acid synthesis [11,32] and perilipin A, a lipid droplet-binding and stabilizing protein [13,33], during 3T3-L1 preadipocyte differentiation. As shown in Figure 3A, CTXA at 10 µM strongly inhibited FAS protein expression on D2, D5, and D8 of 3T3-L1 preadipocyte differentiation. CTXA at 10 µM also greatly blocked perilipin A protein expression on D5 and D8 of 3T3-L1 preadipocyte differentiation. Results of Western blotting from triplicate experiments, as shown in Figure 3B, also revealed the ability of CTXA at 10 µM to largely inhibit expression of FAS and perilipin A proteins on D8 of 3T3-L1 preadipocyte differentiation. Densitometric data of Figure 3B for protein expression level of FAS or perilipin A normalized to that of control actin on D8 of 3T3-L1 preadipocyte differentiation are shown in Figure 3C. As further shown in Figure 3D, results of real-time qPCR from triplicate experiments for mRNA expression level of FAS or perilipin A normalized to that of control 18S rRNA demonstrated that CTXA at 10 µM significantly reduced transcripts of FAS or perilipin A on D8 of 3T3-L1 preadipocyte differentiation. In addition, as shown in Figure 3E, CTXA at 10 µM could significantly inhibit leptin mRNA expression on D8 of 3T3-L1 preadipocyte differentiation.

CTXA at 10 µM Alters Phosphorylation and Expression Level of AMPKα, LKB1, and ACC during 3T3-L1 Preadipocyte Differentiation

AMPK is a heterotrimeric protein complex that is composed of α, β, and γ subunits [34], and plays a key role in cellular energy homeostasis [35]. There is much evidence that activation of AMPK inhibits lipid accumulation (adipogenesis) in 3T3-L1 preadipocyte differentiation [36]. The α subunit of AMPK (AMPKα) contains its catalytic domain where AMPK becomes activated when phosphorylation takes place at T172 [37][38][39]. This promptly led us to investigate the effect of CTXA at 10 µM on phosphorylation and expression level of AMPKα during 3T3-L1 preadipocyte differentiation.
Of interest, as depicted in Figure 4A, CTXA at 10 µM elevated phosphorylation (T172) of AMPKα on D2 and D8 of 3T3-L1 preadipocyte differentiation. In addition, CTXA at 10 µM could increase phosphorylation (S79) of ACC, a downstream effector of AMPK [38], on D2 of 3T3-L1 preadipocyte differentiation. Of further note, CTXA at 10 µM also largely reduced expression of ACC on D5 and D8 of 3T3-L1 preadipocyte differentiation. CTXA at 10 µM had no or little effect on phosphorylation (S428) and expression level of liver kinase B1 (LKB1), an upstream kinase of AMPKα [39], in 3T3-L1 preadipocyte differentiation at times applied. Results of Western blotting from triplicate experiments, as shown in Figure 4B, further revealed the ability of CTXA at 10 µM to elevate phosphorylation of AMPKα but decrease in phosphorylation and expression of ACC on D8 of 3T3-L1 preadipocyte differentiation. Densitometric data of Figure 4B for protein phosphorylation level of AMPKα, LKB1 and ACC normalized to expression level of total AMPKα, LKB1, and ACC on D8 of 3T3-L1 preadipocyte differentiation are shown in Figure 4C, respectively. As further shown in Figure 4D, results of real-time qPCR from triplicate experiments for mRNA expression level of ACC normalized to that of control 18S rRNA displayed that CTXA at 10 µM significantly down-regulated mRNA expression of ACC on D8 of 3T3-L1 preadipocyte differentiation.
CTXA at 10 µM Does Not Stimulate Glycerol Release and Phosphorylation of HSL in Mature 3T3-L1 Adipocytes

Next, we examined the effect of CTXA at 10 µM on lipolysis in differentiated (mature) 3T3-L1 cells. In this study, the CTXA's lipolysis-inducing effect was assessed by its ability to elevate glycerol release and hormone-sensitive lipase (HSL) phosphorylation (S563), and isoproterenol (ISO), a lipolysis-inducing agent [40], was used as a positive control. The experimental protocol for assessment of glycerol release and HSL phosphorylation is depicted in Figure 5A. As expected, ISO at 20 µM for 3 h markedly elevated glycerol content in the culture media of differentiated 3T3-L1 cells (Figure 5B). However, CTXA at 5 to 20 µM for 3 h did not elevate glycerol content in these cells. Furthermore, while ISO at 20 µM for 3 h strongly increased HSL phosphorylation in differentiated 3T3-L1 cells, CTXA at 10 µM for 3 h had no or little effect on it (Figure 5C). Expression level of control actin and total HSL remained unchanged under these experimental conditions.

CTXA at 10 µM Inhibits TNF-α-Induced Expression of iNOS, but Not COX-2, in 3T3-L1 Preadipocytes

To see any anti-inflammatory effect, we next investigated whether TNF-α at 10 ng/mL induces expression of COX-2 and iNOS in 3T3-L1 preadipocytes over time, and treatment with CTXA at 10 µM could interfere with it. As shown in Figure 6A,B, treatment with TNF-α at 10 ng/mL for 4 h maximally induced expression of COX-2 and iNOS at both protein and mRNA levels in 3T3-L1 preadipocytes, respectively. Of note, CTXA treatment at 10 µM for 4 h greatly suppressed TNF-α-induced protein and mRNA expression of iNOS, but not COX-2, in 3T3-L1 preadipocytes (Figure 6C,D). Furthermore, results of Western blotting in triplicate experiments, as shown in Figure 6E, confirmed the ability of CTXA at 10 µM to block TNF-α-induced iNOS protein expression in 3T3-L1 preadipocytes. Densitometric data of Figure 6E for protein expression level of iNOS normalized to that of control actin in 3T3-L1 preadipocytes treated with TNF-α in the absence or presence of CTXA at 10 µM for 4 h are shown in Figure 6F. It should be noted that there are non-specific proteins with a same or similar epitope (some amino acid sequences) recognized by the iNOS or COX-2 antibody used herein, which are indicated in Figure 6A,C, and/or 6E with arrows labeled with NSP (non-specific protein). This notion is based on the fact that expression of NSP is not inducible by TNF-α treatment at the times tested, given that iNOS and COX-2 are inducible enzymes whose expressions are rapidly inducible by extracellular stimuli including TNF-α herein.
Figure 6. Effects of CTXA on tumor necrosis factor (TNF)-α-induced expression level of inducible nitric oxide synthase (iNOS) and cyclooxygenase-2 (COX-2) in 3T3-L1 preadipocytes. (A,B) 3T3-L1 preadipocytes were treated with or without TNF-α (10 ng/mL) for designated periods. At each time point, whole cell lysate or total RNA was isolated and analyzed for iNOS and COX-2 by Western blotting or RT-PCR. (C,D) 3T3-L1 preadipocytes were treated with or without CTXA (10 µM) in the presence or absence of TNF-α (10 ng/mL) for 4 h. Whole cell lysate or total RNA was isolated and analyzed for iNOS and COX-2 by Western blotting or RT-PCR. (E) After the treatment mentioned above in (C,D), whole cell lysates were prepared from three independent experiments and analyzed by Western blotting with respective antibodies. (F) Densitometry analysis of (E). * p < 0.05 compared to vehicle control. NSP, non-specific protein.

Discussion

Although there is much evidence addressing that CTXA has anti-inflammatory, anticancerous, and anti-osteoclast differentiation activities [27][28][29], as of now, this natural substance's anti-obesity effect remains unclear. In this study, we report firstly that CTXA at 10 µM has strong anti-adipogenic and anti-inflammatory effects on 3T3-L1 preadipocytes, and these effects are mediated through modulation of the expression and phosphorylation level of C/EBP-α, PPAR-γ, FAS, ACC, perilipin A, STAT-3/5, AMPK, and iNOS. Through initial experiments, we have shown that CTXA at 10 µM greatly inhibits lipid accumulation and reduces TG content during the differentiation of 3T3-L1 preadipocyte into adipocytes, pointing out its anti-adipogenic (lipid-lowering) effect. As mentioned before, the differentiation process requires fibroblast-like preadipocytes to develop into lipid-laden mature (differentiated) adipocytes [6][7][8][41], and numerous adipogenic transcription factors, including CCAAT/enhancer-binding proteins (C/EBPs), peroxisome proliferator-activated receptors (PPARs), and signal transducer and activator of transcription (STAT) proteins participate in the process [9,10,42-44]. As of now, little is known about CTXA regulation of C/EBP-α, PPAR-γ, and STAT-3 in the preadipocyte differentiation process. Strikingly, we herein have shown that CTXA at 10 µM strongly downregulates C/EBP-α and PPAR-γ at both protein and mRNA levels in 3T3-L1 preadipocyte differentiation, which may further point out that CTXA-mediated C/EBP-α and PPAR-γ down-regulation is due to their transcriptional repression.
Moreover, the present study has demonstrated the ability of CTXA at 10 µM to largely interfere with phosphorylation of STAT-3 and STAT-5 without altering their total protein level in 3T3-L1 preadipocyte differentiation. These results collectively suggest that CTXA's anti-adipogenic effect on 3T3-L1 cells is closely linked to the reduced expression and phosphorylation levels of C/EBP-α, PPAR-γ, and STAT-3/5. It is known that the process of preadipocyte differentiation is accompanied by the synthesis of fatty acid, also called lipogenesis and stabilization of LDs [4,7,41]. FAS and ACC are important lipogenic enzymes involved in the synthesis of fatty acid [11,32]. Perilipin A is a protein that binds to and stabilizes cellular LDs, which thereby plays a crucial role in lipid accumulation or storage in preadipocyte differentiation process [13,33,45,46]. To date, CTXA regulation of FAS, ACC, and perilipin A in this preadipocyte differentiation process is unknown. Of note, we herein have shown that CTXA at 10 µM greatly lowers expression of FAS and perilipin A at their protein and mRNA level during 3T3-L1 preadipocyte differentiation at times tested (D2, D5, and/or D8). In the case of ACC, however, CTXA treatment at 10 µM substantially increases level of phosphorylated ACC, which are inactive forms of ACC [38], at the early (D2) stage of 3T3-L1 preadipocyte differentiation, while it largely reduces the level of total ACC at the middle (D5) and late (D8) stage of the cell differentiation. These results suggest that CTXA's anti-adipogenic/anti-lipogenic (lipid-lowering) effects are further attributable to the reduced expression of FAS, ACC, and perilipin A. AMPK is a metabolic protein that plays a pivotal role in the regulation of cellular energy homeostasis [14]. Until now, CXTA regulation of AMPK in the preadipocyte differentiation process has been unknown. Of interest, we herein have found that CTXA at 10 µM induces high AMPK phosphorylation on T172, which is an active form of AMPK [47,48], at the early (D2) and late (D8) stage of 3T3-L1 preadipocyte differentiation. Given that AMPK is an upstream regulator of ACC phosphorylation [38], and CTXA at 10 µM also induces ACC phosphorylation on D2 of 3T3-L1 preadipocyte differentiation herein, it is, therefore, conceivable that CTXA induces activation of AMPK, which is responsible for ACC phosphorylation at the early (D2) stage of 3T3-L1 preadipocyte differentiation. Further assuming that activation of AMPK inhibits preadipocyte differentiation [37,40,[47][48][49], it is likely that activation of AMPK may further contribute to CTXA's anti-adipogenic and anti-lipogenic effects. It has been previously shown that phosphorylation of AMPK is induced by several upstream kinases including LKB1 [39]. However, in the current study, CTXA at 10 µM is unable to induce LBK1 phosphorylation during 3T3-L1 preadipocyte differentiation process. These results address that CTXA-induced AMPK phosphorylation during 3T3-L1 preadipocyte differentiation herein is not through LKB1 but other upstream kinases and/or mechanisms. Given that AMPK phosphorylation is influenced by CaMKK [50] and also change in the ATP/AMP ratio [51], it will be interesting to examine, in the future, whether CTXA affects expression and activity of CaMKK and alters the ATP/AMP ratio in 3T3-L1 preadipocyte differentiation process. It is documented that lipolysis is a biological process in which ester bonds in TG are cleaved and generate free fatty acids and glycerol. 
Lipolysis is regarded as an alternative anti-obesity treatment [52]. Thus, any compound that enhances lipolysis can be used as potential anti-obesity therapeutics. In line with this, we herein have further tested the effect of CTXA on lipolysis in differentiated 3T3-L1 adipocytes. In the current study, we have observed that while 3 h treatment with ISO (20 µM), a lipolytic agent, leads to a big increase in glycerol release in differentiated 3T3-L1 adipocytes, CTXA at 5 to 20 µM does not induce glycerol release in these cells. HSL is a pivotal enzyme involved in lipolysis and its phosphorylation on several residues including S563 is indicative of an active form [53]. In this study, we have shown that while 3 h treatment with ISO (20 µM) induces high HSL phosphorylation on S563, that with CTXA at 5 to 20 µM does not. Taken together, these results point out that CTXA has no lipolytic effect on mature 3T3-L1 adipocytes. Obesity is alternatively defined as a chronic inflammatory disease [54]. It is known that (pre)adipocytes, a predominant cell type present in the adipose tissues, express and secrete not only adipokines (leptin, adiponectin, etc.) but also inflammatory cytokines (TNF-α, IL-1 β, IL-6, etc.) and enzymes (iNOS, COX-2, etc.) [21], which contribute to obesity inflammation. Studies have previously reported the ability of TNF-α to highly induce expression of COX-2 and iNOS in 3T3-L1 (pre)adipocytes [55]. However, until now, CTXA regulation of TNF-α-induced expression of COX-2 and iNOS in 3T3-L1 preadipocytes is not reported. In the present study, of interest CTXA at 10 µM for 4 h is able to largely interfere with TNF-α-induced expression of iNOS, but not COX-2, at both protein and mRNA level in 3T3-L1 preadipocytes, pointing out the specificity. These results thus advocate CTXA's anti-inflammatory effect on preadipocytes, which may further contribute to its anti-obesity effect. It should be noted that CTXA's inhibitory effects on adipogenesis and inflammatory reaction herein is seen in cultured 3T3-L1 cells. Although not directly related to the current study, we have previously reported that CTXA protects sepsis-triggered renal damage in mice by inhibiting induction of iNOS expression [56] and attenuates sepsis-induced liver injury partially through the reduced expression of inflammatory cytokines including TNFα [57], supporting this natural compound's anti-inflammatory effect via iNOS and TNF-α down-regulation in vivo. It is known that inflammation contributes to the expansion of white adipocyte tissues by increasing adipogenesis, though the primary event triggering this remains unclear [58]. Overexpression and hyper-activation of inflammatory mediators in white adipose tissues partly leads to the development of obesity. Given that CTXA strongly interferes with TNF-α-induced iNOS expression in 3T3-L1 preadipocytes herein, it is conceivable that CTXA may inhibit adipogenesis by alleviating or resolving the TNF-α and iNOS-mediated inflammation in (pre)adipocytes in white adipose tissues. At this moment, it is not sure whether CTXA exerts its anti-adipogenic and anti-inflammatory effects in vivo. Future studies are warranted to evaluate if CTXA could inhibit adipogenesis in high-fat diet induced obese or Ob/Ob mice and its lipid-lowering effect is further linked with reduction of inflammation. 
In summary, this is the first reporting that CTXA has strong anti-adipogenic and anti-inflammatory effects on 3T3-L1 cells, which are mediated through regulation of the expression and phosphorylation levels of C/EBP-α, PPAR-γ, STAT-3/5, FAS, ACC, perilipin A, AMPK, and iNOS. The present findings suggest that CTXA may be used as a potential anti-obesity natural substance. Materials Cudratricusxanthone A (CTXA) was isolated from the roots of Cudrania tricuspidata Bureau as reported previously [26]. Enhanced chemiluminescence (ECL) reagent was bought from Advansta (Menlo Park, CA, USA). 3-isobutyl-1-methylxanthine (IBMX), dex-amethasone, and insulin were purchased from Sigma (St. Louis, MO, USA). Adipo-red assay reagent kit was bought from Lonza (Basel, Switzerland). Free Glycerol Reagent and Oil Red O working solution was obtained from Sigma (St. Louis, MO, USA). Pierce BCA Protein Assay Kit was bought from Thermo Scientific (Rockford, IL, USA). Antibodies used in this study are listed in Table S1. In this study, the cell culture medium containing CTXA was vigorously vortexed for 30 s before addition to cells. After 48 h MDI-induction, the differentiation medium was replaced with DMEM supplemented with 10% FBS and 5 µg/mL I in the absence or presence of CTXA at the designated doses for additional 3 days. The cells were then fed every other day with DMEM containing 10% FBS in the absence or presence of CTXA at the indicated concentrations for additional 3 days. Oil Red O Staining On day 8 of differentiation, control or CTXA-treated 3T3-L1 cells were washed twice with PBS, fixed with 10% formaldehyde for 2 h at room temperature (RT), washed with 60% isopropanol, and dried completely. The fixed cells were then stained with Oil Red O working solution for 1 h at RT in the dark place and then washed twice with distilled water. Lipid droplets (LDs) accumulated in control or CTXA-treated 3T3-L1 cells were observed by light microscopy (Nikon, TS100, Tokyo, Japan). Cell Count Analysis 3T3-L1 preadipocytes were seeded in 24-well plates. Cells were similarly grown under the above-mentioned differentiation conditions. On day 8 of differentiation, control or CTXAtreated 3T3-L1 cells, which cannot be stained with trypan blue dye, were counted under the microscope. The cell count assay was done in triplicates. Data are mean ± standard error (SE) of three independent experiments. Quantification of Cellular TG Content Cells were similarly grown under the above-mentioned differentiation conditions. On day 8 of differentiation, intracellular TG content in control or CTXA-treated 3T3-L1 cells was measured using a commercially available Adipo-red assay reagent kit according to the manufacturer's instructions (Lonza, Basel, Switzerland). Fluorescence was measured on Victor3 (Perkin Elmer, Waltham, MA, USA) with an excitation at 485 nm and emission at 572 nm. Data are mean ± SE of three independent experiments. Western Blot Analysis Proteins (30 µg) were separated by SDS-PAGE (10%) and transferred onto polyvinylidene difluoride (PVDF) membranes (Millipore, Billerica, MA, USA). The membranes were washed with Tri-buffered saline (TBS) (10 mM Tris, 150 mM NaCl) supplemented with 0.05% (v/v) Tween 20 (TBST) followed by blocking with TBST containing 5% (w/v) non-fat dried milk. The membranes were incubated overnight with specific primary antibodies listed in Table S1 at 4 • C. The membranes were washed three times with TBST at RT and then exposed to secondary antibodies coupled to horseradish peroxidase for 2 h at RT. 
The membranes were washed three times with TBST at RT. Immunoreactivities were detected by ECL reagents (Advansta, San Jose, CA, USA). Equal protein loading was assessed by the expression level of actin protein. Quantitative Real-Time RT-PCR Total cellular RNA in control or CTXA-treated 3T3-L1 cells was isolated with the RNAiso Plus (TaKaRa, Kusatsu, Shiga, Japan). Three µg of total RNA was used to prepare complementary DNA (cDNA) using a random hexadeoxynucleotide primer and reverse transcriptase. SYBR green (TaKaRa, Kusatsu, Shiga, Japan) was used to quantitatively determine transcript levels of genes with LightCycler96 Machine (Roche, Mannheim, Germany). PCR reactions were run in triplicate for each sample, and transcript levels of each gene were normalized to the expression level of 18S rRNA. Primer sequences used in this study are listed in Table S2. Data are mean ± SE of three independent experiments. Reverse-Transcription Polymerase Chain Reaction (RT-PCR) Three µg of total RNA were transcribed, the same method as in Section 4.8. The above-prepared cDNA was amplified by PCR with the primers listed in Table S3. The expression level of actin mRNA was used as an internal control as well as loading control. Measurement of Glycerol Content Differentiated 3T3-L1 adipocytes (D8) were serum-starved for 2 h and incubated with different concentrations (5, 10, and 20 µM) of CTXA or isoproterenol (ISO, 20 µM), a known lipolysis inducer, for additional 3 h. The culture medium was saved, and glycerol content was measured by a free glycerol reagent according to the manufacturer's instructions (Sigma). Absorbance was measured at a wavelength of 540 nm using the microplate reader. The assay was done in triplicates. Data are mean ± SE of three independent experiments. Statistical Analyses Cell count analysis was performed in triplicate and repeated three times. Data are expressed as mean ± SE. Statistical analysis was performed using SPSS v.11.5 software (SPSS, Inc. Chicago, IL, USA). Data were subjected to one-way ANOVA and Student t-test, followed by Dunnett's post hoc test. p < 0.05 was considered to indicate statistically significant differences.
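The methods above state only that transcript levels measured by SYBR green real-time qPCR were normalized to 18S rRNA. A common way to do this is the 2^(-ΔΔCt) relative quantification; the short Python sketch below illustrates that approach under this assumption. The Ct values are invented for illustration and are not data from the study.

```python
# Hedged sketch of relative qPCR quantification normalized to 18S rRNA,
# assuming the common 2^(-ΔΔCt) method (the paper does not specify the exact formula).

def relative_expression(ct_gene, ct_18s, ct_gene_ctrl, ct_18s_ctrl):
    """Fold change of a target gene vs. the vehicle control, normalized to 18S rRNA."""
    d_ct_sample = ct_gene - ct_18s              # ΔCt in the CTXA-treated sample
    d_ct_control = ct_gene_ctrl - ct_18s_ctrl   # ΔCt in the vehicle control
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Invented Ct values for illustration only:
fold_ppar_gamma = relative_expression(ct_gene=27.5, ct_18s=11.0,
                                      ct_gene_ctrl=24.8, ct_18s_ctrl=11.1)
print(f"PPAR-gamma relative to control: {fold_ppar_gamma:.2f}-fold")
```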
168997160
s2orc/train
v2
2019-05-30T13:21:13.082Z
2017-11-27T00:00:00.000Z
Between Industry and the Environment: Chemical Governance in France, 1770-1830

Compound Histories: Materials, Governance and Production, 1760-1840 explores the intertwined realms of production, governance and materials, placing chemists and chemistry at the center of processes most closely identified with the construction of the modern world. Chemical manufacturing drastically changed industrial processes. This not only had adverse effects on workers' health, it more broadly altered European societies' relationship with their environment.4 Revolutionary events amplified this process by freeing the productive sphere from a number of constraints, encouraging all kinds of technical improvements and giving chemists a crucial role in matters of governance. This essay examines how chemists contributed to the technological reorganization in France at the end of the eighteenth and the beginning of the nineteenth century, how they justified using potentially harmful or polluting processes by stating that this would contribute to national prosperity, and how the idea of improvement helped legally and rhetorically to build a production regime that disqualified traditional precautionary attitudes to certain artisanal and industrial processes. This resulted in a new regime of environmental governance devoted to the advancement of chemistry and industrial production.

The Acids Revolution and Value Shift

In the 1770s in France, a silent revolution took place in the relationship between chemical production and both its environment and medicine. Alain Corbin has shown that this decade was a turning point in medical and olfactory attitudes towards certain products.5 Broadening this line of enquiry by considering the art of governing populations, it appears that chemistry played a crucial role in social and political representations as well as in governance systems. Previously, faced with the hazards, nuisances and disadvantages involved, regulatory authorities had been wary of laboratory and artisanal chemistry. The police, who traditionally saw to matters of public health and community safety and comfort, particularly resisted the use of aggressive acids. Reflecting this distrust, several trials took place in Paris against craftsmen who made nitric acid, the only strong acid produced on an industrial scale before 1770, known then as aqua fortis. In 1768, for example, Police Superintendent Jean-Baptiste Lemaire, with the backing of the Faculty of Medicine, summoned a nitric acid distiller who operated in the city center before the police court on a charge of endangering the public's health.6 Under the continued influence of the miasma theory, police protocol through the end of the ancien régime called for keeping close tabs on manufacturers of aqua fortis and acids, viewing them as sources of ill health and pollution.7 Workshops and factories that offended the senses and contaminated the air and water underwent strict preventive investigations known as commodo et incommodo.8 The manufacturing of chemicals, and acids in particular, was banned from cities and often carried out in small-scale and home-based production facilities, where hazards were nonetheless significant, as the case of nitric acid reveals. Before the growth of sulfuric acid, nitric acid was used in most industries, from tanneries to metal works.9 A key product for industrialization and highly corrosive, it was made in Paris beyond the Porte Saint-Denis in small isolated workshops guarded by the police.
Despite their product's fundamental contribution to industrialization, these spaces remained untouched by large capitalist investment and exuded a sort of toxic domesticity. In the 1770s, however, a new way of seeing chemistry was emerging. Of course, even chemists generally recognized nitric acid's corrosive nature. In 1773, describing the art of the aqua fortis distiller in Description sur l'art du distillateur d'eau forte, Jacques-François Demachy described its "suffocating fumes" and very dangerous manufacturing processes.10 The accompanying illustration, however, reveals a purpose quite other than promoting care when working with dangerous substances. A worker is pictured only to show the scale of the place, which is depicted without any chemical substances; the devices shown were to be understood, not experienced, and the heat, hazards and acid-soaked atmosphere were hidden to suggest an idealized vision of technical know-how.11 There was neither activity nor matter, just production tools, which were the pedagogical focus of this book's representation of work. (See figure 7.1) Just like the Description des arts et métiers commissioned by the Academy of Science, which included the study of aqua fortis distillers, the Encyclopédie's plates were based on facilities in Paris: their representations of work reflected a technological and universal order that wished to discipline bodies and become free from the constraints of particular locations.12 This was a world ruled by scientists and technicians, who increasingly imposed their authority on the world of craftsmen and related physical practices. The stakes were all the higher because acids were a key industrial product, and government had begun ardently to promote acid manufacturing. The main change came with sulfuric acid production. Despite having similar uses to nitric acid, sulfuric acid was only produced in small quantities before the late 1760s, mainly in laboratories where it was condensed in expensive and delicate glass jars during the final production stage. Sulfuric acid was absolutely essential for cotton printing, on which the government had recently lifted its ban in 1759. Simultaneously, in the United Kingdom, John Roebuck broke new ground by using lead chambers to condense sulfuric acid. The room-sized lead-lined chambers allowed sulfuric acid production on an industrial scale, which soon challenged nitric acid's preeminent position. The technology was introduced in France by the Englishman John Holker, a factory inspector employed by the French monarchy, who in 1768 set up a sulfuric acid factory next to his printed cotton factory in a suburb of Rouen.13 Over a period of some months, the acid-laden gases discharged by the chambers caused breathing problems for neighbours and damaged surrounding vegetation.14 According to police jurisprudence, this kind of nuisance was not tolerated near homes and, in 1772, Holker was prosecuted in France's first great industrial pollution trial. After several months of proceedings in the Parlement de Rouen (then called Conseil supérieur), the accused parties, supported by Jean-Charles Trudaine, the Commerce Director, obtained a hearing at the royal finance council.
There Trudaine had to argue against Minister Henri Bertin, a former Paris Lieutenant-General from 1757 to 1759.15 Economic interest prevailed over Bertin's arguments: in September 1774, the plaintiffs' case was dismissed and henceforth no one was allowed to trouble or disrupt the factory's operation.16 The lead chamber was therefore not only a technological development: it occasioned a shift in the order of industrial and environmental governance. Firstly, it required major investment, which made any production stoppage problematic. Secondly, it was supposed to be a perfect device that replaced multiple operations by the workers with a simple system in which leaks could be better controlled. The same argument was used for both health benefits and economic profits, as any leak was treated as a loss of value.17 Lastly, it led to a change in the representation of sulfuric acid manufacturing, presented from then on through its technology, such as by a technical drawing or a model showing only the mechanism's external envelope. Devices appeared, in the first representations of this kind of factory, like magical boxes where everything took place according to the scientific processes of physics and chemistry. Through representations of the working world and especially of artisanal and industrial chemistry, the last ancien régime decades witnessed the inevitable fading out of the proximity of arts and crafts. In its place arose a technical, disembodied order that would celebrate technical drawings during the nineteenth century, the seeds of which were already present in the encyclopaedic initiative and in scientific encouragement.18 While chemistry transformed the governance of industry and especially the government's attitude to nuisances, the root causes of this change should be sought in the government's economic policy as well as in the changes chemists were introducing to medical aetiology. The groundwork was laid by the chemist Louis-Bernard Guyton de Morveau from Dijon. In March 1773, he was contacted by the Dijon Cathedral's authorities, who could not get rid of the mephitic stench emanating from the decaying corpses in one of the building's vaults. Applying the theory that ammonia, whose presence could be deduced from the smell of decay, combines with an acid to produce a neutral salt, he fumigated the vault with muriatic (hydrochloric) acid and managed to neutralise the smell. In the medical community, among which the miasma theory was predominant, this removal of a smell was considered a victory over putrid infection and the experiment had a huge impact.19 It was the first time acid fumigation was used in France as a way of controlling fermentation and its smell. The novel procedure broke with traditional conceptions about the corrosive and dangerous nature of acids. Until then, acids had never been thought of as a disinfectant; instead physicians recommended fumigation with odoriferous herbs, the spraying of vinegar or the starting of a fire or a powder explosion to disperse and destroy miasmas. The fact that acid fumigation was not widely taken up, at least not immediately, is not important.
The significance of these experiments and the publicity surrounding them in 1773 and 1774 was not that they immediately led to routine therapeutic use, but that they profoundly altered the perception of acids, a product that was crucial for industrial development. Because acid promotion was at the heart of a governmental scheme to encourage production in order to boost industrial development, Guyton de Morveau's experiments were a godsend and allowed medicine to progress hand in hand with economic development. In 1774, Vicq d' Azyr prescribed acid fumigation to treat epizootic diseases in the south of France, and the following year, the academicians Etienne Mignot de Montigny and Philibert Trudaine de Montigny also recommended this disinfection method in two separate notices and enquiries.20 The chemist Antoine Parmentier, senior scientific advisor to the police lieutenant-general, observed that "acid vapours" combined with other elements in the air to "contribute to its cleanliness."21 He further extolled the virtues of "spirit, acid and corrosive fluids, which could be released to destroy or neutralise the miasma supposed to be dispersed in the air."22 The link between medicine and chemistry was strengthened specifically during this decade: one after the other, the physicians Claude Berthollet, Antoine Fourcroy and Jean-Antoine Chaptal stopped practising medicine to study chemistry, and all played a predominant role in the development of industrial chemistry, especially in relation to acids. Fourcroy became an expert and regular government advisor, assessing nuisances caused by chemical factories. For instance, he was commissioned by the Royal Society of Medicine in 1783 to write a report on a sulfuric acid factory in Rouen. In this report, he strongly defended the manufacturer, dismissing his opponents as prejudiced and ignorant about chemistry.23 From then on, the Bureau du Commerce (Trade Office) relied on these new ideas to encourage and at times force the establishment of acid factories in close proximity to cities. To override public objections, trade officers used a combination of medical and chemical arguments, claiming that "sulphur vapours, far from being hazardous, are very healthy. They purify unhealthy air. They prevent epidemics."24 This was a reasonable stance to take, especially as the chemists Macquer and then, from 1784, Berthollet were members of the Bureau and greatly contributed to spreading these ideas. Chemistry not only contributed to the idea of improvement in arts and crafts, but also in society and the economy. Many chemical substances were used to produce semi-luxury and luxury goods, especially in the non-ferrous metal industry. This was the case, for example, for silver and gold plated products and in the new platinum industry. In a letter to Guyton de Morveau, dated November 1786, Lavoisier mentioned that he was working with the innovative gold and silversmith Jacques Daumy. 
To treat platinum for craftsmen and manufacturers, they refined, dissolved, precipitated and revived the metal with hydrochloric and nitric acid, ammonium chloride, borax, lead, bismuth, antimony and arsenic.25 More generally, the integration of new chemistry with luxury goods production occurred through precision metalwork on precious metals, "the artistry of which was perfected through very delicate chemical operations and relatively challenging processes for the workers."26 The new Paris Mint, built between 1771 and 1775, served as a laboratory, not only for making coins but also for mastering the chemistry behind refining, cupellation and alloying assays to make all kinds of gold and silverwork pieces. In this field, with an argument similar to that used for sulfuric acid, the matter of goldbeating was raised before the royal council in 1773. Gold beaters in Lyons were accused by the police of using furnaces within the city, as well as treating gold with antimony and corrosive sublimate (a mercury compound), two dangerous substances. Both activities violated manufacturing and public health laws. The beaters defended themselves by appealing to the king and arguing a number of economic points: to uphold the restrictions and the "broadly prohibitive law" would condemn their industry to decline; and it was only by violating restrictions "contrary to the public interest" that their "art has improved." According to them, violations were "brutal procedures enforced by an uninformed police officer." The king was convinced and agreed to authorize the use of furnaces as well as antimony and mercury for metal refining, "to support the main factories in Lyons," by an order of the Council dated 29 April 1773. The order's preamble stated that the petitioners had "a duty to preserve their industry for the state and to perfect it."27 Chemistry thus contributed to transforming physicians' and scientists' perception of mineral acids and other chemical substances, which until then had been feared for their corrosive effects. It demonstrated the medical usefulness of these acids, thereby helping to overcome the usual precautions and spurring their industrial use. This reversal had an effect on industrial nuisance policy in the medium term. During the Revolution, the Consulate and the Imperial years, it translated into fundamental reports and regulations, which tied medical expertise, chemistry and industrial development together.

Chemical Governance and the Environment (1789-1810)

In 1791 liberalism, which was already perceptible at the end of the ancien régime, inspired several steps to facilitate the setting up of industries whatever their nuisances. While the disruption that occurred in 1789 implicitly resulted in more freedom for industrialists, who took advantage of the dismantling of former regulatory institutions, the new legislation permanently released industry from several controlling regulations.28 Commodo et incommodo investigations were stopped, and the d'Allarde Law of March 1791 abolished arts and crafts guilds and their statutes.29 In September 1791, the Bureau and the industry inspectorate were dismantled. In October 1791, letters patent granting exclusive privileges were abolished, which did away with preliminary investigations in use under the ancien régime. Consequently, industrialists were free to set up factories wherever they wanted and manufacture products using whichever processes they wished. Legislators ruled that the courts only had jurisdiction to address property damage.
However, the revolutionary period was also characterized by a strengthening of the value shift occurring in the public interest domain. Public interest was no longer concerned first and foremost with safeguarding public health, but was permanently associated with economic development. Chemists became the new official experts on assessing pollution and contributed to the policies of the successive republican governments. Thus in 1791, when the Academy of Science investigated the pollution caused by an ammonium chloride factory established in the middle of a populated neighbourhood near Valenciennes, the report's authors (chemists Louis Cadet, Fourcroy and Berthollet) conceded that pollution had disadvantages, but considered that the smoke could be tolerated in the interest of national industry and general welfare.30 A few flagship products illustrate the involvement of chemists in industry. In addition to armaments manufacturing, leather, copper and pigments are worth noting. From the autumn of 1793, the revolutionary government was looking for a way to produce leather goods for the troops as fast as possible and entrusted the task to the chemists. The Comité de salut public instructed Berthollet "to take charge of tanning improvement," and named Armand Seguin to conduct several experiments. Fourcroy praised Seguin's "revolutionary" tanning method, which involved replacing previously used weak organic acids with a concentrated solution of sulfuric acid, in his report to the Convention, noting that the new process sped up manufacturing considerably.33 The new method was employed at the state-financed tannery established in late 1794 on the Sèvres Island in the Paris suburbs, with used acid discharged into the Seine. A similar mindset was applied to copper production. From 1791, the government requested gold and silversmith Daumy to melt and refine bronze bells to make coins, and then cannons, in a new factory on the Île de la Cité using chemical processes requiring large amounts of nitric, sulfuric and muriatic acid.34 Here too, toxic remains were discharged into the river. The example of minium, a lead-based pigment used to make porcelain, shows how chemists used pollution charges to promote industrial innovation. Neighbours alleged that the lead oxide discharged from a minium factory in the Parisian neighbourhood of Bercy in June 1793 polluted the area. Simultaneous to Bercy's council banning the factory, the government entrusted an expert report to the chemists Pelletier and Petit, who argued that the problem could be reduced by improving manufacturing processes. Guyton led a second inspection, with the understanding that minium production was "valuable for the Republic" and "useful for arts workshops." 
Fourcroy, then a National Convention member, advocated that the owner should be "protected in his factory given that minium could no longer be procured in Britain or Holland."35 Confirming that there was a public health issue, Guyton's report resulted in an order to demolish the factory, but the owner was encouraged to improve his manufacturing processes with the help of well-placed chemists and physicians, who also lobbied successfully for generous government compensation to rebuild the factory.36 This case exemplifies what became a pattern of technical improvement under the guise of chemical scientific expertise, initially only seen with sulfuric acid factories, now fixed by Guyton de Morveau and Fourcroy.37 From the Napoleonic regime onwards, members of the Conseil de salubrité would take it upon themselves to make this the core of environmental regulation. Two important considerations emerged from chemists' involvement: public interest was equated with economic development, and technical solutions were proffered as the best way to reduce nuisances from craft production. It thereby became possible to divest the traditional police of its prerogative powers and to bypass the judicial reasoning of the ancien régime. After peace returned in 1795, France's economic expansion was driven by its chemical industry. In Paris alone, dozens of factories were working inside the city walls and suburbs. Growth was especially embodied in four flagship plants. Between 1802 and 1804, Chaptal worked to build a coherent framework to serve industry. He began by founding the Conseil de salubrité in 1802, an institution with scientific expertise -mainly chemists with a soft spot for industry -to advise the Parisian authorities. In agreement with the owners of the factories and workshops, members often denied that industrial fumes were noxious or deleterious to plaintiffs' health. In the case of chemical factories, they pointed out that the waste gases were "valuable" and that it was in the interest of the manufacturer to prevent them from escaping. Pollution, thus, was construed as the result of unintended accidents rather than daily practice.41 Meanwhile, economic affairs were entrusted to new or reorganized institutions, such as the Mint, which became a veritable laboratory for testing. However, everywhere in France as in Paris, trials against owners of chemical factories accused of pollution threatened to disrupt steady industrial production. After Chaptal was replaced by Jean-Baptiste de Champagny as the Interior Minister in August 1804, the authorities contemplated a national response to this recurring issue.
In November 1804, the new Minister asked the French institute "about factories exhaling an obnoxious smell and the risk that they posed for public health"; the institute entrusted the report to Guyton de Morveau and Chaptal.44 A second report followed in 1809.45 Together they provided the basis for the law of 1810 on polluting industries.46 The 1804 report argued that complaints were not necessarily valid, drawing a distinction between industries with processes based on organic putrefaction, which released "smells that were disturbing or toxic fumes," and those with processes based on fire, which emitted vapours or gases that were uncomfortable to breathe, but usually only inconvenient. In particular, factories that were well run might release an obnoxious, but certainly not a harmful, smell. As a matter of fact, they wrote, the smell released by sulfuric acid factories "was not dangerous in the least for the workers who breathed the smell daily, and no neighbours' complaint could be deemed well founded." As for nitric and hydrochloric acid factories, their characteristic smell could not affect human breathing; the men "who work there every day were not at all inconvenienced and it would be very wrong of the neighbours to complain." Contemporary observers were less sanguine: workers exposed to such vapours could "writhe on the floor in pain; often these first effects of oxy-muriatic acid can cause even serious illnesses."48 In fact, Chaptal knew that occupational health was at stake in the workplace. In 1798, in his Essay, he argued that "the various tasks in a workshop are not all equally easy or pleasant; and since young men are too often minded to refuse difficult or repulsive tasks, a coercive force is needed to compel them to carry out these tasks and this force can only be found in the ties that bind them to the workshop and keep them at the disposal of their superiors."49 Politics and productivity won the day in his view. The 1804 report's fundamental stance was that the central government needed to protect France's chemical industries. Obstructions "would be at once unfair, persecutory, harmful to the advancement of the arts and would not address the harm caused by the operation." Chaptal and Guyton thereby turned the Minister's question on its head, moving away from a public health issue to a concern of political economy by defining an entirely new program: "[P]rosperity of the crafts absolutely requires that boundaries are set to put an end to arbitrary decisions by magistrates by drawing a circle around industrialists, inside which they will be able to ply their trade freely and securely."50 The 1809 report followed similar reasoning, but the context had changed. On the one hand, since its foundation in 1802 and its specialization in industrial affairs in 1806 (with chemists Deyeux and Cadet de Gassincourt as its authorized experts), the Paris Conseil de salubrité had acquired an undeniable legitimacy.
47 Quotations from "Rapport" (1804) (see note 44). 48 Robert O'Reilly, Essai sur le blanchiment (Paris: Bureau des Annales des arts et manufactures, 1801), 99. 49 Chaptal, Essai, pp. 9-10 (see note 39). 50 Quotations from "Rapport" (1804) (see note 44).
On the other hand, for several months, an ongoing problem had been caused by soda factories, in which sea salt was broken down by sulfuric acid using the Leblanc process, discharging large quantities of muriatic acid. Several soda plants, managed by distinguished chemists who would become members of the Conseil de salubrité or were very close to them, were built in the Parisian suburbs after 1800. The irreversible damage caused by acid vapours and the utter destruction of crops and orchards around these factories was obvious. Faced with a fresh spate of pollution cases in 1809, the Minister was forced to commission a second report from the institute. The new committee membership had a similar "industrialist" flavor: alongside Chaptal and Guyton de Morveau, the entrepreneurs Fourcroy and Vauquelin also owned a sizeable chemical factory in the center of Paris, while the chemist Deyeux made no bones about his industry bias in the Conseil de salubrité. The Minister urged its authors to strike a balance between the interests of industrialists and those of neighbouring property owners. No longer simply cast as victims, industrialists were required to choose factory locations carefully. The report's conclusion thus called for a consensus, proposing to group industries into three classes according to their degree of nuisance. The chemists suggested introducing specific administrative enquiries for the purpose of authorizing factories in each group, to pre-empt most pollution problems. However, the spirit of Guyton's 1793 report on minium was not forgotten; on the contrary, the report promoted technical improvement for the chemical industry as a means of moving from one class to another to lighten constraints and government control. These conclusions were included in the law of October 1810 on insalubrious industries.

Pollution and Governance through Chemistry

The decree of 1810 aimed to establish a regulatory framework by separating industries into three categories depending on their level of noxiousness. A great deal was at stake in how a factory was categorized; being moved out of the first class meant that the factory was no longer considered noxious and could avoid the Conseil d'Etat's long and strict authorisation procedure. The decree was therefore supposed to promote innovation and the perfection (a word very often used) of processes.51 Conseil de salubrité members, who completely supported their founder's industrialism, were soon convinced by the principle of process improvement as a way of avoiding production restrictions. From 1811, the Conseil de salubrité linked industrial improvement and public health. In the years after the decree was first implemented, Conseil de salubrité members expressly encouraged the building of chemical plants in Paris, as shown by numerous reports supporting the four flagship factories mentioned above. These factories belonged to the first class according to the decree, but had been set up prior to it. Their assessment by the Conseil de salubrité was spurred by complaints from neighbours. Impressed by these magnificent factories for which substantial capital had been raised, the Conseil systematically ruled in their favor.
51 Thomas Le Roux, "La chimie, support du développement de l'industrie perfectionnée sous la Révolution et l'Empire," in Natacha Coquery, ed., Les progrès de l'industrie perfectionnée (Toulouse: Presses Universitaires du Midi, 2017), 26-35.
Complainants were discredited as reflecting the much-admired entrepreneurs' reverse image, their complaints deemed even less reasonable because at each inspection, improvements were observed. To explain why complaints persisted, the Conseil blamed exuded fumes on accidents, themselves considered rare and due to worker negligence, an increasingly standard response from the nineteenth century. Hopes about further improvement rested on the wager that scientific theorizing and laboratory tests could and would be confirmed on an industrial scale. Despite multiple protests, none of the main factories were threatened with closure. Instead, the chemical industry became a pillar for industrial governance, with chemists and other scientists given a crucial role. On one hand, they were granted authority through claims of how they stimulated further industrial innovations. On the other hand, they were asked to exercise that authority as arbiters of the governmentally sponsored drive for national prosperity and perfection of the arts while attending to matters of public health. Like Achilles' spear, chemistry was thus poised to "cure the wound it had inflicted."52 In calling for grouping insalubrious industries together in certain areas, for example, the Conseil de salubrité member Parent-Duchâtelet showed the way: "[A] special government official will be able to supervise them effectively and implement the conditions required to ensure public health. We stress the importance of the latter point, to show that large manufacturing centres will not become, as we might have feared, sites of infection by expelling their poisonous atmosphere far away, but will contribute to the advancement and sanitizing of factories, and perhaps also to the improvement of the arts."53 This sanitizing by chemistry was carried out in several ways, depending on the industry, through disinfection, smoke consumption or condensation. In industries using putrescible matter, "disinfection" was one of the preferred means of applying the recommended procedures. The first large-scale trials were carried out in Parisian gut factories using chlorinated products, in a decisive battle against putrid infection. In 1820, the Société d'encouragement pour l'industrie nationale created an award for manufacturers who could dress guts without prolonged maceration or noxious smells. The model gut factory in Clichy near Paris became a site for testing disinfection, using the new method of the pharmacist Antoine Germain Labarraque. The guts were steeped in a soda chloride bath, which removed the smell straight away. Though expensive, the method was quicker than the old one and succeeded in sanitizing the factory. In October 1822, Labarraque was awarded the prize and the Conseil de salubrité recommended the method to every new gut factory, assuming that it would also be adopted in older factories in a few years.54 The "disinfecting" properties of acids were also put to use, thanks to their powers of decomposition. Darcet tested the use of sulfuric acid himself for melting tallow in the new Parisian slaughterhouses after 1818. In the 1820s, the acid was also used to purify oils in many Parisian workshops, distilleries and potato starch factories, where it immediately turned starch to syrup, and in beet sugar refineries, where it prevented decay.
Darcet began to use muriatic acid in 1815 to extract gelatine from bones, and encouraged strong glue manufacturers to adopt his method.55 With regard to smoke consumption in furnaces, he was once more at the heart of technological change to cut down the amount of industrial smoke. To reduce the incidence of industrial smoke increasingly criticized by city dwellers, especially as the use of fossil coal had begun to spread in Parisian industries, the Conseil d'Etat strove to recommend the construction of smokeless furnaces. Having witnessed the first lasting attempt to build a smokeless furnace at the mint in 1808, Darcet took up this type of smoke-"burning" furnace with improved combustion and perfected the technology. Finally, the expansion of the chemical industry in the Paris region forced manufacturers to take technical measures to preserve the surrounding areas. Condensing, absorbing, dissolving and closed-system production were all complementary methods implemented to "coerce" or retain the vapours produced by the manufacturing or use of chemicals by industry. In the 1820s, the Conseil de salubrité's efforts to condense acid vapours increased. Whenever possible, closed-system production was encouraged in acid factories. Woulfe's apparatus, in which gases were forced to pass through a series of tubes and vessels filled with water or liquid absorbents, was recommended in nitric acid workshops.56 Other condensation devices were proposed for various industries that implemented chemicals and acids in particular. This was the case for precious metal refining, for instance. Gold and silver refining, no longer restricted by a Directory government monopoly and performed subsequently with less expensive methods using sulfuric acid instead of nitric acid, was carried out in several Parisian workshops after 1815. Having observed various technical processes at the mint, Darcet set out to prove their harmlessness provided a number of steps were followed to ensure gas condensation. Therefore, industry's presence within cities hinged on the manufacturers' ability to prevent the discharge of acid gases. In 1827, Darcet himself designed a model refining workshop and its furnishings. In the refining furnace, five closed platinum vessels allowed acid gases to discharge through a lead pipe and flow into a single pipe under the workshop towards three refrigerated lead boxes, where the sulfuric acid fumes condensed. Uncondensed sulfuric vapours remaining in the gas were then removed by directing the gas into a box filled with hydrated lime, which rotated on itself when operated by a crank and a gear system. This mixed the lime and improved contact with and absorption of the sulfuric acid. Finally, a pipe discharged any remaining vapours from the box into the main workshop stack.57 The same reasoning was applied to recycling. In the 1820s, the chemists Charles Derosne and Anselme Payen embarked on producing depurative organic compounds (bone-black and animalized carbon) from animal residue. Recovered animal waste was made into chemicals with sanitizing properties, for example to clarify and purify beet sugar, while partly addressing the problem of refuse disposal.
Against charges of polluting the neighbourhood, the Conseil de salubrité praised Derosne's operation for recycling wastes, boosting production and sanitizing the environment: "This animal matter [livestock blood], which used to be wasted and often spoiled the air as it decayed, is now carefully collected to be used in numerous sugar refineries […] and will be turned into a worthwhile export industry; the fortuitous benefit of an industry in operation, which extracts a useful product out of a worthless substance and turns an unhealthy cause into a new source of wealth."58 Like Derosne, Payen was involved in the chemistry of recycling animal waste, which he distilled in his Grenelle plant to make ammonium chloride.59 By 1820, the factory had become a huge industrial complex, also manufacturing soda chloride, lime chloride, animalized carbon, sugar, and so forth. While pollution from recycling on such a large scale was frequent and at times permanent, the Conseil de salubrité found a convenient answer in proposing to recycle the recycling plant's main waste, empyreumatic oil, which they offered to gas factories. These could distil the oil into lighting gas, in exchange for which the soda chloride factory could then treat the ammoniated waste that they produced.60 Therefore, most of the time, sanitizing processes combined waste recycling and its profitable reclamation. This insistence on promoting technical improvement explains why chemists became so fond of engraved technical drawings, which were soon adopted by the Bulletin de la Société d'Encouragement pour l'Industrie Nationale. From the first issue published in 1802, the Bulletin included copper-engraved plates as inserts, showing the emerging graphical art form that was developing around the Conservatoire des arts et métiers.61 Unlike representations by artists, who had distanced themselves from production sites during the revolutionary decades, technical drawing was a political undertaking in itself. As a tool for rationalization, it introduced a new symbolic order that established technology as superior to work places and physical movements.62 Chemistry was at the heart of the combination of technical devices and law: environmental governance simply conformed to the necessities of competitive industrial production. In March 1815, to explain the shift to local prefects, the government tried to clarify the new approach and the spirit that should guide their decisions on implementing the 1810 law: "Before, the existence of chemical factories was precarious in some respects […] In reviewing authorisation applications [the local authorities] will most certainly rise above petty interests; and driven only by reasons of public interest, they will give opinions based on considerations of a higher order."63 Sulfuric acid production improved continuously as greater numbers of lead chambers appeared; they symbolized the analogy between economic growth, political economy, chemistry and technical and environmental devices. Increasingly effective lead chambers exemplified the advanced industries that could better prevent acid vapours, and they became typical of scientists' discourses. According to Chaptal, in 1819, this technology had "reached perfection, as not one sulphur atom was lost in the operation as proven by the analysis carried out on the acid produced."64 With no loss of acid, and therefore no loss of value for the manufacturer, virtuous profit was combined with environmental protection, Chaptal claimed.
Conclusion

Thus, linked to industrial production and scientific improvement, chemistry contributed to changing environmental perceptions of the industrial world by the turn of the nineteenth century. The mistrust widely shared by local authorities, social observers and citizens regarding factory and workshop emissions was replaced by a new definition of harmfulness and harmlessness as industrialization imposed its pace, in order to adapt to the claimed imperative of economic growth. While this shift was perceptible from the 1770s with the first regulatory exceptions for strategic products, the 1810 decree -imagined, designed and implemented by chemists -perpetuated chemistry's role as an environmental regulator. Chemistry and its practitioners helped build an industrial world at a time when its arrival was not universally welcomed. After 1815, there was no doubt that industrial advancement had become a value shared by many actors. Through their experiments as well as their discourses and involvement in industrial applications for their discoveries, chemists participated in this expansion more than others. The authorities provided a great deal of support, especially in resolving conflicts about pollution caused by the chemical industry, by conceiving an administered regulatory framework that justified industrialism. In 1816, in a retrospective essay on industrial growth since the Revolution, Chaptal's first assistant Claude-Anthelme Costaz sang the merits of the 1810 decree: "We are not afraid to say that it has been of great benefit to owners and manufacturers […] [who] […] are now assured not to be bothered when carrying out their business once it has been authorized by the authorities: which is not inconsequential for the prosperity of chemical factories."65
110174050
s2orc/train
v2
2019-04-13T13:03:29.603Z
2013-07-17T00:00:00.000Z
Dynamic analysis of vibrating screener system

Transversal vibrations of a screen for a fine granular material are studied using both analytical and numerical methods. In the analytical approach the motion of the screen is described by partial differential equations. The general solution of the screen free vibrations is derived using the variables separation method. In the numerical computations the finite element method is applied. The compound geometry of the screen and the mass of the granular material are considered. Screen natural frequencies and natural mode shapes are determined. Numerical results are compared with the analytical solution. Simplifications of the screen geometry are proposed and validated by benchmark tests. The presented research allows determining the impact of the fine-granular material mass on the vibrating screener. The influence of the granular material on the screen natural frequencies is also investigated. It is important to note that the results of this research provide practical hints for vibrating screen designers.

Introduction

The large number of cut-outs and their distribution allows effective sieving of fine granular material. A theoretical solution for a body of such geometry does not exist. Both static and mode shape solutions exist only for a plate without openings. Many questions arise when considering this model of sophisticated geometry. Is it possible to replace the screen geometry by a solid plate without loss of accuracy in the prediction of natural frequencies and natural mode shapes? What is the sufficient number and necessary concentration of openings? How should the material data -Young's modulus and density -be modified in order to obtain comparable results for the screen plate and the solid plate? The answers to the above questions allow developing a simplified vibrating screen model which has the same properties as the considered screen. Numerical results for the solid plate can be compared with the ones derived from analytical methods. In this way, the reliability of the proposed simplified model can be validated.

Free vibrations of rectangular plate - theoretical solution

A vibrating system for fine granular material is considered (figure 1). The vibrating screen is a rectangular plate, 0.85 x 1.5 m and one millimeter thick, with a set of rectangular openings (figure 2). The solution of free vibrations of the rectangular plate simply supported (pinned) on all edges is commonly known. Here, the analytical solution of natural frequencies and mode shapes for the rectangular plate in which two opposite edges are pinned while the other two are free is presented. The considered plate in the Cartesian coordinate system is shown in figure 3.

The differential equation for the natural vibrations of the plate deflection w(x, y, t) is

D \nabla^4 w + \rho h \frac{\partial^2 w}{\partial t^2} = 0,   (1)

where ρ is the density, h is the plate thickness, and D is the plate bending stiffness

D = \frac{E h^3}{12 (1 - \nu^2)},   (2)

where E is Young's modulus and ν is the Poisson ratio. The pinned edges must satisfy zero deflection and zero bending moment, while the free edges must satisfy zero bending moment and zero effective shear force; these conditions form equations (3) and (4). In the variables separation method the solution w(x, y, t) is assumed to be a product of spatial and time functions:

w(x, y, t) = W(x, y) \sin(\omega t),   (5)

where ω is the natural frequency. After substituting (5) into (1), (3) and (4) one obtains, after introducing the parameter p,

\nabla^4 W - p^4 W = 0, \qquad p^4 = \frac{\rho h \omega^2}{D}.

By application of Levy's method for the considered boundary conditions, the natural frequencies are obtained as the roots of the resulting transcendental characteristic equation. For the considered plate the magnitudes of the natural frequencies found analytically are presented in table 1.
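For orientation, the "commonly known" closed-form solution mentioned above (all four edges simply supported) can be evaluated directly. The sketch below uses the screen's overall dimensions with assumed steel properties (the E, ν and ρ values are assumptions, not values stated in the paper) and is only a reference case, since the pinned-free plate actually analysed requires Levy's method:

```python
# Hedged sketch: closed-form natural frequencies of a thin rectangular plate
# simply supported on all four edges. Material constants are assumed steel
# values; the pinned-free plate studied in the paper needs Levy's method.
import math

a, b, h = 0.85, 1.50, 0.001       # plate dimensions [m] and thickness [m]
E, nu, rho = 2.1e11, 0.3, 7850.0  # assumed Young's modulus [Pa], Poisson ratio, density [kg/m^3]

D = E * h**3 / (12.0 * (1.0 - nu**2))   # bending stiffness, eq. (2)

def f_mn(m, n):
    """Natural frequency [Hz] of mode (m, n) for the all-edges-pinned plate."""
    return (math.pi / 2.0) * ((m / a) ** 2 + (n / b) ** 2) * math.sqrt(D / (rho * h))

for m in range(1, 4):
    for n in range(1, 4):
        print(f"mode ({m},{n}): {f_mn(m, n):6.2f} Hz")
```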
Free vibrations of rectangular plate - numerical solution

The solution for free vibrations of the rectangular plate described in the previous chapter is also found numerically using the finite element method [2]. Two commercial FEM programs are used independently: ANSYS and ABAQUS. In both programs the rectangular plate is modeled as a mesh of shell elements. A mesh consisting of higher-order rectangular elements is generated in both programs. An example of the solution obtained in ANSYS is presented in figure 4. The magnitudes of the natural frequencies of the rectangular plate computed by the ANSYS and ABAQUS programs are summarised in table 2. The results presented in table 2 show excellent accuracy and convergence to the theoretical results for the first two to three natural frequencies. For higher natural frequencies the precision is still acceptable but the approximation errors are slightly larger. Fortunately, these frequencies are not important from the point of view of the presented research.

Free vibrations of rectangular plate with cut-outs

Following the numerical simulations for the solid rectangular plate, similar computations are made for the rectangular screen. This problem cannot be solved analytically. In the numerical computations very dense finite element meshes are used. In the best-fitted model each gap visible in figure 2 is covered by three layers of shell elements. The size of the problem and the computation time are much larger now than in the previous example. The mesh generation is a very laborious task because of the sophisticated geometry, which includes thousands of openings. An exemplary solution obtained by the ANSYS program is presented in figure 5 (the mesh of holes is clearly visible). The natural frequencies of the rectangular screen plate computed by the ANSYS and ABAQUS programs are presented in table 3. Comparison of the natural frequencies of the solid and screen rectangular plates shows that for the first three frequencies the obtained results are comparable and they are similar to the analytical solution of the solid plate. Thus, the question arises whether the solid model of the rectangular plate can be used instead of the screen one. This is very important from the point of view of the total cost of numerical computations [3]. The answer to the above question is positive if the material data is properly modified. Two material constants influence the natural frequencies radically, i.e. Young's modulus (representing stiffness) and the density (representing mass). If the ratio E/ρ is preserved, the solution obtained for the same geometry and the same boundary conditions is the same. In this case the geometries of the solid and screen plates are different. In the numerical tests considering the replacement of the screen plate model with the solid one, the mass of the plate should remain constant. Thus, the density of the solid plate is corrected. The magnitude of the Young's modulus is also modified. This modification is based on the accuracy of the first natural frequency, which is the main frequency of interest. During the series of computations the optimal Young's modulus was found. The magnitude E = 1.06e11 Pa is selected in the analysis of the solid plate model used instead of the screen plate model. Of course, if other natural frequencies were also important, the choice of Young's modulus could have taken them into account and least-squares-based methods could have been applied.
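The material-data correction can be illustrated with a short calculation: for fixed geometry and boundary conditions the natural frequencies scale as sqrt(E/ρ), so once the density is reduced to preserve the perforated screen's mass, Young's modulus is rescaled by the square of the desired frequency ratio. The sketch below uses an assumed open-area fraction and assumed first-frequency values, not numbers taken from the paper:

```python
# Hedged sketch of the material-data correction described above: keep the
# solid plate's mass equal to the perforated screen's mass by lowering the
# density, then rescale Young's modulus so the first natural frequency of the
# solid model matches the screen model. Open-area fraction and frequencies
# below are illustrative assumptions.
import math

E_steel, rho_steel = 2.1e11, 7850.0   # assumed base material data
open_area_fraction = 0.35             # assumed fraction of plate area removed by cut-outs

# Mass preservation: smear the screen's mass over the full solid-plate area
rho_equivalent = rho_steel * (1.0 - open_area_fraction)

# For fixed geometry and boundary conditions, f ~ sqrt(E / rho), so E is
# scaled by the square of the desired frequency ratio.
f_solid_first = 5.8    # assumed first natural frequency of the solid model [Hz]
f_screen_first = 5.1   # assumed first natural frequency of the screen model [Hz]

E_equivalent = E_steel * (f_screen_first / f_solid_first) ** 2 * (rho_equivalent / rho_steel)

print(f"equivalent density: {rho_equivalent:.0f} kg/m^3")
print(f"equivalent Young's modulus: {E_equivalent:.3e} Pa")
```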
Forced vibrations of rectangular plate

Another problem solved within the presented research is the prediction of the rectangular plate forced vibrations. In this case the plate is loaded by the mass of stones. The layer of stones covering the plate is shown in figure 6. The contact between the plate (modeled as a shell) and the stones (modeled as a solid body) is assumed to be of type "bonded" (ANSYS program). This means that the transverse displacements of both entities are common. The presented analysis is the first attempt to solve the screen forced vibration problem [4]. The assumed numerical model is as simple as possible. Instead of the screen model the solid plate model is used. The material data is modified as described in the previous chapter. Unlike in reality, the thickness of the stone layer is constant here. Of course, this model does not allow individual stones to pass through the screen. The main goal of this test is the investigation of the influence of the stone mass on the natural frequencies of the screen. The results of the analysis are summarised in table 4. As expected, the natural frequencies decreased because of the larger vibrating mass. Although the presented model is relatively simple, the results of the numerical simulations are promising. They should be verified experimentally. Appropriate research is planned in the future.

Conclusions

The numerical investigations presented in this paper are part of a larger project concerning the development of an effective vibrating screen system. The frequencies of the vibration inductors applied in this system should be far away from resonance, i.e. far from the natural and forced frequencies of the vibrating screen. The set of free and forced natural frequencies can be found by numerical analysis. The most important are the first few frequencies. In this research the finite element method is used in order to provide the free/forced frequencies and the appropriate mode shapes. The results of the numerical analyses are compared, where possible, with the theoretical solution. A simplified model of the solid plate based on modified material data is proposed. In future research, experimental investigations will be undertaken to help validate and improve the proposed numerical models.
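As a rough cross-check of the forced-vibration result above (the drop in natural frequencies due to the stone layer), a first-order added-mass estimate can be made by smearing the granular mass uniformly over the plate. The layer thickness, bulk density and empty-screen frequency below are assumptions for illustration, and the estimate ignores any stiffness contribution of the layer, unlike the bonded-contact FEM model used in the paper:

```python
# Hedged sketch: first-order "added mass" estimate of how a uniform layer of
# granular material lowers the screen's natural frequencies. It ignores any
# stiffness of the layer and assumes the mass is spread evenly; all numbers
# are illustrative assumptions.
import math

rho_plate, h_plate = 7850.0, 0.001     # assumed plate density [kg/m^3] and thickness [m]
rho_stones, h_stones = 1600.0, 0.01    # assumed bulk density and thickness of the stone layer

m_plate = rho_plate * h_plate          # plate mass per unit area [kg/m^2]
m_stones = rho_stones * h_stones       # added mass per unit area [kg/m^2]

ratio = math.sqrt(m_plate / (m_plate + m_stones))
print(f"frequency reduction factor: {ratio:.3f}")

f_empty_first = 5.8                    # assumed first natural frequency of the empty screen [Hz]
print(f"estimated loaded first frequency: {f_empty_first * ratio:.2f} Hz")
```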