added
stringlengths
24
24
created
stringlengths
23
23
id
stringlengths
3
9
metadata
dict
source
stringclasses
1 value
text
stringlengths
1.56k
316k
version
stringclasses
1 value
2023-01-17T16:51:56.692Z
2023-01-13T00:00:00.000
255923865
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s41605-022-00379-5.pdf", "pdf_hash": "2da424a8065a1ccee2df99d15a377f52ef10058c", "pdf_src": "Springer", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46759", "s2fieldsofstudy": [ "Physics" ], "sha1": "72cac214b594c7dea7967f556119544c3fd4fd67", "year": 2023 }
pes2o/s2orc
Detector development at the Back-n white neutron source Back-n is a white neutron beamline at China spallation neutron source, which was established in the year of 2018. It is a powerful facility for nuclear data measurement, neutron detector calibration, and radiation effect research. A series of detectors were built for different experiments, including beam monitoring, beam profile measurement, neutron induced secondaries (fission fragments, light charged particles and gamma) cross section measurement, and neutron resonance radiography, etc. A common digitization electronics and a cluster-based DAQ were developed for these detector systems. Most detectors have been employed at Back-n and serviced for experiments from the beginning of the beamline running. As an overview of detectors of Back-n, the details of the detector design and the experiment performing are described in this paper. Some developing systems, e.g., MTPC and B-MCP, are also included. Introduction Back-n is a white neutron source associated with the CSNS high energy proton accelerator, which has been running since 2018. The main motivation of this facility is for the nuclear data measurement and neutron detector calibration. More than hundreds experiments have been accomplished since the beginning of the operation, including 235 U/ 238 U fission cross section measurement [1], neutron induced light charged particle emission cross section measurement of 6 Li and 10 B [2,3], and others. Some of the results were collected by the EXFOR (Experimental Nuclear Reaction Data) [4] and evaluated by consultants on Neutron Data Standards [5]. Figure 1 shows the layout of the Back-n beamline and detectors' locations. The 1.6 GeV proton beam hits the tungsten target with a 15 • deflection. The back-streaming neutrons are leading to the Back-n tunnel, which has a long flight distance for the neutron time-of-flight method. Two end stations ES#1 and ES#2 are constructed for different nuclear data measurements. The ES#1 has a distance of about 55 m, and ES#2 is about 70 m from the target. The beam diagnostics and several detector systems, for example the Fission chamber, the light charged particle detector array, the C 6 D 6 scintillation detectors and the camera system were developed before the beamline established and serviced most of the experiments at Back-n. Several ambitious developments, for example, the multi-purpose time projection chamber (MTPC) , the radiation resistant semiconductor detectors, the Gamma-ray total absorption facility (GTAF-II) and the neutron-sensitive micro-channelplate detector (B-MCP) are in progress for future experiments. The layout of the Back-n beamline and the locations of the detectors can be found in Fig. 1. This article follows the classification of detectors, which can be divided into beam diagnostic systems, detector systems for different particles, and neutron resonance radiography systems. It also includes the electronics and DAQ systems used by these detectors. Detectors for neutron beam characterization and monitoring Beam diagnostics is basic component for the beamline construction and running. Back-n has built an online flux monitor (Li-Si detector), a neutron energy measurement detector (Fission chamber) and a beam profile detector (Micromegas). Li-Si beam flux monitor A 6 LiF-silicon detector array was developed for real-time neutron monitoring at the back-n white neutron beam. It is installed on the beamline and works simultaneously with the nuclear data measurements to provide neutron flux data for normalization of the experiments [6]. To minimize beamline interference and background, the monitor was designed as a structure with a 6 LiF neutron conversion layer and Si detector array separated. 6 LiF is deposited on a 10 µm Al film with a thickness of 360 µg/cm 2 and placed on the beamline with an area of φ 80 mm covering the maximum beam envelope. Eight identical Si detectors are placed symmetrically on the side of the 6 LiF layer and upstream of the beam to reduce scattered neutron background. The sensitive area center of the Si detectors is on the diagonal of the beamline and the layer center. The bottom edge of the Si detector is outside the beam envelope to avoid direct irradiation by the neutrons. The monitor is shown in Fig. 2. It is entirely installed in the middle of a vacuum tube with a length of 40 cm and flanged to the other vacuum tubes on the beamline. Two Li-Si monitors were mounted in ES#1 and ES#2, and the distance between the 6 LiF layer and [7]. The voltage on the Si detector is -30 V and is supplied by a custom power supply, which was designed to supply voltages to both the Si and MSI-8 preamplifier. The well-known cross section for the 6 Li(n, t) 4 He reaction allows to monitor the neutron flux from thermal to approximately 1 MeV energies [8]. The reaction products α and 3 H are emitted opposite to each other according to the principle of momentum conservation, which ensures that the Si detectors do not have double counting. The monitor count rate reflects the neutron fluence rate, which is consistent with the proton power on the target. The conversion factor requires a calibration using 235 U/ 238 U fission chamber, which gives measurement of the actual neutron fluence rate. Good performance was obtained when the Li-Si monitor was mounted on the beamline for monitoring experiments. Figure 3 shows the real-time online results obtained by the Li-Si monitor when CSNS operates at a frequency of 25 Hz and proton beam power of 100 kW. The (a) is a typical waveform of a neutron signal, and (b) is the integral spectrum of the signal pulse height, in which the peak of α-particles and tritons can be clearly identified. The (c) shows the online curve of count rate. Few downward spikes in the figure may be caused by the instantaneous drop of the beam power due to a flash fire or other reasons. Although the curve has a large fluctuation with the measuring time in figure (c), it has good stability after being normalized to the beam power. As shown in (d), the counting rate is approximately 45.28 per 10 kW and the uncertainty is less than 1%, which includes statistical and systematic errors. The insert shows that there is a stable conversion factor between the proton flux and the monitor counts over a long period of time. In general, the calibrated monitor counts and integral protons can equivalently reflect the neutron flux during the monitoring period. In the case of stable energy spectrum of Back-n, the entire neutron flux can be obtained by a conversion factor and without having to detect neutron events over all the energy regions. The cross section of 6 Li is very smooth and accurate in the neutron energy below 1 MeV. The detailed structure of the energy spectrum can be well measured by the monitor especially in the energy region below 10 keV, which are difficult to be analyzed by fission chamber with 235 U target, since the 235 U target has many resonances in the neutron cross section in this region [9]. The neutron energy is determined by the TOF method which defines the neutron velocity as V = L/T , where L is the flight path from the tungsten target to the 6 LiF layer and the T is the time-of-flight of neutrons obtained by the time interval between the time recorded by the detector and the trigger T 0 . For the neutrons below 10 keV, the effect of the T is much less than 1%, and the L can be calibrated by comparing the energy spectrum detail structure of the hyperthermal neutrons (1-10 eV) to that measured by the 235 U fission chamber. With determined L and T , the kinetic energy of neutrons can be calculated with the relativity formula. The neutron spectrum is obtained by dividing the counting rate by the cross sections of 6 Li extracted from ENDF-B/VIII.0 [10]. The result is normalized to 100 kW beam power and is shown together with the spectrum measured by the 235 U fission chamber with 100 bpd, as shown in Fig. 4 [11]. The energy spectrum below 10 keV measured by the monitor agrees with the spectrum measured by the 235 U target pretty well in the overall trend and without the resonance peaks. We compared the details of the two spectra in the energy region of 2.5-5 keV. As shown in the insert, the details of the two spectra outside of the resonance region can basically coincide. Fission ionization chamber for neutron energy spectrum measurement The fission chamber is dedicated to measuring neutron energy spectrum of Back-n, which is composed of several targets, electrodes, shielding, insulating parts, signal wires, inlet and outlet feedthroughs, etc., as shown in Fig. 5. The fission chamber adopts a copper shielding chamber and electrodes. The target substrate is a platinum plate with a thickness of 0.2 mm and a diameter of 32 mm. The diameter of the active area of the target is 20 mm. The total amount of the enriched uranium coating is 832.97 µg . The collection electrode, the target plate and the chamber are separated The comparison of the neutron energy spectrum measured by the Li-Si monitor and the 235 U fission chamber. The detailed structure of 2.5-5 keV region is in the insert [6] by insulating materials, and the distance between the target plate and the collector is controlled by the varying thick- ness of the insulating gasket. The target is grounded, and the collector is connected to the ORTEC-142B preamplifier, which is connected to a positive high voltage of 400 V. The fission chamber adopts a gas flow design, and the working gas is argon-methane P10 (10% methane, 90% argon). The ability to distinguish noise and fission signals is obtained by adjusting parameters such as electrode spacing and voltage, in Fig. 6. The experimental results show that when the distance between the electrodes is small, the energy spectrum of the fission chamber is generally lower. The α decay events and fission events are mixed together; with the increase in the distance between the plates, the fission fragment peak and the α peak are gradually distinguished, and the peak-to-valley ratio of the fission fragment is getting higher; when the distance reaches 10 mm, the identification between the fission fragment and the α particle can be counted more accurately. Using the 14 MeV pulsed neutron beam generated by the D-T fusion reaction at the 600 kV Cockcroft-Walton generator of the China Institute of Atom Energy, the time correlation spectrum between the output signal of the fission chamber and the pick-up signal was measured, while the plate spacing was 10 mm, as shown in Fig. 7. The distance between the fission chamber and the tritium target head is 1.5 cm, and the neutron flight time is about 0.28 ns, which can be ignored. The rise time of the output signal of the fission chamber model was measured to be about 30 ns using an oscilloscope. With a Gaussian fitting, the FWHM of the fission signal peak is 30.97 channels in TAC, and the time resolution calculated according to the TAC channel width is 14.7 ns. The time resolution mainly comes from two contributions, the pulse width of the pulse beam and the time resolution of the detector. The pulse beam width is 2-3 ns, and the contribution to the time resolution is very small, so it can be considered that the measured time resolution is the detection time resolution of the device. The fission reaction generally produces two fission fragments with opposite directions. When the direction is parallel to the target surface, both fragments cannot be detected. Therefore, the detection efficiency of the fission chamber is shown in Eq. 1, which is related to the number of target nuclei N , the fission cross section of σ f (E), and the detection efficiency ε f of fission fragments, where ε f is less than 100 % . The neutron flux φ can be deduced by the companion particle method, and the detection efficiency of the fission chamber can also be obtained in combination with the fission chamber count rate N f . As a result of calibration, with the 14 MeV neutron test, the detection efficiency is 4.3×10 −6 , and the uncertainty is less than 5%. The final Back-n neutron energy spectrum measured by this calibrated fission chamber can be found in reference [11]. Micromegas for neutron beam profile measurement Measurement of the neutron beam spot distribution is the basic requirement for the Back-n facility. Thanks to the good two-dimensional (2D) spatial resolution and fast timing capability of the Micromegas (micro-mesh gaseous structure) detector [12,13], it can be used for a quasi-online measurement of the neutron beam profile, e.g., at the n_TOF facility [14] and at the Back-n white neutron facility [15]. Micromegas detectors (MDs) have been widely used in nuclear and particle physics including rare event searches [16,17] and neutron detection [18]. In this section, we present the development of an MD with 2D spatial resolution capability for the measurement of the neutron beam spot distribution. The current setup of the MD, used as a beam profiler at Backn, sums up the neutron events of the entire energy range, with higher weights on low-energy neutrons due to higher cross sections 10 B(n, α) 7 Li and 6 Li(n, t) 4 He. However, it is planned, as part of the future detector development, to improve the MD system design so as to obtain the energy dependence of the neutron beam profile of Back-n. Figure 8 shows the MD of this work. A thin layer of 10 B or 6 Li deposited on a thin aluminum foil is used as the neutron converter. The neutron converter is attached to the cathode Fig. 8 Micromegas fabrication process. a The mesh after thermocompression bonding. An array of cylindrical pillars is visible, which hold the mesh and ensure a constant distance between the mesh and the PCB. b The detector chamber Fig. 9 Schematic of the MD with back-to-back double-avalanche structure. The mesh (dashed line) separates the 5 mm drift gap from the 100 µm avalanche gap. Each of the XY readout strips is independently connected to the central board electrode facing the drift gap (i.e., the region between cathode and mesh). The mesh-anode avalanche gap (an amplification region where electron avalanches occur) is manufactured with the thermal bonding method (see Ref. [19] for detailed description of the manufacturing process). With this method, each side of the readout PCB is attached to a metallic mesh to form the avalanche gap, and a back-to-back structure is obtained (see Fig. 9). These two avalanche structures are able to work simultaneously for a higher detection efficiency. The avalanche gap and the drift gap are 100 µm and 5 mm, respectively. The total active area of the MD is 90 mm × 90 mm. The layout and description of the MD are described in detail in Ref. [15]. Two detector units (namely MD-MA and MD-RA) with different anode designs are fabricated and characterized with a 55 Fe X-ray source to obtain the relative electron transparency, the gain and the gain uniformity, and the energy resolution [15]. For MD-MA, copper anode is used; for MD-RA, resistive anode with germanium coating is used. The resistive layer protects the detector from discharges caused by possible intense ionization. Although the resistive anode MD-RA is relatively more complex to fabricate than the copper anode MD-MA, the former has higher gain than the latter. A dedicated front-end electronics system based on the AGET ASIC chip [20,21] is developed to process the anode strip signals [22]. In total, two AGET chips with 128 channels are used. The signal is amplified by a charge sensitive preamplifier and a shaping amplifier, and subsequently digitized by a multi-channel analyzer. The neutron beam profile reconstruction capability of the detectors has been tested with an 241 Am α source, an Am-Be neutron source, and the CSNS Back-n neutron beam (Fig. 10). Figure 11 shows the 2D neutron beam profile at the Back-n ES#1 measured by the MD-MA with 10 B converter. The good agreement between the simulation and experimental data confirms the reliability of this measurement, as shown in Fig. 12 [15]. Detectors for measuring fission cross sections and light charged particle emissions Back-n used a fast fission ionization chamber to measure fission fragments and an light charged particle detector array to measure light charged particles. In future experiments, fragmentation fragments and light charged particles tend to use the same kind of detector system. So a multi-purpose time projection chamber and next generation semiconductor detectors have been developed as an option. The MD together with the front-end electronics system was placed in the shielding container made of aluminum, as a neutron beam profiler at Back-n ES#1. The arrows represent the incoming neutron beam Fig. 11 The reconstructed 2D profile of the Back-n white neutron beam at ES#1. The bin widths for the 2D histogram are 1.5 mm on both the horizontal and vertical axes. The shaded dots in the profile image correspond to the reconstructed positions of the insulating pillars between the mesh and the anode plane Multilayer fission ionization chamber The fission ionization chamber as a reliable approach has been used for neutron induced fission cross section measurements for many years [23,24]. A multi-cell fast fission ionization chamber (FIC) has been developed as an initial detector for Fast Ionization Chamber Spectrometer for Fission Cross Section Measurement (FIXM) at Back-n [25,26]. FIXM contains FIC, front-end electronics, data acquisition (DAQ) system and peripheral supporting system and is mounted at the ES#2 as shown in Fig. 13. FIC with an aluminum shell of φ300 mm × 300 mm and 5 mm thick is filled with a gas mixture of 90% Ar and 10% CF 4 at a pressure of 800 mbar. The two neutron beam windows of φ80 mm are sealed with 100-µm-thick Kapton films and located in the center of the fission chamber caps. There are eight independent cells including seven cells with highpurity fissile materials enriched over 99.94% and one blank cell inside the fission chamber and mounted along the direction of the neutron beam. Each cell consists of two electrodes of φ80 mm with a gap of 5 mm, in which the fission frag-ments produced by fission reaction ionize the gas to generate signals. The gap, the capacitance and the high voltage of +200 V between electrodes are optimized in order to provide electrons having enough drift velocity, which is beneficial to obtain fast signals for high timing precision. A stainless steel of 20 µm thickness or aluminum foil of 100 µm thickness electroplated with the fissile material is the cathode, while the anode is made of aluminum foil of 100 µm thickness. The diameter of about 50 mm and uniformity of the fission coating are measured by the alpha-sensitive imaging method, and its mass is determined with the α-particle spectra measured in a small solid angle device [27]. The aluminum anode collects ionization signals by connecting to an 8-channel preamplifier MSI-8 in which the gain of each channel can be independently adjusted. The output signals of MSI-8 are entered in the common DAQ in which the signals are conditioned, digitized and stored. After several upgrades and performance tests at neutron sources, the improved FIC has a reliable and stable performance with the rise time of fission fragment signals of less than 30 ns from 10 to 90% and a high signal-to-noise ratio. The 236,238 U/ 235 U fission cross section ratios from threshold energy to tens MeV were measured by using FIC and time-of-flight method at Back-n, and the results verify the reliability of FIC [1,28]. FIXM could be used to measure fission cross sections of various isotopes covering a wide neutron energy region. Light charged particle detector array In cross section measurements of neutron induced light charged particles emission (n, lcp) reactions, detection and identification of charged particles are essential. Detection setups comprised of detectors, for example, grid ionization chambers, silicon detectors and scintillators, are usually adopted in previous measurements [29][30][31][32][33]. Based on these detection setups, particle identification(PID) of charged particles is realized with method such as E-E method. At Back-n, a light charged particle detection array(LPDA) has been built and applied to experiments. Sixteen units E-E-E telescopes are placed in a vacuum chamber with a diameter of 1 m. Each telescope is composed of a lowpressure multi-wire proportional chamber(LPMWPC) [34,35], a 2.5 cm×2.5 cm PIN silicon detector and a cube CsI(Tl) detector with a side length of 3 cm. Thickness of each sil- Fig. 12 The upper panels show the 1D projections of the slices of the 2D profile corresponding to a the middle Y-strip onto the X -axis and b the middle X-strip onto the Y -axis, for both data (solid squares) and MC simulation (solid curves). The expectation from MC simulation is normalized to data. The lower panels contain the significance of the deviation between the observed data and the simulation prediction in each bin of the distribution, considering only the statistical fluctuations in data Fig. 13 Photograph of FIXM in Back-n beamline icon detector is 300 µm. The angle between two adjacent telescopes is 9.5 degree. As Fig. 14 illustrates, with eight telescopes sealed in one cavity, the telescopes are divided into two identical parts. The entrance window of the cavity is replaceable, and a 4 µm polypropylene foil is used as an entrance window in default. The distances between the surface of the LPMWPC, silicon detector and CsI(Tl) detector are 205, 214 and 222 mm, respectively. LPMWPCs and silicon detectors are connected to charge-sensitive preamplifiers that are mounted behind CsI(Tl) detectors in the cavity. Besides, optical signals produced in a CsI(Tl) detector are collected and converted to electric signal by a 2.1 cm×2.1 cm SiPM. During an experiment, the cavity will be filled with the working gas of the LPMWPC, usually 0.2 atm Ar+CO 2 , which means that all detectors and charge sensitive preamplifiers in the cavity also work under the gas environment. To stable the pressure in the cavity, an air pressure control system is applied to ensure that the pressure in the cavity is ±200 Pa near the preset value. Besides, as Fig. 14 shows, a target holder on which four targets can be mounted is installed on the target chamber cover, and the center of different targets can be moved from top to bottom to the beam center position with the movement of the remote control motor. In each telescope, the LPMWPC is used to identify the low-energy particles, and the CsI(Tl) detectors are applied to particles with higher energy. Results of an in-beam test indicate that using the E-E method, protons in the energy range of 0.5 MeV to 100 MeV can be identified by the telescope [34]. Besides the E-E method that two detectors required, the amplitude-E n method and pulse shape discrimination method are also adopted to identified particles using silicon detectors [36]. In the amplitude-E n method, E n is deduced from the measured time-of-flight information. The amplitude-E n method is utilized in 10 B(n, ) 7 Li and 6 Li(n, t) 4 He cross section measurement reactions. Simulta- Fig. 14 Photo of LPDA at Back-n. Sixteen telescopes are divided into two identical parts; the target holder is at the center of the vacuum chamber neously, the pulse shape discrimination of silicon detectors is achieved with a charge and current sensitive preamplifier. The detector combination and PID method depend on the goal and emission particles' energy. As part of the LPDA, a silicon detector array and E-E telescopes have been applied to several experiments. Cross sections of 10 B(n, ) 7 Li, 6 Li(n, t) 4 He, 1 H(n, n) 1 H and 2 H(n, n) 2 Multi-purpose time projection chamber As a kind of detector with excellent three-dimensional tracking capability for measurements of particle momentum, spatial position, angular distribution and particle identification, time projection chambers (TPCs) have been widely used in particle and nuclear physics [39]. In recent years, TPC has also been adopted to the research of neutron nuclear data measurement [40], providing a new powerful tool for the cross sections of difficult measured with traditional methods. To meet the requirements of nuclear data measurement, a multi-purpose time projection chamber (MTPC) is proposed at Back-n mainly for the light charged particle emission reaction, at the same time, other kinds of measurement such as fission reaction, neutron beam profile and neutron imaging can also be carried out with the MTPC. A MTPC has been built and tested at Back-n. The schematic is shown in Fig. 15. The MTPC has a cylindrical sensitive volume of diameter of 140 mm. To form a uniform drift electric field, a height adjustable field cage shown in Fig. 16 composed of stacked PCB rings is designed to meet different measurements requirements. The readout anode PCB is connected at the end of the field cage. At the center of the anode PCB (Fig. 17), an array of 1519 hexagonal 16 The field cage of MTPC pads is fabricated to collect the deposited charge. A resistive Micromegas structure [19] is stretched on the surface of the anode PCB for the purpose of ionization electron avalanche as shown in Fig. 18. With the coating of the high-resistance germanium film layer on the anode surface, the high-voltage stability of the detector is obviously improved and a higher spatial resolution is obtained due to the effect of charge dispersion on the resistive layer [41]. The structure of the electronics system of the MTPC is shown in Fig. 19. The introduction of the electronics system in details can be found in Ref. [42] and a brief introduction is given here. At the front-end of the system are the preamplifier module (PAM), the analog-to-digital module and the power clock management module (PCMM), and the backend is composed of the data concentration module (DCM) and trigger clock module (TCM). The main parameters of the electronics system are listed in Table. 1. The MTPC has been commissioned with the white neutron beam at Back-n to verify its ability for measurements of neutron-induced reactions. In the beam test, a 6 Li sample is Fig. 21. The peak located at the small TOF region is caused by the high-energy neutron events, leading to a gap of dead time of about 40 µs. The distribution after the gap is mainly contributed from the reactions induced by neutrons on the 6 Li sample. A particle identification (PID) between triton and α-particle is shown in Fig. 22, verifying that the MTPC has a well PID ability which is limited if only the energy spectrum is obtained in the traditional method. Currently, the finalized MTPC has been proposed and the new version of detector facility is under manufacturing. With this newly developed TPC detector system, a physical experiment will be conducted at the Back-n beamline in the nearly future. Radiation-resistant semiconductor detector Because of the excellent radiation resistance performance, wide-band gap semiconductor detectors are considered as an optimal option for silicon detectors. Among the wide-band gap semiconductor detectors, SiC and diamond detectors are widely studied for particle detection [43][44][45][46]. SiC and diamond detectors have the advantages of neutron and proton irradiation resistance, working normally at high temperatures [47][48][49]. In addition, due to the faster carrier drift speed and higher breakdown field strength, the rise time of output signals is faster than silicon detectors'. Thus, these detectors are suitable for fast measurement or time measurement [50]. 4H-SiC detectors have been manufactured and tested at Back-n. The thickness of the substrate of the N-type 4H-SiC detector is 370 µm, and the N-type 4H-SiC epitaxial material is produced by Cree Company. The epitaxial layer includes an N+ type 4H-SiC buffer layer with a thickness of about 500 nm and a 21 µm-thick N-type 4H-SiC epitaxial layer with a doping concentration of 1.5 × 10 14 cm −3 . A Ni/Au Schottky contact electrode with an area of 1 cm 2 was fabricated on the epitaxial layer, and an Al/Ti/Au ohmic contact electrode with an area of 1 cm 2 was fabricated on the substrate side. The single-crystals diamond used at Back-n is highquality single diamond crystals prepared by Zhengzhou After multi-step cleaning, titanium and gold circular electrodes were evaporated on both sides of the single diamond crystal using thermal evaporation technology to form ohmic contacts and prevent metal oxidation, separately. Annealing at 650 • C for 40 minutes ensures that a layer of Ti and C compounds is formed on the surface of the titanium and diamond, which increases the adhesion and reduces the potential barrier. SiC and diamond detectors have been adopted to several cross section measurements and show good performance during experiments. Nevertheless, the area of diamond detectors and thickness of SiC detectors are still needed to be improved. Detectors for measuring reaction gammas Radiative neutron capture cross section (σ nγ ) is of primary importance in studies of stellar nucleosynthesis, designs of advanced nuclear reactors, and applications of nuclear technology. Nowadays, mainly three kinds of detectors for σ nγ measurement are used: total γ -ray absorption detectors, known as 4π Ba F 2 detector array [51], total energy detection system, such as C 6 D 6 [52], and high-resolution γ -ray detectors, such as HPGe detectors [53]. Since HPGe detectors are currently being designed and procured, only the first two are included in this section. detection system A detection system with four C 6 D 6 liquid scintillation detectors was chosen as the first σ nγ measuring equipment at Back-n for its low neutron sensitivity, fast time response, and simple structure [54]. This C 6 D 6 detection system was installed at the center of ES#2 of Back-n, about 76 m away from the spallation target, shown in Fig. 23. The C 6 D 6 liquid scintillator was 127 mm in diameter and 76.2 mm in length, contained in a 1.5-mm-thick aluminum capsule and coupled with a photomultiplier tube (PMT). These C 6 D 6 detectors were placed upstream of the sample relative to the neutron Fig. 23 Photograph of the C 6 D 6 detection system at Back-n beam, and the detector axis is at an angle of 125 degrees with respect to the neutron beam direction. The distance between the center of the front face of the detector and the sample center is about 150 mm. Anode signals delivered by the PMTs were recorded by the Back-n general-purpose Data Acquisition System (DAQ) [55], which digitizes the analog signals into full-waveform data with a sampling rate of 1 GS/s and a 12-bit resolution. To obtain high precision σ nγ , the weighting function (WF) of the C 6 D 6 detection system was calculated with the pulse height weighting technique (PHWT) [52,54] and the experimental backgrounds were determined with dedicated measurements and Monte-Carlo simulations [56]. Besides, the resolution function was also studied to analyze the resonance parameters of 232 Th and other nuclides [57]. With these studies, the σ nγ in the energy region between 1 eV and 400 keV can be measured with this C 6 D 6 detection system [58,59]. Gamma-ray total absorption facility Gamma-ray total absorption facility (GTAF-II, which is the upgraded version of the earlier GTAF [60] according to the experiment of Back-n) has been constructed for accurate σ n,γ measuring by prompt gamma method, which consisted of 40 barium fluoride detector units [61], shown in Fig. 24. The excited states in the compound nucleus that are populated by neutron capture can decay via many different gammaray cascades. The principle of detection is the sum energy of the cascade that corresponds to the binding energy of the captured neutron plus the kinetic energy of the captured neutron, it is usually about 6 ∼ 8 MeV. The BaF 2 crystal shell with a thickness of 15 cm and an inner radius of 10 cm was subdivided into 12 pentagons and 28 hexagons, which covered 95.2% of the solid angle. The overall time resolution (FWHM) of GTAF-II was 23.7 ns, and the detection efficiency of 60 Co cascade gamma rays was 90% [62]. The σ n,γ experimental data of 197 Au [63] and 169 Tm were measured (neutron flight distance of 75.9 m). The position of resonance peak could be well consistent in comparison with the relevant data of ENDF evaluation database, which verified the reliability of GTAF-II and measurement technology. In the next step, high precision σ n,γ of fission nuclides and radioactive nuclides could be obtained by deducting the background and improving the effect background ratio. Detection systems for neutron resonance radiography Thanks to the highest flux of the white neutron beam in the world, Back-n may have the best opportunity for the neutron resonance photography. The principle can be found in reference [64]. Two different systems are been developing tested at Back-n. A CMOS camera is a traditional image system, while the neutron-sensitive micro-channel-plate is still a novel technique. Gated CMOS camera A detector system with CMOS camera is constructed at Backn white neutron source, which is mainly used in beam spot measurement, scintillator calibration, neutron imaging and other experiments. The system mainly consists of CMOS camera, neutron conversion scintillator and optical mirror, which are packaged in a light-shielding box, as shown in Fig. 25a. The neutron conversion scintillator usually uses EJ426, which is still a novel technique for thermal neutrons with low sensitivity to gamma radiation [65]. The scintillator is a ZnS screen doped with 6 LiF, and the thermal neutron detection efficiency of 0.5 mm thick is about 34%. As shown in Fig. 25b, the screen is placed in the center of the neutron beam, and its plane is perpendicular to the beam direction to detect neutrons by the well-known cross section of 6 Li + neutron − → 3 H + 4 He + 4.78 MeV. The CMOS camera was purchased from Oxford Instrument Company, and its sensor size is 16.4 × 14.0 mm 2 with 2560 × 2160 pixels. The typical spatial resolution of the camera is about 130 µm. This camera has the ability to delay and select the time window, and the gate delay and width are adjustable from 0 to 10 s with the highest precision of 10 ps. The neutron energies are measured by the time-of-flight method. Therefore, this function of the camera can enable it to achieve energy-division photography and energy-resolved imaging on the Back-n. Since March 2018, the camera system has carried out experiments such as standard beam spot measurement, scintillation screen calibration and neutron photography on the Back-n. A lot of research achievements have been obtained. As shown in Fig. 26, the φ50 mm mode neutron profile distribution and grayscale information on x and y axis measured in the ES #1 [66]. The beam spot distribution is independent of the neutron energy. For the neutrons with energy above 0.5 MeV, a more suitable plastic scintillator can be used instead of the ZnS( 6 Li) screen. The standard beam spots in different modes of the Back-n are important beam parameters for the user experiments such as nuclear data measurement, radiation effect and neutron radiography. For neutron radiography experiments, a remotely controlled FIXM sample platform is used to move a series of samples horizontally and vertically to save experimental time significantly. A rotating platform is used for neutron CT experiments. Neutron radiography is a method for investigating the inner structure of samples [67]. Compared with X-ray radiography, neutron radiography has a natural advantage in nuclide identification as neutron attenuation depends on the nuclear reaction between nucleus and neutron. Backn has great potential in obtaining large-scale sample images due to its high flux and high-energy neutrons. Fig. 27a and b shows photograph and neutron imaging of internal structures of the irregular marine condensate, in which the shape of the ancient coins is clearly visible. The energy-selective photography of the CMOS camera enables it to perform nuclide resonance imaging measurements on the Back-n. In the resonance imaging, the neutrons with different energies are transmitted though the sample, then multiple radiographs are recorded by the camera with and without resonance neutrons. The nuclide components can be implied by processing the images [68]. Benefiting from the wide energy spectrum and high flux of the Back-n, various nuclides with cross section resonances in different energy regions are studied. Neutron-sensitive micro-channel-plate A neutron detector with energy and position resolution based on a 10 B-doped micro-channel plate (B-MCP) is being under development on the Back-n. The B-MCP has high temporal resolution, spatial resolution, and high neutron detection efficiency. With the advantages of wide energy spectrum and high flux of Back-n, it is very suitable for neutron resonance imaging experiments. The B-MCP detector is generally composed of neutron-sensitive B-MCP group, readout anodes, vacuum chamber and voltage sources. Its main structure is shown in Fig. 28. The neutrons react with the sensitive nuclide 10 B in the B-MCP to produce secondary particles α and 7 Li, which excite secondary electrons on the inner wall of the B-MCP channel. The secondary electrons multiply in the channel under electric field to generate more secondary electrons and finally form an electron cloud at the exit of the channel to enhance the signal of the neutron event. To increase the collision probability between electrons and the channel and suppress ion feedback, the channel has an angle of about 5 • ∼ 10 • relative to the MCP surface normal, and generally, two MCP are used in combination with a V-shape of their channels. The 10 B is one of the commonly neutron-sensitive nuclides, and its secondary particles α and 7 Li have a short range of only about The φ30 mm beam profile in the #ES2 measured by the B-MCP detector 2 ∼ 3.5 µm. Therefore, the secondary electrons are basically multiplied in several adjacent channels, and the spatial resolution of MCP can be better than 10 µm. The transit time of secondary electrons in the MCP channel is at the ps level. The B-MCP can accept very high count rate with excellent time resolution. The detector system count rate limit and time resolution depend primarily on the electronics. The readout anodes of the B-MCP use an array of anode strips for the high flux neutron sources. The analytical spatial resolution of the charge center of gravity method can reach several tens of microns, which is sufficient for fast neutron imaging in most cases. B-MCP normally works at a vacuum of about 10 −4 Pa ∼ 10 −5 Pa. Therefore, the detector system needs to be sealed in a vacuum chamber. The 10 B-doped B-MCP used in our experiments is about 10 mol% content of the 10 B nuclide. The test samples have a diameter of 33 mm and a thickness of 0.5 mm. We carried out beam spot measurements on Back-n using the B-MCP detector and studied the response of B-MCP to neutrons in different energy regions. As shown in Fig. 29, the measured φ30 mm beam profile in the #ES2 is clearly displayed, indicating that the development of the B-MCP detector is basically in line with expectations. The simulation study on signal response of the B-MCP is also carried out to provide a reference for the experimental analysis. The domestic development of 10 B-doped B-MCP technology is being carried out in cooperating with the North Night Vision Technology Research Institute Co.Ltd, and it is difficult to dope 10 B with high concentration consistence. Therefore, it is of great significance to study the properties of 10 B-doped B-MCPs on Back-n. Common readout electronics Good-performance electronics is a mandatory part of advanced spectrometers. Facing the challenge of high neutron flux, Back-n brings full digital trigger mechanism [69] and readout electronics [70,71] with high performance. Figure 30 shows the principle of signal readout and data acquisition. On the top side, signal from neutron detector is fed into the signal condition module to generate proper signals which is compatible with the ultra high-speed digitizer. With the help of folding structure of analog digital converter, full digitized waveform of detector signal is obtained with very good magnitude and time resolution. The sampling rate can reach up to 1 GSas, while the resolution can be 12 bit. Based on the digitized waveform data, full digital trigger algorithm is proposed in Back-n as shown in Fig. 30. The L1 hardware digital algorithm is executed on FPGA (field programmable gate array). On each digitizer local FPGA, digitizing data stream is divided into two branches in parallel, one is fed into trigger match FIFO waiting for global trigger, and another one is fed into the sub-trigger processing module simultaneously. In the L1 hardware trigger structure, there is one master global trigger module, which receives all sub-trigger packet from all local trigger modules on each digitizer to generate the global trigger signal based on specific algorithm to indicate the valid good event occurs. This global trigger signal should be fanned out to each trigger match FIFO so that digitizer can readout the correct data corresponding to this good event. After being built, good event data are finally transmitted to the data acquisition server through ethernet. Besides the waveform measurement, in order to obtain the neutron energy precisely, the neutron TOF (time-of-flight) should be measured precisely in the electronics. The timeto-digital converter based on FPGA is adopted in readout electronics [72]. With experiment and evaluation, the accuracy of TOF can reach about 280 ps which is good for Back-n application. To fulfill the full digital trigger algorithm efficiently, it is necessary to use industry advanced readout platform with the ability of excellent data transmitting throughput and system synchronization. Back-n adopts the PXIe platform to be the readout crate as shown in Fig. 31. The PXIe platform is based on high-speed serial PCIe bus, which can support ultra-high-speed of data bandwidth. Meanwhile, there are several differential buses with star structure on backplane (PXIE_DSTARA, PXIE_DSTARB and PXIE_DSTARC), which can support for the global clock and trigger distributing and sub-trigger packets gathering. There are three different modules in the PXIe crate including TCM (trigger & clock module), SCM (signal condition module) and FDM (field digitizer module) as shown in Fig. 32. The FDM effective number of bits can be better than 9.2 bit among the frequency range of 398MHz with sampling rate of 1GSas, which guarantees that the readout electronics can meet the requirements of all Back-n spectrometers, especially the BaF 2 spectrometer which has very fast signal edge. DAQ A data acquisition(DAQ) system is built for the generalpurpose electronics and Back-n detectors. The hardware structure of DAQ system is shown in Fig. 33. PXI chassis controllers connect to a DAQ server through 1GbE/10GbE network. Three storage servers are deployed at the remote computer center and Gluster File System(GlusterFS) with excellent performance [73] is chosen for the management of storage resources. The storage system is close to the offline computing node, so that offline users are able to access the raw data more efficiently. Waveform data digitized by FDM board are readout by PXI chassis controllers and transferred to the DAQ server. The DAQ data flow software assembles the waveform packages into T0 Fragments(A T0 Fragment contains all events generated by a neutron bunch). T0 Fragments are fast processed in parallel for quality monitoring purposes. The whole The DAQ data flow runs on a single DAQ server with 56 CPU cores, data are processed in pipeline from readout to storage. The run control and monitoring software is developed based on Django web framework. It communicates with the data flow software through Redis [74], in-memory data structure store. Control messages are sent throw Redis Pub/Sub interface and status parameters are updated periodically in Redis by the data flow software. Most online processing results are ROOT objects, which are also serialized to JSON string and published in Redis like status parameters. The JavaScript ROOT(JSROOT) library allows reading and displaying ROOT objects in an efficient way, and its rendering is now very d by ROOT [75]. So, the JSROOT library is used for visualization of online processing results in the Backn monitoring software. The implementation of control and monitoring GUI of Back-n is shown in Fig. 34 188 F. Ruirui et al. Summary In past 5 years, a preliminary detector system has been established. In order to ensure the common operation of the day-one experiments, these detection systems such as the fission chamber and the C 6 D 6 detectors adopt to the similar design of other white neutron beamlines. A update-designed common electrics and DAQ also supported these system while the beamline running. In future, some more difficult experiments were proposed. Such as the MTPC, the GTAF-II and the B-MCP were developed to meet these requirements. Most of these detectors are at the forefront of detector research. With the completion of these detectors in the next few years, it is believed that Back-n will achieve excellent physical results in the research of small cross section and rare nuclear reaction physical events, the nucleosynthesis models of the astronomical nuclear physics and the archeology nondestructive detection, etc.
v3-fos-license
2019-05-20T13:06:20.031Z
2019-05-07T00:00:00.000
158442276
{ "extfieldsofstudy": [ "Business" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://repository.tudelft.nl/islandora/object/uuid:03e98915-aa7c-4fc2-a27a-5449d62711b1/datastream/OBJ/download", "pdf_hash": "ded61288135abcd38bc2155316448e1bc1487680", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46760", "s2fieldsofstudy": [ "Economics", "Law" ], "sha1": "187bacbbe6bf1d73184ab98514a55b84cef69ee6", "year": 2019 }
pes2o/s2orc
Land pricing upon the extension of leases in public leasehold systems Purpose – This paper aims to compare and review alternative ways to adjust public ground leases. Design/methodology/approach – Based on principles derived from a review of scientific literature, alternatives for the extension of leases are discussed based on the case of Amsterdam. Findings – Many alternatives lead public ground-lease systems to produce results that are the opposite of what they are intended to be (as inspired by Henry George): new improvements result in higher rent, but additional location values do not result in higher rent. One exception is the lease-adjustment-at-propertytransaction alternative, whichmay nevertheless result in fewer transactions. Social implications – Public leasehold systems are highly contested with regard to the extension of leases. Such systems are often aimed at capturing land-value gains. In practice, however, this tends to be more difficult than expected. Value capture by authorities, as intended by the system, results in counter-movements of lessees, who often gain public support to set lower leases. These political processes may even result in the termination of such public ground-lease systems. This paper reports on a search for possible solutions. Originality/value – The comparison of various alternatives to ground-lease extension based on principles derived from literature is new, and it contributes insight into public ground-lease systems. Introduction Often inspired by George (1920George ( , 1st ed. 1879), many public authorities have launched public leasehold programmes based on the idea that such programmes would allow for a fair harvesting of land values to the benefit of all. Experience has indeed shown that taxation is lower in cities, such as Hong Kong (La Grange and Pretorius, 2016), in which leasehold accounts for a large share of the public income. One problem, however, concerns the adjustment of land values to assure rent capture. As a result of this problem, the public leasehold system "as originally conceived in Israel and Canberra, has reached its end" (Benchetrit and Czamanski, 2004, p. 46). It has also generated fierce debates elsewhere (Tyvimaa et al., 2015;Ploeger and Bounjouh, 2017), such that many ground-rent adjustments end up in court (Mandell, 2002). Nevertheless, interest in public ground-lease models is undergoing a revival as a way of providing affordable housing that allows for a separation between the affordable price paid by the lessee and an enduring claim on full future land values by the public owner (Bourguignon, 2013;Löhr, 2017;Shamsuddin and Vale, 2017). ©Willem K. Korthals Altes. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode Although the initial allocation of land has been the subject of considerable research (Needham, 1992;Hong, 1998;Tas an-Kok et al., 2013;Caesar, 2016), much less has been published on revaluation practices, which constitute the topic of this paper. Resetting the rent in the form of ground leases is an important issue, including within the context of private ground leases. In this context, rent-reset clauses have been identified as "land mines" (Stein, 2014, p. 22) or "time bombs" (Stein, 2007, p. 129) that could potentially have serious negative effects on lessees and their lenders. Moreover, it has proven difficult to draft clauses that anticipate all contingencies in rent adjustment (Stein, 2014). In a democratic society, one specific feature of public ground leases is that the public authority that owns the land is not driven only by profit but also by the interests of its citizens, who are also known as "consumer-voters" (Tiebout, 1956, p. 417) who "pick communities" that best fit their preference patterns for public goods. In addition to being lessees, citizens are voters who have an impact on political majorities. For this reason, lease setting involves both a technical component (i.e. assessing the property value) and a political component (i.e. setting a tax), especially if the rent is to be paid over the land value of a citizen's home. Lease setting can be seen as "an approximate form of property tax" (Deng, 2005, p. 372). In relation to taxation, authorities are not driven by the goal of obtaining the highest legally feasible taxwhich could be as high as 100 per cent, depending on legal circumstancesbut by the objective of weighing interests with regard to taxation, the quality of public goods and the needs of "consumer-voters". In other words, "public authorities typically are supposed to maximize social welfare rather than profit" (Mandell, 2002, p. 154). Property taxes should "be regarded as being fair by the public" (Grover et al., 2017, p. 100). According to George, land values cannot be avoided and, in contrast to taxes on improvement, taxes on land are unlikely to have an impact on behaviour. Despite these insights, however, the literature on municipal taxation (based on Tiebout, 1956) indicates that the taxpayers are likely to "vote with their feet", by moving to areas in which public goods and taxation have a better fit. A lower local public lease may be fully compensated by a higher property price, which does not make it less expensive to move to such an area. Alternatively, it would be logical to expect that low rents would convince consumer-voters to stay, as low rents would provide them with higher equity relating to their homes. This unearned increment, which is not fully incorporated into the rent, is part of the amenities that consumer-voters currently enjoy when living in certain communities. Consumer-voters may not appreciate if elected representatives pursue policies to capture these increments. If this is the case, citizens may prefer representatives who promise low rents. In a context in which public ground leases cover a substantial part of the homes of citizens, such political power could be substantial, as there is substantial pressure to keep taxes low. This situation could potentially result in practices in which the rents of public ground leases are set below what would be feasible in a context of private ground leases (Tyvimaa et al., 2015). A ground lease is a right that is attached to the land, which is a durable asset. The ground rent paid and its value may deviate over time. Neighbourhoods that were inexpensive at the time that the lease was set can become more expensive, and the other way around. The effect on the revaluation of land on the neighbourhood's ground rents will be different accordingly. Furthermore, households that do not move often are likely to be exposed to large differences over time. The development of their income may not keep pace with the rising locational value of the neighbourhood. This is more likely to occur in neighbourhoods in which the development of location value exceeds rent development. By not moving, a household can realise a higher value of the public good (i.e. location value) without paying additional rent. One specific feature of public ground-lease systems is that they are based on a differentiation between land values and the value of improvements. At the start of a ground lease, these two elements are relatively easy to differentiate. The lessee acquires a plot of land, and the rent is based on its value. Furthermore, the lessee is responsible for developing the building and all other improvements. At the time of renegotiating the rent, this distinction becomes much more complex (compare Grover, 2018). Given that the renegotiations are based on an enduring lease, the first and final elements in defining what must be covered in this land value are the provisions in the deed by which the lease is set. This document constitutes the relationships between lessor and lessee. The sharpest distinction is between a finite ground leasein which all properties return to the landowner when the lease expires, without any compensationand perpetual leases without rent adjustments. One inherent problem with such arrangements is that comparative transactions of the landowner's rights are particularly scarce in public ground-lease systems. The situation is even more complex than is the case in private leasehold systems, which also involve many issues relating to valuation (Grover, 2014). Because local authorities do not trade their ground-lease holdings on the market, it is difficult to find comparative prices. In some alternative methods of appraisalincluding "rough approximation" (Mandell, 2001, p. 75) by establishing the residual value of landthe behaviour of the lessee could potentially have an impact on the land value. This is inconsistent with the notion that ground rent should cover land values, regardless of improvements. In Section 2, these academic reflections and theories on the valuation of perpetual ground leases are elaborated into six principles. In Section 3, alternatives for ground-rent adjustments are introduced based on the case of Amsterdam. This is followed by a comparison of these alternatives according to the six principles (Section 4) and a discussion of the outcomes of these comparisons (Section 5). Reflection and principles Basic to the ideas on land rent is that for built properties an analytical distinction can be made between rent for land, based on the value of the location, and capital income from improvements, such as buildings, attached to this land. Rising land values may cause a downgrading value of current improvements if these improvements, such as low-rise buildings, stay in the way of making optimal use of the land (Tideman and Plassmann, 2018). The value of improvements can even be negative if alternative uses are more profitable. Making the distinction between land and improvements is in practice more difficult than in theory as transactions usually refer to the whole of a property and do not refer to partial interests in property. In the literature on property appraisals, residual property valuation is often used as a way to appraise partial interests in property. In this method, "[. . .] the value of the unknown component is the residual value left when the value of the independently estimated component is subtracted from the other value" (Lusht, 2001, p. 353). In relation to new-built property, residual land value can be set by subtracting building costs from the property value that can be created on the plot. In existing properties with different property rights resting on the same property, it can be used to The valuation of property in its highest and best use is a well-known practice used by appraisers as "the use of an asset that maximises its potential and that is possible, legally permissible and financially feasible" (IVSC, 2018). A more specific concept is that of synergistic value (also known as "marriage value"), which is used in the British appraisal practice of private ground leases (Baum et al., 2011, p. 160). It is defined as an "additional element of value created by the combination of two or more interests where the value of the combined interest is worth more than the sum of the original interests" (IVSC, 2018). Synergistic value emerges when splitting full ownership in ground lease and residual ownership would not be efficient in reaching the highest and best use of the land. Capozza and Sick (1991) have analysed finite ground leases, which prescribe that the land and all improvements will belong to the owner at the end of the lease term, with no compensation for the improvements to the leaseholder. In this context, lessees tend to refrain from making improvements the closer they are to the termination of the lease. As a result, the land is not likely to attain its highest and best use. The same applies to land improvements within a context of uncertain agricultural lease conditions (Myyrä et al., 2007). Future lease adjustments may also result in uncertainty for buyers and lenders of ground-leased properties, thereby resulting in lower market values (Tyvimaa et al., 2015). The synergistic value is understood as the difference between the value realised at the highest and the best use and the value created due to the conditions of the lease. Synergistic value may also exist in other contexts of split ownership rights, such as the loss of the value of remaining land, if land is partly expropriated (Kalbro, 2007). Land assembly could yield additional value by merging different plots to allow a large building site, thereby enabling more efficient land use (Shapiro et al., 2013). The existence of such synergistic values could potentially result in hold outs and lack of initiative, based on the idea that a large part of this value will become available to the last parties to join the land assembly, as the additional values will not become available until then (Miceli and Sirmans, 2007). In the context of public ground-lease systems, authorities may use ground-lease deeds to limit property rights (e.g. in relation to affordable housing provisions or the number of buildings allowed) in the public interest. While limitations imposed by public powers are included in the appraisal of the highest and best use valuations of ownership, lease limitations are not included. Full owners of property rights are not restrained by these limitations. These restrictions are thus part of the synergistic value of the property. The function presented above can be used to formulate a first principle for evaluating different systems. According to this principle, land value can be considered residual (i.e. the market value of the right of leasehold land plus the value of the land for the owner equals the value of land in its highest and best use, minus the synergetic value). The existence of this relationship can help to appraise property values and to test whether a certain system of ground-lease determination meets market principles or makes use of available market information to reset the rent. A second principle is that the synergistic value is acknowledged and based on the limitations that the systems impose on the highest and best use of the property. As indicated in the introduction, public ground-lease systems are inspired by George (1920George ( , 1st ed. 1879, whose ideas on land-value taxation have been widely discussed (Dwyer, 2014;Fernandez Milan et al., 2016). The step from George's ideas on taxation to public ground-lease systems can be made easily: [. . .] George argued that the rent-seeking behavior that arises when land is sold once and for all is socially destructive. He saw no benefit in allowing entrepreneurs to compete for a monopoly position. (Dwyer, 2014, p. 726) According to George, the essential factor is not security of ownership, but security for the improvements. The idea that land value is such a good instrument derives from the insight that externalities "[. . .] are priced via land rent. Air pollution, noise pollution, highway improvements, complementary land uses, public spending-all have their effect on rent" (Dwyer, 2014, p. 727). This relates to the fact that land value is location value (Fernandez Milan et al., 2016). The power of land values to reflect accurately external (off-site) economic activities (such as subsidies, pollution-causing production, or local infrastructure) derives ultimately from the spatial nature of land. (Dwyer, 2014, p. 727) The insight that land value is essentially location value implies that a distinction must be made between the value of land in its natural state and with improvements. The real and natural distinction is between things which are the produce of labor and things which are the gratuitous offerings of nature [. . .] (George, 1920(George, , 1st ed. 1879, VII 1 10). This idea of a distinction between location value and improvement value is part of the principle that appraisers use, as "the value of a site is unaffected by its improvements" (Lusht, 2001, p. 356). This means that adverse effects on value from over-development and under-development must be reflected in the value of improvements or, if they relate to the leasehold structure, in the synergistic value. The third principle relates to the "Georgean" principle that improvements have no impact on location value. According to this reasoning, the improvements of leaseholders are protected, and the rent paid is independent of these improvements. In this way, the lease would provide the tenants incentives to improve the property to its highest and best use as the rent is based on this use. A lease on improvement value, will do the contrary. Tenants may not improve their land as it will result in paying a higher lease. This principle does not hold for finite leases, in which improvement values are transferred to the landowner upon expiration of the lease, as they do not provide security of improvements. A fourth principle is that locational values must be captured. This principle is based on the notion that, between the date on which the rent is originally set and the date of the revaluation, an unearned increment in land values can occur, and that this unearned increment must be captured. This is essentially one of the pillars on which ground-lease systems based on the ideas of George are built. For example, during an initial process of land disposal, locational values could be captured by holding auctions (Hong, 1998). This does not work for revaluation of enduring leasehold rights, as such auctions cannot separate location value from improvement value. The idea of revaluation is that locational values change over time, due to the external effects of urban development outside the land parcel. Public ground-lease systems proceed from the assumption that these values should go to the community. In essence, imposing taxes on unearned increments can minimise taxes on earned increments, thereby stimulating economic activity. A fifth principle relates to the functioning of the property market. Lease adjustments may have a negative effect on the transaction volume of housing. For example, uncertainty relating to future lease adjustment may have an impact on the possibility of obtaining a mortgage (Sevelka, 2011). In a good system, ground-lease adjustment will have little impact on the marketability of properties. Public leasehold systems A sixth and final principle relates to social aims that authorities may have to counter market processes. It has to do with the issue of gentrification. Although the normative appreciation of gentrification may differ according to political perspectives, it is relevant to distinguish leasehold systems that promote gentrification and those that do not. Issues of state-led gentrification as a means of capturing land values have been discussed in Hong Kong, where "[. . .] a proportion of leases [is] maturing at any time and creating for the state a range of potentially lucrative options to redeploy the assets and generate fiscal revenue from selling renewed rights" (La Grange and Pretorius, 2016, p. 512). One potential disadvantage, however, is that the process tends to displace current inhabitants from their homes, as they are not able to afford the increasing property prices in the neighbourhood. In the context of public ground-lease systems, these consumer-voters may not support the ground-lease system. In other words, residents may not support councils that decide to set their rent at unaffordable levels. Although in some systems, inhabitants may have security of improvements in an economic sense, they do not have such security in a social sense (Becher, 2014). In an extensive case study on the use of eminent domain in the city of Philadelphia, Becher explains legitimacy failures in terms of the failure to recognise investment. In Becher's definition, investment involves both financial transactions and "the years that owners and residents have spent with their property" (Becher, 2014, p. 230). This means that legitimate security of improvements should "entitle people to government protection for many different kinds of value that they have invested, including but not limited to financial value" (2014, p. 256). Given the legitimacy issues associated with leasehold revaluation systems, it may be relevant to align such systems with these principles. For example, under such conditions, a family who had bought a house 30 years ago in a neighbourhood that was declining at that time would not be confronted with an unaffordable lease 30 years later if the neighbourhood were to have developed into a more expensive area. The underlying rationale is that the family had invested 30 years of their time and commitment to the neighbourhood, and their investment would eventually pay off as the neighbourhood began to prosper. Such a family would deserve security for this commitment (or investment, according to Becher, 2014), which, in practice, would be an affordable rent. This argument may also be conceptualised in a different manner. External effects are important for locational value. High population turnover rates are considered to erode place attachment, which may relate to "concerns about crime, safety and anti-social behaviour in many deprived places" (Livingston et al., 2010, p. 425). The commitment of a family living in a neighbourhood for 30 years could be seen as an external effect, in that it has contributed to the locational value of other properties in the neighbourhood. An affordable rent would therefore be a compensation for the external effect produced by this family. It is nevertheless unclear whether an affordable rent would always amount to an exact internalisation of such external effects. Based on the arguments stated above, the following principles are used to analyse several alternative pricing arrangements for public leasehold systems: The system acknowledges the residual nature of land, with the market value of the ground-lease right plus the value of the remaining public ownership of the land being equal to the property value in its highest and best use, minus the synergistic value. The synergistic value is based on the way in which the public leasehold systems limit the highest and best use of the property. Improvements have no impact on land value. Land value is captured. The system has no negative impact on the marketability of property. There must be a mechanism that protects resident leaseholders from gentrification caused by excessive lease adjustments. The fact that these principles do not necessarily point in the same direction reflects the political nature of public ground-lease systems. Case study: Amsterdamintroduction and alternatives In this section, the six principles elaborated above are used to analyse alternative options based on the case study of Amsterdam. Since 1896, Amsterdam has had a well-established public ground-lease system (Van Veen, 2005;Gautier and Van Vuuren, 2017;Ploeger and Bounjouh, 2017). As a result, a large part of the city has been developed through ground leases. Between 1915 and 2016, the system provided dwellers with an enduring leasehold right, in which lease conditions were set for an initial period of 75 years (for ground leases effected through the mid-1950s) or 50 years (for more recent ground leases). This initial period is followed by a new period, which is subject to new leasehold conditions (set by the municipality). It is also subject to a new rent, which is either based on a proposition by the municipality (beginning in 1966) or set (currently optional) by a committee of three experts: one appointed by the municipality, one by the leaseholder and the third by the two experts. The process of valuation is based on the conditions set in the old lease. In their valuation, the experts may take into account specificities that are set by the new general conditions. The general conditions are approved by the municipal council, and they are thus based on political consent. As a result of this system, the City of Amsterdam proposed new, considerably higher rents at the end of a period. In response, tenants increasingly requested that land values be set by expert commissions, which tend to set lower rents than those proposed by the municipality (Rekenkamer Amsterdam, 2012). These rents were nevertheless much higher than they had previously been. This reflects the development of land values in Amsterdam, a city in high demand (Kadi and Musterd, 2015;Boumeester, 2017). This situation generated opposition from lessee-citizens against the city. The higher leases were perceived as unfair, and the municipality was portrayed as a greedy landlord, stripping inhabitants of the value that they had accumulated in their homes. One active non-governmental organisation uses the legal-expense insurance of its members to sue the city up to the highest courts for a wide variety of disputes (Ploeger and Bounjouh, 2017). In most cases, the Supreme Court of The Netherlands allows the cases to be discussed as part of cassation proceedings, meaning that there are sufficient matters of legal substance. These legal complexities involve the specific position of the experts, the system of new conditions that have been proposed and the standards and quality for the way in which the new conditions are set. European consumerprotection directives play a role as well (Ploeger and Bounjouh, 2017). As response to the critics the municipality offers up to 2020 a transfer bargain for households making use of the new system, and it has upgraded the partially compensation to poor households for difference in ground rent, which only applies in the situation that sale of their home would, due to the negative price effect of the higher ground rent, result in less proceeds than the current mortgage debt level (Gemeente Amsterdam, 2017a). The remainder of this section consists of a comparison of several alternatives to this system that have been proposed, based on the principles set above. Critics would prefer an alternative in which lessees acquire their rights at no additional cost after the first period has ended (as in Canberra and Israel). In practice, this alternative is Public leasehold systems likely to result in tax increases to compensate for the loss of income. In The Netherlands, property taxes are structured within a one-tier system that does not distinguish between land and improvements. In Amsterdam, there is room for higher property taxes on houses, as they are currently very low (i.e. the second lowest in The Netherlands, following the island of Texel). Neighbouring communities have higher property taxes. For example, property tax rates in the city of Zaanstad and the new town of Almere are above average, being about three times as high as in Amsterdam, relative to the value of the dwelling. The low property tax rate in Amsterdam is also reflected in the fact that, in 2016, municipal income from the parking tax (e199m)which is paid for street parkingexceeded income from property taxes (e168m, with only one third coming from housing) and tourist taxes (e65m; a percentage of per-night turnover, and thus related to rent) (Gemeente Amsterdam, 2017b, p. 463). Given that the property tax rate for housing is much lower than for other functions, property tax proceeds from housing are about e10m lower than the proceeds from the tourist tax. Compared to these figures the net result of the ground-lease programme (e37m in 2016) is relatively low (Gemeente Amsterdam, 2017b, p. 400). Although the actual proceeds from ground leases are much higher (e252m), a large share is used either to pay for interest (e107m)in the case of annually payable rentor to settle the land value for ground-lease holders paying the rent up-front (e108m). The results of the ground-lease system are thus highly dependent on the number of leases expiring in a given year, and their rent revaluations. Profits based on the redevelopment of land are accounted for at the landdevelopment system and not at the ground-lease programme. Most of the municipal income of Amsterdam comes from the central government through the municipal fund [e2bn in 2016 (Gemeente Amsterdam, 2017b)]. These funds are distributed nationally according to a complex system based on differences in the needs of the various local authorities. One remarkable feature of this system is that higher property values in the municipality are associated with lower allocations to the local authority (Allers and Vermeulen, 2016). Property values are used as a negative criterion in the allocation of central government funds, based on an average tax rate. In Amsterdam, the property tax rate for housing is about half this average rate (Gemeente Amsterdam, 2017b, p. 467), such that higher housing values result in less funds through traditional means. In other words, the loss of municipal-fund allocations due to higher housing values is not compensated by higher property taxation for housing, as the rate is too low. At the same time, more funds are generated by other means that are based on property markets, including income from landdevelopment activities (Savini, 2017), parking taxes, tourism taxes and ground-lease renewals. A former alderman of the City of Amsterdam, Van Poelgeest, concluded that the opposition to the current system was too fierce to continue. As an alternative, he proposed that lease renewal would occur only upon the transfer of a property to another party. This alternative would prevent current dwellers from being driven out of their dwellings, and the City could still profit from developments in land values at the time of transactions. One essential feature of this system is that the ground rent would be transparent. As such, property buyers would know the price of transferring the ground-lease right, as well as the ground rent to be paid to the city. As originally proposed, the land price would be based on the combination of the transaction price and the size of the dwelling. In this system, additional improvements would result in higher rent. This proposal drew heavy criticism, with the suggestion to develop a map of land prices set by a standing committee of experts (Frijns et al., 2014). This system had not yet been implemented at the time of the 2014 municipal elections. Following these elections, a different coalition came into office, with two liberal parties aiming to abolish the ground-lease system and one socialist party aiming to keep it. A compromise was reached in the form of a perpetual ground-lease system in which holders of ground-lease right have the opportunity to pay off the lease in perpetuity, with the only additional payments being for improvements extending beyond the current use. Lessees may also choose to pay a rent in perpetuity, with adjustments corresponding to the consumer-price index. The following section presents an evaluation of the four alternatives discussed in this overview, in light of the six principles developed in Section 2. First, in the traditional arrangementthe existing (and reference) system of enduring public lease-holding in Amsterdamnew leases are set after a period of 50 years. Second, in the property tax alternative, leases are renewed at no charge, with property taxes used to compensate for the loss of income. Third, in the novel system of perpetual leasehold rights in Amsterdam, which involves paying a rent to acquire this right and additional rent is paid when developments occur beyond the current use of a property. Fourth, the transaction alternative is based on the previously proposed system of setting the rent at every transaction in a transparent manner, thereby allowing a new rent to be set in every transaction. Table I presents, using an ordinal scale, a comparison of differences between the property tax, perpetual and transaction, and the reference traditional alternatives, which are marked according to the six principles developed in Section 2. The scores corresponding to the principles are based on a ranking of the alternatives according to the principles. The principle scores do not indicate the size of these differences. The scores are explained as follows. Comparison of alternatives Property. Given that leases are extended at no charge, the value for the owner is set to e0, which does not acknowledge the residual nature of land. There is currently a difference in value between housing in ground leases and ownership within the same neighbourhood (Gautier and Van Vuuren, 2017). This model explicitly eliminates the synergistic value of ground leases. Although improvements have no impact on the rent, they do affect the property tax to be paid. This alternative therefore results in a higher tax on improvements. Setting the rent to e0 has a strong negative impact on the capture of land values. It does not affect property markets, however, as it eliminates rent payments. Current dwellers cannot be gentrified due to higher rent. This effect could potentially result in higher property taxes (which are based on the value for the highest and best use). Given that improvement value accounts for a larger share of this tax, however, higher land values have less of an impact. Perpetual. The alternative of perpetual rights resembles the property tax system. In a way, it will eventually develop into a property tax system, as no additional land values will be captured through rent after the first transfer to this system. In general, therefore, this alternative performs between the traditional models and the property tax model, albeit with several exceptions. In the novel Amsterdam system, the synergistic value is not based on a Public leasehold systems sound calculation of synergistic value, and it is less well accounted for than it is in the current system, in which the valuation is set by experts. The valuation is thus not related to the way in which the system limits property rights. One major difference between the perpetual model and the other models is that, in this alternative, rent is paid in perpetuity (and adjusted to consumer price index, which also incorporates developments in the price of housing). In addition, improvements directly result in higher rent (and not indirectly through property taxes). The proposed system is intended to be most generous in terms of time (perpetual), although it is strict with regard to what is allowed. Because the floor area of a building is set in the deed, future rent readjustments will be based only on improvements (e.g. additional square metres or use for other functions), and not on changes in land value. Because improvements result in higher land values, lessees who invest in adding value to their homes (e.g. by adding another floor or extension) must pay a higher land value than do their neighbours who have not improved their homes. This is thus a negative development with regard to the third principle. Although land value is captured at the first transfer to this system, future land values will not be captured. The impact on market transactions is less than it is in the traditional system, in which uncertainties concerning future ground rents are likely to arise as the extension of a lease period approaches, thereby posing problems in obtaining a mortgage. Although the debate surrounding this system has generated considerable criticism with regard to the resulting gentrification issues, the same type of criticism could be levelled against the current system. In time, the termination of new rent adjustments could result in an improvement. Transactions. Setting the rent at the time of the transaction is a promising alternative in terms of many of the principle scores. Transactions are the best time at which to reap benefits, as the system involves obtaining additional funds from a cash flow, resulting in transparency for all parties at the time of the transaction. Taxation of cash flows fits to the practice that taxation of flows is currently more accepted than taxation on assets (Mirrlees et al., 2011). This system is based on the notion that the land values that will serve as a guide after new transactions are set and published by an independent body applying a transparent method of setting land prices throughout the city. This body also monitors transaction prices. Such an independent body is able to acknowledge the residual value between different elements of value. The regulations may also specify that the body should monitor the development of these values. The independent body could thus also set synergistic value based on the restriction of the lease. If the rules guiding the work of the committee clearly indicate that improvements may have no impact on land values, this may result in that this principle will be observed. Land values can be captured at every transaction, which is a moment at which decisions are taken and financing is arranged. This can protect existing residents from gentrification. Some gentrification may occur over time, however, as poorer households may not be able to buy in the area. The additional income obtained though the public authority also provides a public means with which to counter this development through affordable housing programmes. This alternative nevertheless has a negative score with regard to its impact on property markets. Studies on the stamp duty have revealed that taxation at transactions may have a considerable negative effect on household mobility (Hilber and Lyytikäinen, 2017). The system may make it very unattractive to move away from properties with low ground rents in areas that have become attractive. Because any additional value will go to the public landowner, lessees have no incentive to move. Moreover, alternative properties with the same location values will be expensive as well. Moving would thus result in a welfare loss, making it very unattractive, thus reducing the number of transactions on property markets. Discussion and conclusion Current political processes tend to step away from the idea that ground-lease systems are a good way to capture location values. In many contexts, the step taken is either to stop capturing values at all or to make one last transfer to a system of perpetual leasehold rights. In the Amsterdam case, this step is taken alongside more stringent regulation of square metres of buildings and criteria of use. The overall result is that ground-lease systems represent a shift away from the classic systems based on the ideas of Henry George. Such systems are based on the principle that location values are not the result of the improvements produced by the holders of the property rights, and that they can thus be captured without any effect on economic development. Moreover, capturing unearned increments can reduce the taxation of earned increments, thereby increasing productivity. One of the main drivers of this change is the unpopularity of these programmes with lessee-voters. In this context, there has been a major shift from the situation of about a century ago, when many housing systems were based on landlords and tenants. In these systems, increases in land value resulted in higher rent for tenants to pay to their private landlords. Higher rents for ground leases ensure that these higher rents do not remain with the landlords, instead flowing to the community (i.e. the local authority). The situation is different in the context of owner-occupants. Higher land rents are associated with higher out-of-pocket costs for current dwellers. These costs are raised by local authorities that are responsive to voters. From a systemic view, the step towards perpetual leases is not that different from one towards no adjustment at the end of the lease. It only postpones this step. The issue concerns the relative advantage of such a system over full ownership, which necessarily resides in additional control over land use, exceeding what can be achieved through public means. In this case, however, it is important to consider the relatively static nature of such controls. Public regulations can change over time, and many of these changes are part of the normal societal risks that owners must bear (Alterman, 2010). In most cases, lease conditions were set decades ago, and they cannot be changed without the consent of the parties involved. Moreover, there are restrictions on the types of agreements that can run with the land. This is related to the fact that, unlike contracts, such conditions that run with the land are also binding on parties who have not signed the contract, but who are holders of the right. This is a doctrinal principle in civil code systems, as it is associated with the movement beyond feudality (Akkermans, 2008). According to this principle, these conditions may not impose obligations to act in a certain way. Owners may not set droits du seigneur (manorial rights) to compel tenants to act to meet the requirements of the owner. Rights to tolerate or to condemn certain acts are acceptable. The right to condemn certain acts is likely to be well-suited to specification in public law, provided that there is sufficient confidence that it will not be prohibited by the legislature. For public authorities that do not have much confidence that the legislature will provide them with sufficient public powers in the future, setting lease conditions could be a feasible alternative to ascertaining that these kinds of conditions may be valid over time. In this sense, ground leases are a relatively static instrument. An alternative system of setting prices in a transparent manner at every transaction is promising. This situation also leaves several details to be worked out, including issues concerning whether the standing committee has set the proper value. In some cases, parties might not be satisfied with the value that has been set, thus necessitating the organisation of review proceedings, which may result in uncertainty distorting the property market. Another negative effect could result from a freeze of the property market. The main negative point is that such a system would limit supply of housing on the housing market. The Public leasehold systems additional locational value that becomes available after a sale would go to the local authority, and not to the holder of the ground lease. On the other hand, a ground-lease holder purchasing another dwelling would have to pay the full price to both the local authority and the supplier of right. The result would therefore be that the welfare of long-term lessees would deteriorate if they were to move. This effect has also been identified in contexts in which houses are subject to stringent rent control. Under such conditions, people tend to remain in their current housing for much longer (Munch and Svarer, 2002). Although this is part of the aimsif people are not forced out of their housing due to large rent adjustments, they will be less likely to moveit may have negative effects on housing and land markets. Experiences with rent control have demonstrated that more refined rent-control systems, which allow for limited rent increases, have less negative effects on the functioning of housing markets than do more stringent systems (Skak and Bloze, 2013;Lind, 2015). A transaction-based system, which captures only a part of any value increases, may therefore provide a middle ground. A transaction-based system could also resemble the functioning of a transfer tax, in which a certain percentage of the transaction price is due. Recent research suggests that lock-in effects of transfer taxes are low (Slemrod et al., 2017). It must be noted, however, that such taxes have a relatively low percentage of the value (2.9 per cent of the sale price in the study of Slemrod et al., 2017). In certain neighbourhoods, however, real changes in location value that accumulate over time as a resident lives in the location may reach a much larger percentage, thereby resulting in a larger lock-in effect. Payment at transaction increases the cost of transactions, thereby reducing the number of transactions and possibly having a negative effect on the functioning of the property market. Overall, public ground-lease systems face issues relating to public land-value capture. There is no quick fix, as none of the systems examined in this analysis score better on all of the principles addressed. The transformation of ground-lease systems into systems that capture only improvement values can be seen as an opposite development with regard to the principles on which ground-lease systems are based. Novel introductions of ground lease to support affordable housing make a fit to this transformation. Ground-lease systems are no more a neutral, "Georgean", way to capture ground values without impacting behaviour, but are used as a land policy tool to steer land use. The long-term character of ground leases makes that terms and conditions set long ago, must be applied much later. In many contexts, citizens have been able to politicize this application and public authorities have charged less ground rent than would be allowable by the terms and conditions of the lease. This process is relevant for current decisions on novel introduction of ground lease to facilitate affordable housing.
v3-fos-license
2018-09-14T13:12:19.587Z
2013-01-17T00:00:00.000
52555497
{ "extfieldsofstudy": [ "Materials Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=26844", "pdf_hash": "ebf1bd9e134c8f9255be64346717bcea9fd8dc39", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46761", "s2fieldsofstudy": [ "Medicine", "Engineering", "Materials Science" ], "sha1": "ebf1bd9e134c8f9255be64346717bcea9fd8dc39", "year": 2013 }
pes2o/s2orc
Effect of the Cavity-Cavity Interaction on the Stress Amplitude in Orthopedic Cement The presence of porosities in bone cement (polymethylmethacrylate) in total hip prosthesis (THP) cemented is necessary for the diffusion of antibiotics, but it is a critical characteristic of weakening by the effect of stress concentration and the interconnecting pores. The aim of this study was to analyse by the finite element method (FEM), the size influence of micro-cavities in cement assuming the junction cup-bone, and the effect of cavity-cavity interaction on the stress level and distribution in cement according to the human stance defined by the implant position axis compared to that of the cup. Introduction PMMA has been the standard product in the orthopaedic industry for decades.It is better in having bone-bonding, bioactive properties, the ability to fill a bone deficiency with greater strength and minimize the load transfer to bone.But one of the problems encountered by experts of cemented arthroplasty is the presence of defects (porosities, inclusions and cracks) in cement whom is the main cause of damage to the binder [1]. The cement allows the distribution of antibiotics.The satisfaction of this last property requires some density of porosities.But in mechanical view may be damaging if these microvoids can locally present at the region of stress concentration producing a possible fracture of cement [2].The stress intensity factor for crack emanating from microvoid is higher than the ordinary crack.Several studies have analyzed the porosities effect in the cement on its mechanical behavior [1][2][3][4][5].However, the probability of survival of cemented THP implanted recently is very high.Improved techniques for preparation and placement of cement as well as the methods of realization contribute to this success.Progress remains to be done.Until now, cement fixation of the cup of a total hip prosthesis has been little studied compared to the femoral component. The main aim of this work is double: the first is to simulate all the porosity to a cavity in cement (coalescence phenomenon) and to analyze by the FEM the stress level in this binder according to the orientation of the implant axis compared to that of the cup.The second is to analyze the effects of intercavities interaction on the mechanical behavior of the cement. The originality of this work resides not only on the analysis of the size of the cavities, their voluminous fractions but also on the effect of cavity-cavity interaction on the level and distribution of the equivalent stress in orthopedic cement. Materials and Methods In this bidimensional FEM analysis of a right pelvic bone, the polyethylene cup having an internal diameter of 28 mm and external diameter of 54 mm, it was fixed in a hemispherical acetabular cavity through of cement PMMA with a diameter of 56 mm.The standard body weight is about 80 kg [6].The cement thickness is 2 mm.Materials properties were shown on Table 1.The load was applied to the femoral head is 800 N [1] and was uniformly distributed. An embedding imposed on the pubic symphysis.A worthless displacement imposed along the axis (x = 0) on the sacroiliac joint Figure 1. The FE analyses were performed using the commercial code Abaqus Version 6-5 [7].A mesh model with Triangular elements to six nodes was established in the cement and the pelvis bone and quadrilateral to four nodes for the other components of the prosthesis Figure 2. We opted for three orientations defined by the orientation of the neck femoral head compared to the axis of the cup of 0˚, 25˚ and 50˚ whom reflect the positions of the human body (Figure 3). Results In all of this study, we analyse the distribution of Von Mises stress in cement for three types of loading, charac-terized by the neck femoral position compared to the cup axis. Results represented on the Figures 3 and 4 showed that the distribution of the equivalent stresses in the cement, depending on the nature of loading, is not homogeneous.There exist strongly zones whose intensity depends on the nature loading.Indeed, the orientation of 50˚ of the implant axis compared to that of the cup generates the strongest stress. Effect of the Cavity's Size (Diameter) The above analysis was made on compact and dense cement in the absence of any porosity.Such a material presents a risk of infection for the subject.To minimize this risk, cement must contain porosities.Our objective is to analyze the effect of this porosity on the stress level and distribution in cement.With this intention, we considered the presence of a cavity (R = 50 μm) in the part of the cement being under very strong stress. Zones of cement are mechanically differently requested.The stress distribution around the cavity is illustrated on Figure 5.These last shows clearly stress intensification around this defect whose level grows with the implant orientation axis compared to that of the cup. The voluminous fraction of the pores characterized by their size variation plays a determining role in the mechanical behavior of the cement.The coalescence of these pores gives birth to cavities.Our objective is to analyse the density effect of these defects on the stress level and distribution in this structure component.Let us recall that the rupture stress of a solid is closely related to the density of porosity according to the relation ( 1 This parameter determines the orthopedic cement durability.Its effect on the stress magnitude and distribution is illustrated in Figure 6.These figures show the stress variation around the defect according to its size and nature loading.The defects with important sizes generate the stronger stress.In other words, a cavity concentrates stress more than one pore.This shows clearly that the phenomenon of pores coalescence presents a risk of intensification stresses in the cement.This risk is more important as the implant axis orientation compared to that of the cup. Figure 7 shows the variation of the equivalent stress generated in the cement around a large cavity in function of the implant orientation relative to that the cup.This stress increases gradually as the orientation increases and is highly localized in the contact zone cement/cup corresponds to a distance equal to 0.4 mm.This size, creating the most significant stresses in cement, was selected for further analysis. Effect of the Cavity-Cavity's Interaction The preceding analysis showed that the existence of mi- crocavities in cement is the seat of stress concentration.The study on the effect of cavity-cavity interaction is of great importance for the survival of a THP cemented. We considered the presence of two cavities (R = 50 μm) in the cement part of the most strongly requested according to the femoral head orientation compared to the cup axis (Figure 8).Results show that the stress in cement between two cavities increases with the decrease of cavity-cavity inter-distance (Figure 9(a)).Cement is under strong stress not in the part of contact with the implant but in the zone between two cavities.The stress intensity has increased by about four times.A rapprochement between two cavities induced a stress of strong amplitude. They largely exceed the stress threshold of tensile rupture (25 MPA) of this component and reach the level of rupture in compression (80 MPA).From such behavior can result the interconnection from the cavities could lead to a great probability of rupture of the cement and thus of the prosthesis.This risk of damage increases with the augmentation of the implant orientation (Figures 9(b) and (c)). For better illustrating this effect, an analysis of the stress distribution according to the implant axis inclination compared to that of the cup along the distance between two cavities was carried out (Figures 10(a)-(c)).These figures show that this stress is increasing when these cavities approach one of the other. It reaches its limiting value when these two defects are one in the vicinity very close to the other and when the position of the neck femoral is strongly oriented compared to the axis of the cup. Discussions Three of porosity's types can remain in the cement [9,10]:  The gas porosity resulting from trapped air during mixing of the cement components in the polymerization.These bubbles are always perfectly regular, almost spherical.The diameter of these cavities varies from a few millimeters to micrometers. Porosity by vacuum or removal is related to shrinkage during polymerization in vivo.It is the source of inside surface cavities blistered where the emerging spheres making footprints in relief.In a number of cases, these cavities are less regular and can initiate cracks whose starting point is likely to shrinkage;  Porosity by including blood, bone or soft tissue in the cementing of the implant. Porosity is an important determinant of mechanical performance of cement.It mainly affects the tensile strength, which is already weak cement, and fatigue, which undermines its long-term effectiveness [11]. Cement is a vector possible pharmacological and porosity is a local broadcast media, including antibiotics and antimitotics.One of its drawbacks is the potential creating irregularities areas of high stress concentrations and thus crack initiation.Coalescence of pores cavities can be fatal for the sustainability of the prosthesis.Indeed, the presence of cavities in the cement weakens by notch effect.This risk increases with the size and the volume fraction of the cavities and with the position of the human posture.The cavity-cavity interaction, in this component, generates an equivalent stress of high amplitude.The latter, about three times greater than its tension rupture when these defects are very close in the vicinity of one another, the threshold tends to compression.Such a configuration has a high risk of rupture of the cement and thus a high probability of loosening of the prosthesis.Moreover, coalescence of the cavities, characterized by a large size, can lead to the same effect.The existence of porosities in the bone cement weakens considerably.The life of the prosthesis is closely related to the porosities concentration in the binder. To fully ensure his function, the cement must ensure both a good transfer of antibiotic load and good adhesion between the elements of the structure, density, porosity must be optimized.This interaction effect tends to disappear when the two cavities are far from each other.They behave as if they are isolated from one another. Conclusions In this study, a finite element model is developed to calculate Von Mises stress under applied load to the femoral head.The effect of the femoral head orientation compared to the cup Position axis and the existence of the pore in the cement on Von Mises stress. The following general result may be drawn as a conclusion of this study:  The distribution of the Von mises stress in cement is not homogeneous.The zone of contact implant-cup is the place of strong stress its intensity depends on the implant position compared to the cup axis;  The presence of cavities in cement is the seat of stress concentration by notch effect.The stress is strongly localized around this defect.It is not uniformly distributed around the cavity.Its level is closely dependant in keeping with defect and with the nature loading;  The presence of cavities in the cement weakens it by notch effect.This risk increases with the size and voluminous fraction of cavities as well as with the position of human posture;  The level of the stress depends on the cavity-cavity interdistance.A short distance generates an intensity of stress three stronger than its stress rupture in tension.The stress rupture in compression is reached.The cement is strongly weakened.The effect of interaction cavity-cavity is marked less when these two defects are far one from the other. Figure 2 . Figure 2. Mesh of the analyzed prosthesis. Figure 3 . Figure 3. Distribution of the stress in the cement according to the femoral head orientation compared to the cup position axis. Figure 4 . Figure 4. Variation of the stress in the cement according to the femoral head orientation compared to the cup position axis. Figure 5 . Figure 5. Influence of the nature loading on the stress distribution around the cavity according to the femoral head orientation compared to the cup axis. Figure 6 . Figure 6.Variation of the stress in the cement around the cavity according to its size and the femoral head orientation.(a) 0˚; (b) 25˚; (c) 50˚. Figure 7 . Figure 7. Variations of the stress in the cement around the cavity depending on the nature of the load. Figure 8 . Figure 8.Effect of the cavity-cavity interaction according to the femoral head orientation compared to the cup axis. Figure 9 . Figure 9. Variation of the stress in the cement around the cavity according to the cavity-cavity inter-distance and the femoral head orientation: (a) 0˚; (b) 25˚; (c) 50˚. Figure 10 . Figure 10.Variation of the stress in the cement according to the interdistance cavity-cavity and the femoral head orientation: (a) 0˚; (b) 25˚; (c) 50˚.
v3-fos-license
2016-05-12T22:15:10.714Z
2015-07-31T00:00:00.000
2929393
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/srep12717.pdf", "pdf_hash": "cd2263dc9d613437e09883617504a67310c02da8", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46768", "s2fieldsofstudy": [ "Biology" ], "sha1": "cd2263dc9d613437e09883617504a67310c02da8", "year": 2015 }
pes2o/s2orc
RD-1 encoded EspJ protein gets phosphorylated prior to affect the growth and intracellular survival of mycobacteria Mycobacterium tuberculosis (MTB) synchronizes a number of processes and controls a series of events to subvert host defense mechanisms for the sake of residing inside macrophages. Besides these, MTB also possesses a wide range of signal enzyme systems, including eleven serine threonine protein kinases (STPKs). The present study describes STPK modulated modification in one of the hypothetical proteins of the RD1 region; EspJ (ESX-1 secretion associated protein), which is predicted to be involved in virulence of MTB. We have employed knock-out MTB, and M. bovis BCG as a surrogate strain to elaborate the consequence of the phosphorylation of EspJ. The molecular and mass spectrometric analyses in this study, confirmed EspJ as one of the substrates of STPKs. The ectopic expression of phosphoablative mutants of espJ in M. bovis BCG also articulated the effect of phosphorylation on the growth and in survival of mycobacteria. Importantly, the level of phosphorylation of EspJ also differed between pathogenic H37 Rv (Rv) and non pathogenic H37 Ra (Ra) strains of MTB. This further suggested that to a certain extent, the STPKs mediated phosphorylation may be accountable, in determining the growth and in intra-cellular survival of mycobacteria. Scientific RepoRts | 5:12717 | DOi: 10.1038/srep12717 strains which might be regulated by PhoP 8 . Nevertheless, molecular mechanisms behind differential activity of these proteins are still unknown. Expressions of STPKs as proteins involved in pathogenesis, have also been reported earlier 9 , suggesting differential control of signaling in diverse strains. Although, the genetic rationale for diminished virulence of MTB Ra has been elucidated 10 to reveal the comparative behavior, but the molecular mechanism is still unidentified. Nevertheless RD1 region as well as all the STPKs are co-inherited in both the strains. It is very plausible to infer that co-inheritance of STPKs and RD1 locus in these strains tune the physiology of MTB which modulate their differential behavior (Table 1). A recent study on the comparative gene expression analysis has identified 22 genes which were consistently expressed at higher levels in Rv than in Ra under a variety of growth conditions, and among them seven of the genes were involved in cell wall and cell processes 11 . We have investigated the interrelationships between EspJ (encoded by Rv3878) with STPKs of mycobacteria and their differential behavior in pathogenic Rv and in non-pathogenic Ra strains. EspJ, so far known as a hypothetical protein, has been putatively categorized as a regulatory protein 12 and annotated under functional category "cell wall and cell process" in Tuberculist database. This protein is present both in Ra and in Rv strains of mycobacteria but is absent in M. bovis BCG. Herein, we have elucidated the role of phosphorylated and un-phosphorylated EspJ in the growth of mycobacteria. Surprisingly, a higher degree of phosphorylation in Rv was observed over Ra which may imply, the distinctive behavior of this protein in pathogenic and in non-pathogenic strains. Further, in order to identify the key residues undergoing phosphorylation, we used LC/MS/MS which are potentially being used for the identification of phosphorylation sites at several instances 13 . Using the proteomics and bioinformatics tools, and coupling with the data received through in vitro kinase assay, we have identified phosphorylation sites in EspJ. Generation of phosphoablative mutants by site directed mutagenesis, followed by the transfer of these phosphoablative alleles in M. bovis BCG; we have deciphered its role in the growth and in persistence of mycobacteria. This phenotype was also confirmed by knocking-out the gene from MTB and then complementing with wild type and phosphoablative genes. Results Detection of putative phospho-motifs in RD1 encoded proteins. Web sites in RD1 encoded proteins (Table 2). Based on the comparative scores among these proteins, we have predicted EspJ as a possible substrate of mycobacterial kinase. An added criterion for the elaborative study of this protein has also been the presence of Rxx(S/T) motif, which exists in most of the substrates for STPK, including FtsZ protein, which regulates cell division in mycobacteria 14 . Bioinformatics analysis suggested Ser 70 , Ser 85 and Thr 144 as other most probable phosphorylation sites in EspJ protein. Expressions of EspJ and PknG proteins in different mycobacterial strains. The espJ and pknG ORFs were amplified by PCR from the genomic DNA obtained from different mycobacterial strains, using gene specific primers. The presence of espJ in Rv and Ra, and its absence in BCG and in MS were confirmed by PCR. Likewise, it was observed that the MTB orthologous pknG is present only in Rv, Ra and in BCG (Fig. 1a). In order to find out the transcription of espJ and to relate with the protein, we quantified the mRNA transcripts of the gene in pathogenic and in non-pathogenic mycobacteria at real time. The results were overwhelming and very conclusive. The nonpathogenic Ra showed a constitutive level of the gene expression in stationary and in log phases of growth with no sign of changes in the transcription level, while pathogenic counterpart Rv has six times higher transcripts in stationary phase as compared to logarithmic phase of growth (Fig. 1b). This suggested that the level of EspJ may play the role in regulating the growth potential in these two strains. Immunoblottings with EspJ and PknG antisera were done to further confirm the expression of EspJ and PknG proteins in different mycobacterial strains (Fig. 1c). Presence of PknG and absence of EspJ in M. bovis BCG, enabled us to employ it as a surrogate strain for the present study to address the role of EspJ in mycobacterial physiology. Confirmation of EspJ phosphorylation in vivo and in culture filtrate proteins of mycobacteria. In order to look for the comparative in vivo phosphorylation of EspJ protein in Rv and in Ra, the cell lysates were immunoprecipitated with the EspJ antiserum and then detected by western blotting. An interesting observation was established; showing that the amount of phosphorylated protein in Rv is substantially higher than the Ra (Fig. 1d). Further to affirm the differential phosphorylation of secreted EspJ protein; the log phase culture filtrates of Rv, Ra and BCG, were immunoblotted with EspJ antiserum and with anti-phospho Ser/Thr antibody (Fig. 1e). We questioned that whether the secretory nature of this protein mimics during intracellular infection, the cell lysates of infected and uninfected macrophages were subjected for immunoblotting with anti-EspJ serum. The data confirmed that protein is secreted out in the cytosol of macrophages (Fig. 1f). Although, till at this stage data revealed that EspJ is phosphorylated inside mycobacteria but yet not reported the involvement of any specific mycobacterial kinase. We randomly selected some of the STPKs to look for the degree of phosphorylation of EspJ in in vitro, and made an observation that this protein is phosphorylated irrespective of the kinases used. Hence, we extended the studies using PknG which showed sustained and considerable effects on EspJ. Phosphorylation of EspJ by Ser/Thr protein kinase and mass spectrometric analysis. To substantiate bioinformatics predictions, recombinant EspJ was phosphorylated with PknG and with PknK in vitro, and was confirmed by kinase assay. Increased levels of phosphorylation were observed in EspJ as compared to MBP, which is used as a universal substrate (Fig. 2a,b). Phosphorylated EspJ was further resolved by 2DE and analyzed by mass spectrometry to confirm and identify the phosphorylation sites. The data reported phosphorylated residue at Ser 70 position also by LC/MS/MS analyses of protein as compared to un-phosphorylated protein (Fig. 2c,d). The phosphorylation events were further ascertained by ProQ diamond staining (Fig. 2e) and by immunoblotting with pS/T antibody (Fig. 2e).The mutant protein was expressed and purified by Ni-NTA column. The kinase assay using the protein as substrate observed two fold reductions in the level of phosphorylation with PknG as compared to its wild type counterpart (Fig. 2f). In addition to EspJ, we have also used two other RD1 encoded secretory proteins CFP-10 and ESAT-6, to look whether these proteins are also getting phosphorylated by PknG or not. Kinase assay didn't demonstrate a significant level of phosphorylation in these proteins (Fig. 2g). This experiment conclusively reported that among the secretory proteins of RD1 region, only EspJ is significantly phosphorylated. Comparative analysis of EspJ and its phosphoablative mutants in mycobacterial growth. To explore the role of EspJ protein, the gene was transferred into surrogate strain; M. bovis BCG. The growth of mycobacteria was monitored either by counting the CFU on MB7H10 medium plates or by MGIT 960 system. Temporal growth curve (growth unit verses time) as well as increment in growth units for BCG, containing EspJ and its mutants (Fig. 3a,b) were analyzed for the fixed period (9-12 day) of time. The results revealed the slow growth of recombinant BCG, which has a copy of espJ as compared to BCG, containing vector alone. This finding indicated towards the partial involvement of EspJ protein in slowing down the growth of mycobacteria. On the basis of this information and further to corroborate the effect of phosphorylation on EspJ, we generated phosphoablative mutants by site directed mutagenesis and transferred all these alleles into M. bovis BCG. Recombinant BCG expressing mutant allele EspJ_S70A restored the multiplication as compared to BCG over-expressing wild-type EspJ (Fig. 3c,d). The rBCG expressing other mutant alleles were also studied extensively, but didn't show apparent restoration in growth defect (data not shown). Phagocytosis and intracellular survival of recombinant BCG expressing EspJ and phosphoablative mutants. To study the role of phosphorylation of EspJ in survival of mycobacteria, THP-1 cells were infected with BCG at MOI of 1:10 as described in experimental procedure. Phagocytosis was allowed to occur for 4 h. Cells were washed, and the remaining extracellular bacilli were killed by Equal amounts of CF proteins harvested from MTB and BCG strains were quantified and resolved by 12% SDS-PAGE. Western blot analysis was done using EspJ antibody. Since, BCG lacks EspJ ORF its CF was used as a negative control. The same blot was re-probed with anti pSer/Thr antibody to demonstrate differential in vivo phosphorylation of EspJ protein in Rv and in Ra. Densitometry was done as described. The proteins were immunoblotted with GarA antbody (which is secretory in nature) to show the equal level of loading (lower panel). The blots were also probed with hsp70 antibody to rule out any necrosis during harvesting. Since the bands were undetectable, the data cannot be shown. Besides secretion of EspJ in culture medium, the protein is also found to be present in the cytosol of infected macrophages at 24 h of infection (f). underlined that mutation at Ser 70 position is responsible for increasing the growth of mycobacteria, is also important for persistent survival of mycobacteria inside the macrophages. Figure 2. In vitro phosphorylation of EspJ protein Relative response of MTB containing wild type and phosphoablative mutant of espJ. We further ascertained the role of EspJ phosphorylation by generating MTB Ra knock-out strain (Fig. 4a,b). The growth kinetics of both knock-out and its complemented strains (wild type espJ as well as phosphoablative allele espJ_S70A) demonstrate involvement of this protein in growth of mycobacteria (Fig. 4c). We used these bacteria to study the intracellular survival inside THP-1 cells. The patterns of growth (Fig. 4d) were very similar to that shown in Fig. 4c. The espJ knock-out MTB multiplied more efficiently as compared to wild type strain. Since, EspJ has been reported to utilize Esx secretion system; we hypothesized that the events in intracellular survival may be due to transportation of phosphorylated protein in the macrophage (Fig. 4e). Phylogenetic analysis of EspJ orthologs in mycobacterial genus. Phylogenetic study among closely related orthologs also reveals the presence of EspJ protein mostly in slow grower mycobacterial strains. Multiple sequence alignment using MUSCLE, demonstrated conservation of phosphorylation sites in a wide range of mycobacterial strains. Interestingly, Ser 70 residue is also found to be important in slow grower mycobacteria as it is found to be substituted with other amino acids in fast growers (Fig. 5a,b). Discussion With the advent of STPKs in mycobacteria it has been well documented that phosphorylation of proteins involved in determining the virulence, effects growth and later the pathogenicity of mycobacteria 15,16 . Alongside, it has also been reported that mycobacterial STPKs are expressed differentially in pathogenic and in non-pathogenic strains 9 . These findings indicate towards an existing relationship between mycobacterial STPKs and virulence factors which determine their differential response in diverse strains. Deletion of certain regions of chromosome occurs during evolution of non-pathogenic mycobacteria from pathogenic counterpart M. bovis. RD1, a 9.5-kb DNA segment is deleted in all the BCG sub-strains 17 . Concurrently, Ra has also been evolved from its parental strain. Genomic analysis of Ra reveals deletion of chromosome region from Rv genome similar to events happened for attenuation of BCG 10 . Interestingly, in Ra; RD1 region which encodes several virulence factors 18 remains intact. These events may infer a differential behavior of this protein in pathogenic and in non-pathogenic strains. It is established that post-translational modification is a central mechanism in modulating a protein for its differential activity. Based on these evidences we hypothesize that RD1 encoded protein may have undergone differential phosphorylation during the process. In this study, we made efforts to establish relationship between STPKs and RD1 encoded proteins and further its differential response in pathogenic and in non-pathogenic strains. In order to understand the role of MTB STPKs signaling pathways in mycobacterial physiology, several efforts have been made to identify the substrates. It has previously been shown that proteins with the fork head-associated (FHA) domain get phosphorylated by STPKs of MTB and hence protein containing this motif has been identified as a substrate [19][20][21] . Similarly the RXS/T and RXXS/T motifs have been identified in Rv0019c and FtsZ proteins, which are reported to be phosphorylated by PknA 14 . Using in silico approach we have screened RD1 encoded proteins for the presence of such putative motifs. Besides that, web based tools like Kinasephos 2.0, Disphos 1.3, Netphos 2.0 and NetPhosBac1 have also been used to predict phosphorylation potential of all RD1 encoded proteins ( Table 2). All these bioinformatics studies suggest presence of such motifs in RD1 encoded proteins including EspJ. At the outset, we looked for transcription of espJ and expression of the protein in Rv and in Ra. The results indicated that this protein is expressed in the late stage of mycobacterial growth and its phosphorylation is variable in virulent and in avirulent mycobacteria (Fig. 1). The EspJ which was picked up among the RD1 encoded proteins through bioinformatics analysis, was further confirmed by in vitro kinase assay with PknG (one of the MTB STPKs). Although we could see the partial phosphorylation of EspJ with other STPKs (Fig. 2a), nevertheless the purpose of addressing this mechanism was to look the differential nature of EspJ in phosphorylated and in unphosphorylated forms, irrespective of the kinase used. Although the promiscuity of PknG cannot be claimed over other STPKs in mycobacteria, the rationale behind selecting PknG, is that it acts as a virulence factor and its differential expression occurs in pathogenic and in non-pathogenic strains of MTB 9 . Apart from that, PknG regulates mycobacterial physiology via phosphorylation of a wide range of substrates. It also regulates glutamate metabolism via phosphorylation of GarA 22 . Kinase assay suggests possible phosphorylation of EspJ protein encoded by Rv3878 gene (Fig. 2a,b). Further, in order to identify phosphorylation site(s), we used mass spectrometry, which is generally used to identify phosphorylation site in phospho-proteins 23 . Mass spectrometry data reveal Ser 70 as a major phosphorylation site in EspJ protein (Fig. 2c,d). To authenticate the phosphorylation site, phosphoablative mutant for Ser 70 was generated by site directed mutagenesis. The in vitro kinase assay using phosphoablative mutant as a substrate showed the abrogation of phosphorylation of EspJ protein, which confirmed the involvement of Ser 70 residue (Fig. 2e,f). Phosphorylation may occur at several positions in a protein. We used the bioinformatics tool to identify several putative phosphorylation sites in EspJ protein, which are characteristic features of phospho-residue of MTB STPK, like presence of RXXS/T motifs. We made mutants, at all such positions, but didn't get significant difference in the phosphorylation level by in vitro kinase assay with PknG (data not shown). We further predicted the possible role of EspJ in relation to mycobacterial physiology. Although, most of the orthologs of EspJ are uncharacterized, its closest ortholog MSMEG_0069 (M. smegmatis) has been annotated as translation initiation factor IF-2. Alignment of MTB EspJ (Rv3878) with MSMEG_0069 demonstrates large similarity between these two proteins (Fig. 5c). Earlier studies suggest involvement of phosphorylation as a mechanism to modulate translational event. In prokaryotes like in streptomycetes, ribosome-associated STPKs, phosphorylate 11 proteins which result in 30% loss of ribosomal activity 24 . Apart from that, activity of elongation factor-Tu involved in protein synthesis of MTB, is largely dependent on its phosphorylation 25 . Since, EspJ protein has been annotated as a regulatory protein in M. smegmatis, and is associated with the category of cell wall and cell processes 12 , we articulated that EspJ might participate in the growth and in survival of mycobacteria. To analyze whether presence of this protein is specific to slow grower mycobacteria; all available mycobacterial genome sequences were analyzed for the presence of a locus encoding EspJ protein. As evidenced from phylogenetic tree, EspJ is mostly present in slow grower mycobacteria (Fig. 5a,b). Moreover, substitution of Ser 70 site with phosphoablative alleles in slow grower mycobacteria intrigues us to see the biological relevance of this site in modulation of growth behavior of mycobacteria. The EspJ of mycobacteria is associated in virulence, while the similar kind of protein in E. coli has been predicted to be involved in secretion system and controlling the Src kinase activity 26 . We analyzed the protein sequence of these two proteins and inferred that although they share the same name but are entirely heterologous. To corroborate the biological effect of this protein we used M. bovis BCG as a surrogate model strain which contains all the STPKs but lacks EspJ due to deletion of RD1 region. Expression of PknG in M. bovis BCG was confirmed prior to use it as a surrogate strain (Fig. 1c). Recombinant BCG strains were generated by transferring espJ alleles downstream of the hsp60 promoter through integrative vector pMV361. Interestingly, we observed the slow growth of recombinant BCG as compared to wild type. This study along with phosphorylation event encouraged us to further investigate the effect of EspJ phosphorylation, on the growth of mycobacteria. When we transferred phosphoablative mutant allele EspJ_S70A in M. bovis BCG, we observed a considerable amount of increase in the growth of rBCG as compared to rBCG having wild type EspJ allele. Abolition of the growth defect after transferring EspJ_S70A allele in M. bovis BCG suggests involvement of this residue in phosphorylation by STPKs. Interestingly this residue is similar to phosphorylated residue found in FtsZ protein regulating cell division of mycobacteria and is part of Rxx(S/T) motif found in most of the phosphorylated residues in MTB 14 . The role of EspJ phosphorylation in mycobacterial growth has also been established by gene knockout strategy in MTB (Fig. 4). Overall, these findings for the first time demonstrate the involvement of phosphorylation as one of the mechanisms through which STPK associate with EspJ and orchestrate the growth of mycobacteria. Since, a large proportion of EspJ is secreted into the culture medium and in macrophages (Fig. 1e,f); we detected its comparative phosphorylation level between Rv and Ra in a log phase culture filtrate to delineate the in vivo phosphorylation of EspJ by STPKs. As expected, we found a higher level of phosphorylation in pathogenic strain compared to non pathogenic strain (Fig. 1d). We checked whether this phenomenon is not due to necrosis of the bacilli, we confirmed, using the hsp70 antibody. Since, hsp70 is not a secretory protein; we were unable to detect the bands on the blot. In agreement to our findings, a recent report on PE/PPE protein implicated the differential phosphorylation between Rv and Ra strains, which helped in determining the pathogenic phenotype 27 . Differential phosphorylation of EspJ, encoding a virulence factor suggests that loss of phosphorylation may be a mechanism adopted by non-pathogenic mycobacteria during evolution from pathogenic mycobacteria. In summary, the phosphorylation of RD-1 encoded protein is unique and may be critical for growth and survival of mycobacteria. One of the proteins encoded by this region; EspJ undergoes phosphorylation by STPK as evidenced by LC/MS/MS analyses. Proteins of pathogenic (Rv) and nonpathogenic (Ra) mycobacteria can be distinguished based on the variable levels of phosphorylation of EspJ. Study with a surrogate strain, M. bovis BCG; where STPKs are present and EspJ is deleted, demonstrated that in order to sustain the growth of mycobacteria, EspJ gets phosphorylated. This was further confirmed by complementing the phosphoablative mutant of EspJ in KO strain of MTB. Methods Mycobacteria cultivation and growth. Mycobacterial strains (Table 3) were cultured in Middlebrook 7H9 medium (Difco) supplemented with Albumin Dextrose Catalase (BD). A suspension of bacilli was prepared in 10 ml Middlebrook 7H9 broth by repeated vortexing log phase culture of mycobacteria. 0.5 ml of the diluted (10 -3 ) culture was transferred to BBL MGIT tube and incubated at 37 °C, and monitored for the change in fluorescence. The BACTEC MGIT 960 system monitors fluorescence (in Growth units) every hour. The growth obtained was recorded about every 24 h during the first 120 h. The prepared dilutions were also plated on MB7H10 agar and incubated at 37 °C to detect CFU and to eliminate the possibility of any other bacterial contaminations. Plasmid construction, mutagenesis and protein purification. The ORFs of pknG and RD1 encoded espJ were amplified from MTB Rv genomic DNA using flanking HindIII anchored primers (Table 4). Both the genes were subcloned into pTriEx-4, expression vector (Novagen) and were used to transform E. coli BL21 (DE3). E. coli strains DH5α and BL21 (DE3) were cultured in Luria-Bertani medium. Over-expressions of His-tag recombinant proteins were done using 0.2 mM IPTG concentration at 18 °C for overnight and purification was done with Ni-NTA column chromatography. The protein showed a relative molecular mass of 43.0 kDa in E coli, while in mycobacteria it is expressed as 27.4 kDa monomeric protein. In addition, espJ was also sub cloned downstream of the hsp60 promoter Scientific RepoRts | 5:12717 | DOi: 10.1038/srep12717 into pMV361 vector. This vector contains an E. coli origin of replication (oriE), the attP and int genes of mycobacteriophage L5 (for integration in the mycobacterial chromosome) and a kanamycin resistance marker (Kan r ). The resulting plasmid was used for complementation in M. bovis BCG to explore the role of espJ and generation of phosphoablative mutants by site directed mutagenesis at T68V, S70A, S85A and T144V positions by PCR using overlapping primers ( Table 4). The veracity of all the clones was checked by DNA sequencing. Confirmed clones having wild type as well as mutated alleles were electroporated into M. bovis BCG 28 . Recombinant expressions of these proteins in BCG were ascertained by western blotting using EspJ antiserum. Antibodies. Polyclonal antisera for PknG, GarA and EspJ were raised in the rabbits in the animal facility of the institute. All the animal experiments were performed in accordance with the approved guidelines of CSIR-Central Drug Research Institute, Lucknow. Albino rabbits (1.5 kg) were obtained from the animal colony of the institute. The animals were maintained in an animal house accredited by the National Accreditation Board for Testing and Calibration Laboratories guidelines controlled by the Committee for the Purpose of Control and Supervision of Experiments on Animals (CPCSEA). CSIR-Central Drug Research Institute constituted ethics committee for use of laboratory animals has approved all the protocols for the study. The animals were maintained in standard conditions of temperature and humidity (temperature 22 ± 2 °C; humidity 45-55%; light intensity 300-400 lx) and given proper pellet diet and water ad libitum. Antibodies for hsp70 (Sigma) and pSer/Thr (Qiagen) were procured commercially. Bioinformatics analysis. Phosphorylation potential as well as putative phosphorylation sites in RD1 encoded proteins were predicted by publicly available online analysis tool ( 29 . Phylogenetic and molecular evolutionary analyses were conducted using MEGA version 6 30 . The phylogenetic tree was constructed by the neighbor-joining method, using Jones-Taylor-Thornton substitution model. In vitro kinase assay. To validate the MS-MS data for phosphorylation sites; quantitative and qualitative in vitro kinase assays were performed with either using ProQ Diamond (Invitrogen) or ADP-Glo ™ Kinase Assay (Promega) according to manufacturer's instruction. For ADP-Glo assay, PknG (25 ng) was used as a kinase and incubated either with EspJ or with the mutants (100 ng) in buffer containing 40 mM Tris (pH 7.5), 2 mM MnCl 2 , 20 mM MgCl 2 , 2 mM DTT, 0.5 mg/ml BSA, and 0.1 mM ATP for 1 h at 26 °C. The assay was also standardized using different concentrations of substrate (100, 50 and 25 ng). 2D gel electrophoresis and mass spectrometry analyses. Purified recombinant protein EspJ was subjected to in vitro kinase assay as described above, and 2DE was done to resolve these two proteins, using a standard protocol. Phosphorylated as well as un-phosphorylated EspJ spots from two different gels were excised and digested with Trypsin and processed further for LC/MS/MS analyses by HCT Ultra PTM Discovery System (ETD II-Bruker Daltonics) with 1100 series HPLC (Agilent) to identify phosphorylated sites. Briefly, the protein spots were excised from the coomassie stained gel, diced into small pieces in microcentrifuge tubes. The gel pieces were destained by washing them twice with 200 μ l of 25 mM ammonium bicarbonate (ABC, Sigma).The gel pieces were dehydrated thrice with 50 μ l of a 2:1 mixture of acetonitrile (LC-MS CHROMASOLV, Sigma) and 50 mM ABC for 5 minutes and twice with 25 mM ABC for 2 minutes. The proteins were reduced with 10 mM DTT in 100 mM ABC at 60 °C for 1 hr, washed with 25 mM ABC and alkylated with 100 mM iodoacetamide at room temperature for 20 minutes to alkylate the cysteine residues. Gel slices were dehydrated twice with two washes of acetonitrile followed by a final wash with 25 mM ABC. The gel pieces were dried in speed vac for 15 minutes and rehydrated with 30 μ l of 20 μ g/ml solution of trypsin (Proteomics grade, Sigma) overnight at 37 °C. The supernatant was acidified with 0.1% trifluoroacetic acid (TFA, Sigma) and a 1:1 (v/v) mixture of the sample and α -Cyano-4-hydroxycinnamic acid (CHCA) was spotted on the MALDI plate prior to analysis. The mgf file generated after the sample run was used for MTB UniProt database search using a Mascot software package (Matrix Science London). The peptide precursor mass tolerance was set to 1.2 Da, and fragment ion mass tolerance was set to 0.6 Da. Carbamidomethylation on cysteine residues was used as fixed modification and oxidation of methionine and phosphorylations of serine, threonine, and tyrosine were used as variable modifications. Phosphopeptides were detected in phosphorylated sample. THP-1 macrophage cells infection with recombinant mycobacteria. The macrophage cell line THP-1 was cultured at 37 °C and 5% CO 2 in RPMI-1640 medium (2 mM L-glutamine,10 mM HEPES, 1 mM sodium pyruvate, 4.5 g/L glucose and 1.5 g/L sodium bicarbonate), supplemented with 10% heat inactivated fetal calf serum (FCS). The cells were differentiated into macrophage-like cells by treatment with phorbol myristate acetate (PMA) before infection. The cells having a density of 5 × 10 5 cells/ml were seeded in tissue culture plates. The single cell suspension of log-phase mycobacteria was prepared
v3-fos-license
2024-07-07T15:06:22.456Z
2024-07-05T00:00:00.000
270998500
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": null, "oa_url": null, "pdf_hash": "96383e63954b6ff1bcee2d1a4d7eb6bfa4abfdee", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46769", "s2fieldsofstudy": [ "Medicine" ], "sha1": "f60d2c8499a19fcf6e942d245c0c16f727dbad73", "year": 2024 }
pes2o/s2orc
Performance of federated learning-based models in the Dutch TAVI population was comparable to central strategies and outperformed local strategies Background Federated learning (FL) is a technique for learning prediction models without sharing records between hospitals. Compared to centralized training approaches, the adoption of FL could negatively impact model performance. Aim This study aimed to evaluate four types of multicenter model development strategies for predicting 30-day mortality for patients undergoing transcatheter aortic valve implantation (TAVI): (1) central, learning one model from a centralized dataset of all hospitals; (2) local, learning one model per hospital; (3) federated averaging (FedAvg), averaging of local model coefficients; and (4) ensemble, aggregating local model predictions. Methods Data from all 16 Dutch TAVI hospitals from 2013 to 2021 in the Netherlands Heart Registration (NHR) were used. All approaches were internally validated. For the central and federated approaches, external geographic validation was also performed. Predictive performance in terms of discrimination [the area under the ROC curve (AUC-ROC, hereafter referred to as AUC)] and calibration (intercept and slope, and calibration graph) was measured. Results The dataset comprised 16,661 TAVI records with a 30-day mortality rate of 3.4%. In internal validation the AUCs of central, local, FedAvg, and ensemble models were 0.68, 0.65, 0.67, and 0.67, respectively. The central and local models were miscalibrated by slope, while the FedAvg and ensemble models were miscalibrated by intercept. During external geographic validation, central, FedAvg, and ensemble all achieved a mean AUC of 0.68. Miscalibration was observed for the central, FedAvg, and ensemble models in 44%, 44%, and 38% of the hospitals, respectively. Conclusion Compared to centralized training approaches, FL techniques such as FedAvg and ensemble demonstrated comparable AUC and calibration. The use of FL techniques should be considered a viable option for clinical prediction model development. Introduction The increasing adoption of electronic health records (EHRs) across healthcare facilities has led to a wealth of data that can be harnessed for developing prediction models for various medical applications.Such models may improve patient stratification, inform clinical decision-making, and ultimately enhance patient outcomes.In the field of cardiovascular medicine, combining records from multiple centers has successfully been used in training clinical prediction models (CPMs) (1).Such multicenter models tend to generalize better and are more robust than those derived from individual centers.Although models trained on data from a single center may perform well within their local hospital settings, they require a large number of records for training, and their performance often deteriorates when applied to new centers or other patient populations.However, sharing patient data between centers is not always straightforward.Concerns about patient privacy, the implementation of new regulations such as the General Data Protection Regulation (GDPR), and the challenges of integrating data from different centers all pose significant challenges.There is a growing need to implement strategies for training prediction models on multiple datasets without sharing records between them. Federated learning (FL) has emerged as a promising approach to address this challenge.FL is a machine learning approach that enables multiple parties to build a shared prediction model without needing to exchange patient data. However, implementing FL comes with its own set of challenges.Aside from logistical and communication issues, an important question is whether FL has a detrimental impact on the quality of learned models (2).While promising, the impact of FL on model quality has yet to be thoroughly examined in various areas of medicine. Understanding the potential benefits and limitations of FL in developing multicenter prediction models helps facilitate a more effective and privacy-preserving use of electronic patient data in risk prediction.To that end, our analysis investigates the potential of FL as a viable strategy for multicenter prediction model development. FL has rarely been studied in the cardiovascular context (3)(4)(5) and not yet in the transcatheter aortic valve implantation (TAVI) population, which is the focus of this study.TAVI is a relatively new and minimally invasive treatment for severe aortic valve stenosis.The Netherlands Heart Registration (NHR) is a centralized registry that holds records of all cardiac interventions performed in the Netherlands, including those of TAVI patients who are treated in the 16 hospitals performing this operation.Across these 16 hospitals, the TAVI patient population could vary for a number of reasons, such as regional population demographic differences. Risk prediction models for TAVI patients have been developed using data originating from a single hospital (6) or combining records from multiple centers (1,7,8).In a previous study, we evaluated the performance of one such centralized, multicenter TAVI early-mortality CPM and observed the model to have a moderate degree of external performance variability, most of which could be attributed to differences in hospital case-mix (9).However, the performance of such models in an FL approach, compared to a centralized or local approach, remains unknown. We aimed to evaluate the impact of two important FL techniques: federated averaging (FedAvg) (10) and mean ensemble (henceforth referred to as ensemble) (11), explained further in the Materials and methods section, on the predictive performance of TAVI risk prediction models.This performance is compared to a centralized model and local center-specific models (Table 1). Materials and methods This study adhered to the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD) statement (12).This study meets all five of the CODE-EHR minimum framework standards for the use of structured healthcare data in clinical research (13). Dataset In this nationwide retrospective multicenter cohort study, we included all patients who had a TAVI intervention in any Dutch hospital for the 9-year period from 1 January 2013 to 31 December 2021.Data were collected by the NHR (14).Permission was granted for this study to use the data and include a pseudonymized code indicating the center in the dataset (Supplementary Appendix A). The outcome of interest was the 30-day post-operative mortality.Mortality data were obtained by checking the regional municipal administration registration, Basisregistratie Personen (BRP). Patients lacking the outcome measurement of 30-day mortality were excluded. No ethical approval was needed according to the Dutch central committee of Human Research, as the study only used previously collected cohort registry data.All data in this study were fully anonymized before we accessed them.Approval for this study was granted by the Committee of Research and Ethics of the Netherlands Heart Registry on 2 February 2021. Model strategies Four different model development strategies (henceforth referred to as models) were considered in our experiments (Table 1).A central model was derived using the combined records from all hospitals (Supplementary Figure S1).The derivation of such a model consists of two steps: (1) performing variable selection from the list of candidate predictor variables (explained further in Section 2.3); and (2) fitting predictor variable coefficients.Leveraging the entire dataset enables capturing relationships between predictors and outcomes across multiple centers.Due to the nature of the central design, all data between hospitals are shared, including individual patient record variables. In the next strategy, multiple local models were trained, one for each hospital's dataset (Supplementary Figure S2).The derivation of each center local model would follow in much the same steps as in the centralized model strategy.As the local models are specific to each hospital, they avoid the need to share any data between centers. In addition to these baselines, we considered two popular FL techniques: FedAvg and an ensemble model.To conceptualize the idea behind FedAvg, one can think of averaging the knowledge of a classroom where students train on their schoolwork and then share their key learnings with a central teacher who combines them to create a better understanding for everyone.In the case of the current study, each participating center trains a local model for one epoch (that is, one pass on all the data) and shares its model parameters with a central server (10).Once each center has shared model parameters, model updates are aggregated by the central server (Supplementary Figure S3).This new aggregated model is then sent back to the centers for further local training in the next epoch.This process continues until convergence or a pre-specified number of epochs is reached. The ensemble model approach is similar to combining votes from a diverse group, where the final prediction is the most popular choice (similar to how a majority vote wins an election).For the ensemble model in this case, a local model is fitted on each center's data.The ensemble's prediction for each patient is then formed by averaging the predictions of each local model from all hospitals (Supplementary Figure S4) (15).With this strategy, only hospitallevel models are transmitted between the centers.For all model development strategies, we fitted logistic regression models with Least Absolute Shrinkage and Selection Operator (LASSO) penalization.This approach results in automatically selecting variables deemed predictive of the outcome. Model recalibration Table 1 provides a framework for summarizing, among others, the aspect of model recalibration.Ensuring a model is wellcalibrated before its application in practice is critical.If a decision is to be made based on a predicted probability from a model, then the predicted probability should be as close as possible to the true patient risk probability.This is what calibration performance measures. The recalibration aspect describes the addition of a final step to the model derivation process, where recalibration of the intercept and slope of the linear predictor is performed using the model's predictions on the training data.The derived recalibration function is used thereafter whenever the model makes predictions.Specifically, in recalibration, we align the true outcomes from the different centers with their corresponding model predictions followed by fitting the recalibration function.This is done by fitting a logistic regression model in which the predicted 30-day mortality probability is the sole covariate to predict the true 30-day mortality outcome.As listed in hospital.In the central case, a single recalibration function would be learned on the combined training dataset predictions from all hospitals.In the federated recalibration approach, a federated learning strategy (such as FedAvg) would be used to derive a single recalibration function while also avoiding the need to share the uncalibrated predictions between centers. In our main analysis, we focused on the results from the FL techniques with central recalibration and did not investigate all options, such as learning the recalibration function in a federated manner. Candidate predictor variables The TAVI dataset included variables for patient characteristics (e.g., age, sex, and body mass index), lab test results (e.g., serum creatinine), relevant medical history (e.g., chronic lung disease), and procedure characteristics (e.g., access route and use of anesthesia) (Supplementary Table S1).All 33 candidate predictors were collected prior to the intervention.Threshold values for the body surface area (BSA) were used in summarizing patient characteristics. In all model strategies, we used LASSO to perform automatic variable selection.In the case of FedAvg, LASSO was first used on each hospital dataset.Later, the selected predictors from each hospital-local LASSO were aggregated via center-weighted voting and a center agreement strength hyperparameter (Supplementary Methods S1). Experimental evaluation We adopted two primary evaluation strategies to fit the type of evaluation: a 10-fold cross-validation (CV) approach for the internal validation and leave-center-out analysis (LCOA) for the geographic validation. In some cases, a hospital-local model could not be fitted due to the insufficient number of records for the prevalence of outcomes.We compared the performance of the local model to the other models only in cases where a local model was successfully derived and reported the cases where fitting a local model failed. Cross-validation For cross-validation, we first randomly partitioned the entire dataset into ten equal subsets, stratified by the outcome.In each iteration of the CV, we utilized nine subsets (90% of the records) for model training and held out the remaining subset (10% of the records) for testing.This process was repeated 10 times, each with a different test set.In the case of local, FedAvg, and ensemble, the entire dataset was first partitioned by hospital, and thereafter each hospital dataset was randomly partitioned into 10 equal subsets stratified by the outcome. Leave-center-out analysis For the federated strategies, we conducted a LCOA for a more robust external geographic validation (9).In this approach, we created as many train/test dataset pairs as there were hospitals in the dataset.For each pair, the training set encompassed records from all hospitals but one (the excluded hospital), while the test set solely contained records from the excluded hospital.This method allows us to evaluate how well each model performed when applied to a new center. Pooling results In the context of CV, mean metric values and confidence intervals (CIs) for a model were derived from the individual metric results per test set.During this process, the predictive performance of each model was computed separately for each test set, generating 10 metric values.These 10 values were then averaged to arrive at the mean pooled metric, and their standard deviation was used to compute a 95% CI. During the LCOA, performance was calculated per external hospital and then pooled via random effects meta-analysis (REMA) with hospital as the random effect to give a mean estimate and 95% CI. Performance metrics Discrimination was evaluated using the area under the ROC curve (AUC-ROC, henceforth referred to as AUC).The AUC metric summarizes a model's ability to discriminate between events and cases.It involves sensitivity (also called recall in information retrieval) and specificity across all possible threshold values.Calibration was evaluated by the Cox method using the calibration intercept and slope and their corresponding 95% CIs (16).A model's predictions were deemed to be miscalibrated if either (1) the 95% CI for its Cox calibration intercept did not contain the value zero (miscalibration by intercept) or (2) the 95% CI of its intercept did contain the value zero, but the 95% CI for its Cox calibration slope did not contain the value one (miscalibration by the slope) (16). In addition, calibration graphs showing a model's predicted probabilities vs. the observed frequencies of positive outcomes were drawn for visual inspection. Net reclassification improvement (NRI) was calculated between the predictions of any two models in either validation strategy (CV and LCOA) (Supplementary Methods S2). Significance testing Bootstrapping with 3,000 samples was used to test for a difference in (paired) AUCs between two prediction models (17).This test was run per AUC of each test fold dataset during CV.Analogously, the test was applied per AUC of each external center dataset in the LCOA. Sensitivity analyses Apart from the main experimental setup, we considered two additional modifications to it in the form of sensitivity analyses. First, to see what effect the recalibration step was having on the two FL approaches (FedAvg and ensemble), we evaluated their performance without recalibration in a sensitivity analysis.In a second sensitivity analysis, we excluded hospitals with a low TAVI volume from the dataset and re-evaluated the models' performance results.In this case, we defined a low TAVI volume to be any hospital that performed fewer than 10 TAVI procedures in any year of operation after its first year. Hyperparameter optimization Hyperparameters for LASSO and FedAvg were optimized empirically on the training data (Supplementary Methods S3).For LASSO, we optimized the regularization parameter lambda, while for FedAvg, we optimized on the learning rate, number of training epochs, and variable selection agreement strength (Supplementary Methods S1). Handling of missing data Variables with more than 30% missing values were not included as predictors. The remaining missing values were assumed to be missing at random and were imputed using the Chain Equations (18).As shown in Table 1, imputation was done on the center-combined dataset for the central model, while for the other models, imputation was handled separately per-center dataset.In both validation strategies (CV and LCOA), missing values were imputed separately on the training and test sets (further information is provided in Supplementary Methods S4). Fitting final models From each of the four strategies, a "final" version of their model was fitted using records from the complete dataset.We used the resulting final models to report on and compare the predictor variables selected by each model strategy.In the case of central and FedAvg, the "final" model comprised of just one single logistic regression model, while for local, the "final" model was a set of h hospital-local models (where h is the number of hospitals in the dataset).The ensemble produced a "final" model comprised of h local models and one top-level model, which averaged the predictions from the h hospital-level models. Software All statistical analyses were performed in the R programming language (version 4.2.1) and R studio (version 2023.03.1).The metamean function from the meta package was used for conducting the REMA, and the roc.test function from the pROC package was used for the bootstrap testing for the difference in AUCs between two models.The "mice" package in R was used for imputing missing values (mice version 3.14.0).All the source code used in this analysis was documented and made openly available on GitHub (https://github.com/tsryo/evalFL).Experiments were carried out on a desktop machine with 16 GB of memory and an i7-10700 2.9 GHz processor and took approximately 2 days to run. Results The results in this section are structured into four sub-sections.First, we provide summary statistics of the TAVI dataset and its pre-processing.We then report on the models' predictive performance measures from cross-validation and LOCA.Third, we report on selected predictor variables in each model type.Finally, we present results from the sensitivity analyses.Figure 1 provides a graphical overview of the experimental setup, methods, and key findings from our analyses. Between 2013 and 2021, there were 17,689 patients with a TAVI intervention in one of the 16 Dutch hospitals (labeled A-P).In total, 1,028 patients lacked an outcome measurement; therefore, these patients were excluded from the analysis. The final TAVI dataset consisted of 16,661 records with an average outcome prevalence of 30-day mortality of 3.4%.The prevalence ranged from 1.2% to 5.8% between hospitals, with an intra-quartile range (IQR) of 2.8%-3.9%(Supplementary Table S2).From the list of all 33 candidate predictor variables, only the variable of frailty status was excluded for having more than 30% of its records missing. Model performance Predictive performance results for each model across both internal validation (CV) and external validation (LCOA) are reported in the following. Due to the lower volume of TAVI records and low outcome prevalence, fitting a hospital-local model failed in some iterations of the CV analysis.From the 10 folds during CV, a local model could not be fitted in 100% of folds in centers L and P, 90% of folds in center M, 70% in center I, 60% in center K, 40% in N, 20% in C and J, and 10% in centers F and O (Supplementary Table S3). Leave-center-out The meta-analysis pooled mean AUC from the LCOA was 0.68 (95% CI: 0.66-0.70)for the central model (Figure 2B, Supplementary Table S4), and AUC values ranged from 0.62 to 0.76 between external centers (Supplementary Table S7).FedAvg also had a mean AUC of 0.68 (95% CI: 0.65-0.70),and its AUC values for the individual centers ranged from 0.56 to 0.80.For the ensemble, the mean AUC was 0.67 (95% CI: 0.65-0.70),and AUC values ranged from 0.46 to 0.76 between external hospitals.Bootstrap AUC testing from LCOA showed that both FedAvg and ensemble outperformed central in one hospital (P) (Supplementary Table S8).In another two centers (C and N), FedAvg outperformed ensemble, and in one hospital (H), central outperformed ensemble. Calibration Calibration performance results varied across the different models and validation strategies. Calibration graphs showed that all models suffered from overprediction in the higher-risk ranges.To better inspect the lowerrisk probabilities (found in the majority of the records), calibration graphs for a model were visualized excluding the top 2.5% of highest predicted probabilities. Leave-center-out In the LCOA, the mean meta-analysis pooled calibration intercept for the central model was −0.01 (95% CI: −0.16 to 0.15) and the calibration slope was 0.88 (95% CI: 0.76-1.01)(Supplementary Table S9).Miscalibration was detected in 44% of external hospital validations for the central model (Supplementary Table S11).In FedAvg, the calibration intercept was 0.01 (95% CI: −0.16 to 0.18), the calibration slope was 1.04 (95% CI: 0.89-1.19), and miscalibration occurred in 44% of the external hospitals.The ensemble model had a calibration intercept of 0.01 (95% CI: −0.14 to 0.16), the calibration slope was 0.97 (95% CI: 0.82-1.12),and miscalibration was seen in 38% of centers.Similar to the calibration graph from CV, the calibration graph in LCOA showed central to most closely follow the line of the ideal calibration graph (Figure 3B). Net reclassification improvement NRI comparison results showed central models to outperform the rest in predicting positive outcomes during CV and LCOA.From CV, local models were superior to the rest for predicting negative outcomes, while during LCOA, central models showed a higher NRI than the rest for negative outcomes.In both CV and LCOA, FedAvg beat ensemble in the case-negative group.In the LCOA case-positive group, ensemble had a better NRI than FedAvg.Full results from comparing model predictions using NRI can be found in Supplementary Results S1. Predictors selected From the final models fitted using the whole dataset, FedAvg and ensemble both used the same set of 20 variables (Supplementary Tables S12, S13).AUCs from cross-validation (A) and leave-center-out analysis (B) of TAVI patient 30-day mortality risk prediction models.Next to each model's name, its mean AUC is given. Calibration graphs from cross-validation (A) and leave-center-out analysis (B) results.The calibration graphs are shown after trimming the 2.5% highest predicted probabilities to focus on the bulk of the sample.The legend shows the calibration intercept and slope of each model, respectively, as obtained from the Cox method ( 16).Mean values for cross-validation were obtained by computing performance metrics on the combined predictions from all corresponding test sets.An asterisk (*) is placed after the names of the models where miscalibration was detected by way of the Cox method, and a hat (^) symbol is placed if miscalibration occurred in the calibration slope.The calibration intercept and slope values shown in the legend are calculated from all the predictions, including the 2.5% highest predicted probabilities.The hospital-local models used between 2 and 14 variables (IQR 4-9).Selected variables occurring in at least 50% of all local models were age, left ventricular ejection fraction (LVEF), body mass index (BMI), BSA, and procedure access route.In the case of two hospitals (L and P), no local model could be trained due to insufficient TAVI record volumes with a positive outcome. The central model selected 19 predictor variables (Supplementary Table S12), which comprised 13 predictors already selected by the other strategies, plus an additional 6 new predictors [Canadian Cardiovascular Society (CCS) class IV angina, critical preoperative state, dialysis, previous aortic valve surgery, previous permanent pacemaker, and recent myocardial infarction].More information on the considered and selected variables can be found in Supplementary Table S14. Sensitivity analyses In such an analysis, where the recalibration step from model training was skipped for FedAvg and ensemble models, we saw that both performed significantly worse in calibration but not in AUC.In the second sensitivity analysis, where three low-volume heart centers (P, O, and N) were excluded from the analysis, the performance of the ensemble model remained mostly unchanged, while AUC was negatively affected for the other models.From this same analysis, an improvement was observed in calibration during CV for FedAvg and a worsening for central was observed during LCOA.Full results from the two sensitivity analyses are available in Supplementary Results S2. Summary of findings In this study, we investigated the performance of two FL approaches compared to central and local approaches for predicting early mortality in TAVI patients.We showed that FedAvg and ensemble models performed similarly compared to a central model.The hospital-local models were worse in terms of average AUC compared to the other approaches. Testing for AUC differences showed the central model to outperform local and FedAvg models but not ensemble during internal validation.The local models, however, did not significantly outperform the federated ones, suggesting that the AUC performance of FedAvg and ensemble lied somewhere between that of the central and local models. Central and federated models performed similarly well in terms of calibration, whereas local model predictions were more frequently miscalibrated.Furthermore, in two cases, the local models could not be fitted due to the low number of positive outcome records in their datasets.Although local models were calibrated by design to their corresponding hospital-local training datasets (Table 1), this was often not sufficient to produce a good calibration on their corresponding test sets.While the federated models may not have been calibrated by design, they offered more options for recalibration (such as global, local, or federated recalibration).This could provide model developers with more fine-grained control over tradeoffs between maintaining data privacy and improving model calibration. In the main experiments, the choice was made to use the central recalibration strategy (as opposed to local or federated) for the federated approaches.Although this approach requires the sharing of patient outcome data and model predictions between centers, it does offer the most promising recalibration approach of the three options. In terms of NRI, there was an observed improvement from local to FedAvg and ensemble to central when looking at the outcome-positive group of records during internal validation (Supplementary Results S1). When comparing the two federated approaches, it is difficult to say that one strategy was better than the other, as both had strengths and weaknesses.In terms of discrimination, FedAvg seemed to be slightly superior to the ensemble model.For model calibration during internal validation, FedAvg and ensemble showed nearidentical results; however, in the external validation, the ensemble approach was miscalibrated in fewer external hospitals. From an interpretability standpoint, the FedAvg model would be preferred to the ensemble one, as it delivers a single parametric model with predictor variables and their coefficients.The ensemble, on the other hand would, comprise a list of parametric models (which may not all use the same variables), plus a top-level parametric model that combines the outputs from the aforementioned list.While the ensemble model is not as easily interpretable immediately, techniques like metamodeling could be useful to bridge this gap (19). It is worth noting that, although easily interpretable, the FedAvg model was more costly to develop than the ensemble one regarding computing resources.Depending on the number of hyperparameter values considered, we saw that the training times for the FedAvg model could easily become orders of magnitude larger than those for the ensemble model.In the current experiments, we developed our in-house frameworks for both federated approaches and encountered more hurdles with the FedAvg strategy-these included issues such as model convergence problems and the need to use a more elaborate variable selection strategy, which introduced the need for an additional hyperparameter. Strengths and limitations Our study has several strengths.It is the first study on employing federated learning in the TAVI population and one of the very few FL studies in cardiology.It is also based on a large national registry dataset consisting of all 16 hospitals performing TAVI interventions in the Netherlands.In addition, we provided a framework (in Table 1) of the various important elements to consider when adopting FL strategies in this context.We also considered multiple aspects of predictive performance and employed two validation strategies to prevent overfitting and optimism in the results.Finally, two sensitivity analyses were conducted to understand the robustness of our findings. Our research also has limitations.We looked at FL prediction models for TAVI patients, considering only one outcome: the 30day mortality.However, early post-operative mortality is a relevant and important clinical outcome in the TAVI patient group. From a privacy perspective of local hospitals, we did not evaluate additional techniques that could be used to preserve patient privacy at local centers (such as differential privacy). We also considered only one type of ensemble approach (mean volume-weighted ensemble) and only one type of federated aggregation approach (FedAvg),although a number of alternatives are available in both cases (11,20).Although relatively small, the group of patients excluded from the analysis due to missing outcome values could have somewhat biased our results in model performance.Changes over time in TAVI intervention modalities and patient selection protocols could also have impacted model performance estimates (21). Finally, we did not extensively tune hyperparameters, which might have affected the performances of the FedAvg and ensemble models (22). Comparison with literature Few studies have investigated the impact of FL in the cardiology domain (23)(24)(25).In only one study, the authors look at risk models for TAVI patients (23).In this study, Lopes et al. developed non-parametric models for predicting 1-year mortality after TAVI on a dataset from two hospitals.They compared hospital-local model performance against that of federated ensemble models and found the ensemble models to outperform the local ones.Our findings on the ensemble model's superior performance align with the study by Lopes et al.However, we expanded on their findings, first, by evaluating predictive performance with a much larger number of hospitals (16 vs. 2); second, by considering model calibration performance and NRI in addition to AUC; third, by performing additional geographic validation; and finally by considering a centralized model strategy as a baseline in addition to local and federated ones. Another study by Goto et al. looked at training FL models to detect hypertrophic cardiomyopathy using ECG and echocardiogram data from three hospitals (24).The authors considered the AUC metric for discrimination and looked at FedAvg and local hospital models.They reported that the FL models outperform local models in terms of AUC, something we also observed in the current study. In other medical domains, FL models have previously been evaluated on their performance compared to models derived from non-FL techniques. A similar study to ours that described the benefits of using centralized models compared to federated and local ones is that by Vaid et al. (26).In their study, the authors developed prediction models for COVID-19 patient 7-day mortality outcomes and reported that in five out of five hospital datasets, the models derived from a central development strategy outperformed both local and federated models in terms of AUC.This finding was corroborated in our study for the local models but not for the federated models, which performed on par with the central ones.Differences in the domain of application and in the datasets may explain this.From inspecting the NRI of our models, however, it became clear that the central models offered an improvement on the federated ones, albeit not a statistically significant one.The findings from Vaid et al. (26), namely, that local models tended to underperform compared to central and federated models (in AUC but also in calibration), align with our findings. (2) looked at eight previous studies of prediction models that used a centralized model approach and attempted to reproduce these eight models with the modification of using FL in their development strategies.From the eight models they evaluated, only one looked at hospital as the unit of the federation and reported a coefficient estimate for extrapulmonary tuberculosis in individuals with HIV.This coefficient differed significantly between the centralized and federated approaches.However, in a different setting, we observed similar findings with respect to the coefficients of our federated and centralized TAVI risk models. Implications and future studies For clinicians wanting to adopt a federated learning approach for developing prediction models for TAVI patients, our recommendation would be to use the ensemble strategy if predictive performance is most important, while the FedAvg strategy should be considered if one is willing to sacrifice a bit of model performance for better interpretability. From the federated learning aspects overview (Table 1), possible model strategy setup options were described.While we attempted to make a comprehensive experimental setup, the purpose of this study was not to evaluate all possible options from this table.This methods' overview could thus be further used to guide an evaluation of how predictive performance would change if one explored the various setup options. Further studies should be done to refine the FedAvg and ensemble models, focusing on the use of additional techniques to enhance privacy-preservation and hyperparameter tuning (22).The evaluation of model performance should also be considered for other outcomes in addition to the 30-day post-operative mortality, as well as for other FL models in addition to the two types considered here.Future research could also investigate further aspects of model predictive performance by incorporating additional metrics, such as model's sharpness, the area under the precision-recall curve (AUC-PR), and the F1 score.In addition, the questions of investigating model performance in terms of scalability and computing resource requirements are important and merit future research. The limitation of fitting a local model in centers with an insufficient number of case records emphasizes an issue that has not been extensively covered.This area represents a potential direction for future research to improve predictive modeling in such contexts. Our experiments focused on parametric models, or more precisely models that use logistic regression.It is unclear whether Conclusion Both the FedAvg and ensemble federated learning models had comparable AUC and calibration performance to the central risk prediction model of TAVI patients.This suggests the FedAvg and ensemble models are strong alternatives to the central model, emphasizing their potential effectiveness in the multicenter dataset. The heterogeneity in performance across different hospitals underscores the importance of local context and sample size.Future research should further explore and enhance these distributed learning methods, particularly focusing on the robustness of federated learning models across diverse clinical settings. FIGURE 1 FIGURE 1Graphical summary of the dataset used, prediction models considered, validation strategies employed, and main findings for the current study on 30day mortality risk prediction models for TAVI patients.Key questions are as follows: In the context of multicenter TAVI risk prediction models, what is the impact on model performance from adopting two federated learning strategies (FedAvg and ensemble) compared to central and local-only model strategies?Key findings are as follows: Both federated learning strategies (FedAvg and ensemble) had comparable performance, in terms of discrimination and calibration, to that central models and outperformed the local-only models.Take-home message is as follows: Use of federated learning techniques should be considered a viable option for TAVI patient clinical prediction model development. Table 1 Calibration: does the model fitting process also calibrate the model predictions?Recalibration: recalibration (of any kind) applied to the model after its fitting?Stacking predictions CV: during CV, how were the model predictions from the test folds stacked (combined) before computing performance metrics?Stacking predictions LCOA: during LCOA, how were the model predictions from the test centers stacked (combined) before computing performance metrics?Obtaining performance: when computing performance metrics for a model, what set of predictions were used? a In the case of local models, for each individual center, the model predictions from all of its test folds during CV were stacked together.Each set of these stacked predictions was then used to obtain the per-center local model performance measures.Pooled performance across all center local models was then calculated with a REMA pooling of the individual center performance results.Yordanov et al. 10.3389/fcvm.2024.1399138Frontiers in Cardiovascular Medicine 03 frontiersin.org Yordanov et al. 10.3389/fcvm.2024.1399138 Yordanov et al. 10.3389/fcvm.2024.1399138Frontiers in Cardiovascular Medicine 09 frontiersin.orgthe current findings would translate into federated learning for non-parametric or deep learning models.Performance variations observed across different models emphasize the importance of selecting the appropriate model development strategy for each individual setting.Finally, examining the potential benefits and limitations of federated learning in cardiology, in general, merits future research.
v3-fos-license
2021-04-24T04:37:23.741Z
2021-02-28T00:00:00.000
233378376
{ "extfieldsofstudy": [ "Chemistry" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.orientjchem.org/pdf/vol37no1/vol37no1/OJC_Vol37_No1_p_65-70.pdf", "pdf_hash": "43f1b20cd63cb751fb013c9fdf47d6fdb025a01c", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46771", "s2fieldsofstudy": [ "Chemistry" ], "sha1": "43f1b20cd63cb751fb013c9fdf47d6fdb025a01c", "year": 2021 }
pes2o/s2orc
Photocatalytic Degradation of Naphthalene by UV/ZnO: Kinetics, Influencing Factors and Mechanisms Conventional wastewater treatment is not able to effectively remove Aromatic hydrocarbons such as Naphthalene, so it is important to remove the remaining antibiotics from the environment. The aim of this study was to evaluate the efficiency of UV/ZnO photocatalytic process in removing naphthalene antibiotics from aqueous solutions. This was an experimental-applied study that was performed in a batch system on a laboratory scale. The variables studied in this study include the initial pH of the solution, the dose of ZnO, reaction time and initial concentration of Naphthalene were examined. The amount of naphthalene in the samples was measured using GC. The results showed that by decreasing the pH and decreasing the initial concentration of naphthalene and increasing the contact time, the efficiency of the process was developed. However, an increase in the dose of nanoparticles to 0.8 g/L had enhance the efficiency of the process was enhanced, while increasing its amount to values higher than 0.8 g/L has been associated with a decrease in removal efficiency. The results of this study showed that the use of UV/ZnO photocatalytic process can be addressed as a well-organized method to remove naphthalene from aqueous solutions. INTRODUCTION Polycyclic aromatic hydrocarbons (PAHs), are in the category of hazardous and toxic substances and have recently been identified as carcinogenic compounds 1 . PAHs introduced into the water to some degree undergo physical, chemical and biological changes leading to their gradual, though slow, degradation. They are sorbed by suspended matter, aquatic organisms and bottom sediments 2 . Naphthalene as albocarbon, camphor tar, white tar, or naphthene, is made of two fused benzene rings and is obtained from coal 2,3 . naphthalene is widely used for disinfection and insecticides 4 . has been reported in most industrialized countries and its permissible level in drinking water is 0.05 mg/L according to the standard of the world health organization, so its removal from industrial effluents is important 5,6 . Among the methods of removing organic pollutants from water and wastewater, biodegradation of wastewater by microorganisms has been considered, but due to benzene rings in aromatic hydrocarbons, the biological removal efficiency is severely reduced and microorganisms are not able to break these structures 7,8 . Therefore, in recent years, many researchers have focused on the use of modern advanced oxidation processes (AOPs) 9 . Among the various methods of this technique, the photocatalytic oxidation method using the nanophotocatalyst is a very effective method 10,11 . This method is done by irradiating ultraviolet radiation to the surface of a semiconductor such as ZnO and TiO 2 12,13 . These materials are excited by radiation and produce hole-electron pairs in the surface layers 14,15 . The resulting holes have strong oxidizing properties and electron is a good reducing agent; by producing hydroxide radicals, the degradation of pollutant organic molecules occurs 16,17 . Regarding the research on the use of nanophotocatalysts in the removal of naphthol, the following studies can be mentioned: Luo et al., studied and evaluated the catalytic degradation of β-Naphthol using Degussa P25; in their study, the effect of factors, e.g., pH, concentration of the reactant and catalyst dosage was assessed 18 . Lee et al., by hydrolysis of tetra isopropoxide (TTIP) at 100-600 o C, produced TiO 2 nanoparticle and investigated the catalytic activity of this catalyst with activated carbon for degradation of β-Naphthol; the removal efficiency in this pollutant has been reported to be more than 90% 19 . Although several experimental studies carried out in recent years regarding the Total pollutant removal by using nanoparticles from aquatic solution there is still a significant gap in the relevant literature with reference to the investigation adsorption capabilities of the UV/ZnO for the removal of naphthalene from aquatic solution. To the best of the authors' knowledge, there are almost no papers in the literature specifically devoted to a study of the application UV/ZnO reactor for the adsorption of naphthalene and the implementation of the design optimum process conditions in the removal naphthalene from aquatic solution. EXPERIMENTAL This research is an experimental study that was performed in a batch system and on a laboratory scale on synthetic solutions. Naphthalene was prepared from Merck & Co. Company. To do this, the stock solution of Naphthalene (1000 mg/L) was prepared weekly and stored in the dark at 4°C. The solutions with the desired concentrations were then prepared using the stock solution. Effect of parameters was studied in the pH values of 3, 5, 7, 9 and 11, naphthalene concentrations 5, 10, 25, 50 and 100 mg/L, the dose of zinc oxide nanoparticles in the range of 0.1, 0.2, 0.4, 0.6, 0.8 and 1 g/L, and ultraviolet lamp power (15 watts). In order to adjust the pH of the solution, 0.1 N sulfuric acid and sodium hydroxide were utilized. The experiments were performed in a 2-liter glass reactor with dimensions of 30 x 12 x 9 cm. The irradiation source was a 15 watts low-pressure UVc lamp. The lamp was inside a very transparent quartz coating with a diameter of two centimeters. The lamp was placed in the center of the container and the reactor was completely covered with aluminum foil so that the sample could be better irradiated and the radiated light would not be wasted. Inside the reactor, a magnetic stirrer was used to completely mix the sample for irradiation. To remove the nanoparticles, the samples were centrifuged at 3600 rpm for 10 minute. The remaining naphthalene in solution was measured using GC With Flame Ionization Detector (company: Agilent USA. Model: GC7890-MS5975). Method of analysis The concentration of naphthalene was determined by gas chromatography (GC). To inject the sample into the GC, it is necessary to transfer the sample from the aqueous phase to the organic phase. For this purpose, 10 mL of the sample was first mixed with 2 mL of dichloromethane and the mixture was put on a stirrer. After half an hour, dichloromethane, which was heavier than the aqueous sample, was separated and this process was repeated 2 more times and finally, 6 mL of naphthalene-containing dichloromethane was gently heated to bring the volume to 1 mL (it should be noted that due to the high solubility of naphthalene in dichloromethane, it remains in the organic phase and does not transfer into the gaseous phase). Finally, the temperature setting of the device was as follows: the initial temperature of the oven stays at 65 o C for one minute; the injector temperature in the splitless mode is set at 200 o C, and the detector temperature is set at 210 o C. Finally, after preparing the sample and adjusting the device, one microliter of the sample was injected into the device. RESULTS AND DISCUSSION Scanning electron microscopy (SEM) and transmitting electron microscopy (TEM) have been employed for studying the phase and crystalline structure and determining the size and shape of zinc oxide nanoparticles. Fig. 1a shows the SEM image and Fig. 1b shows the TEM image of the zinc oxide nanoparticles used in this study. The results showed that zinc oxide particles have a diameter of 50 nm and the particles used are nano in size and crystalline. The effect of contact time was shown in Fig. 3. As can be seen in this figure, with increasing the contact time, the removal percentage of naphthalene has enhanced, so that, for example, for the power of 15 watts, the removal efficiency was reached from 53% in 10 min to 96% in 60 minute. In this study, the removal efficiency was amplified with increasing contact time. However, the removal efficiency had the highest increase in the first 10 min of the process, and over time, the increasing trend of naphthalene removal efficiency was diminished. The rapid degradation of naphthalene contaminants in the first 10 min of the process can be attributed to free radicals generated by the electron excitation of zinc oxide nanoparticles 20,21 . Although the excitation process of zinc oxide nanoparticles and the production of hydroxyl free radicals did not decrease with increasing contact time, due to the formation of intermediate organic compounds caused by the degradation of naphthalene contaminants, some of the hydroxyl radicals produced were used to degrade these compounds 22,23 . As a result, the removal of naphthalene contaminants is reduced. These results are consistent with the research of other researchers 24 . Figure 4 shows the effect of different concentrations of ZnO catalysts on the removal efficiency of naphthalene at pH = 3 and a concentration of 5 mg/L of naphthalene at irradiation time of 60 min for 15 watts. As can be seen in this figure, with increasing the dose of zinc oxide nanoparticles up to 0.8 g/L, the removal efficiency of naphthalene was enhanced and at higher doses, the efficiency was diminished. Initially, increasing the nanoparticle dose is led to an increase in the naphthalene removal efficiency, but the removal efficiency then decreased with increasing this amount. Therefore, the optimal value of 0.8 g/L was selected. The causes of this phenomenon are attributed to the increase in solution turbidity, reduction in UV-penetration, increase in path traveled by optical photons and decrease in the total excitable surface due to the contamination of the contaminant on the catalyst surface 25,26 . Effect of naphthalene concentration As shown in Fig. 5, the removal efficiency drops with increasing naphthalene concentration. This can be explained by the fact that increasing the concentration of naphthalene reduces the number of active sites on the catalyst surface (due to their high concentration of contaminant molecules) and thus the rate of production of oxidants such as hydroxyl free radicals due to receiving UV-rays is reduced and therefore the reaction rate is also reduced 27,28 . As shown in the figure, at a concentration of 100 mg/L, the amount of naphthalene removal after 60 min is equal to 87%, while at a concentration of 25 mg/L, this pollutant is completely eliminated after almost 30 hours. research more than any other material due to its very good properties and is one of the most active catalysts. It can be said that only TiO 2 can compete with it in terms of activity 29 . Some of the important properties that have caused this are chemical resistance, non-toxicity, cheapness and stability. Moreover, the energy gap between its levels is 3 to 3.2 electron volts, which can be excited by light with a wavelength shorter than 385 nm. Unlike metals, which have a continuous space for moving because of free unbound electrons, semiconductors have an energy-free region, in which no energy levels are created to facilitate the recombination of the electron-hole pair created by light activation in semiconductor solids. This energy-free region, which must extend from the top of the valence band and the bottom of the conduction band (which contains the vacancies of the electrons), is called the band gap 30,31 . If the energy of an optical photon is equal to or greater than the band gap of a semiconductor, the absorption of this photon by the semiconductor solids will excite an electron (e -) from its valence band and transfer it to the conduction band. At this time, an electron vacancy or positive charge called a hole (h + ) also occurs at the valence band. The pair of electrons and holes formed either recombine to produce heat energy or participate in redox reactions with compounds adsorbed on semiconductor surfaces. Due to the need for UV-light to perform, this catalytic role refers to that photocatalyst 32 . The wavelength of 380 nm causes the electron to be excited, and the oxygen in the solution accepts the excited electron. A positively charged holes remain and produce an OH 0 radical by adsorbing OHions in the environment. The following reactions describe the production of hydroxyl radicals in titanium dioxide 33 : Reaction kinetics Several factors may affect the quality of the reactions. Depending on the concentration of the Mechanisms of photocatalytic reactions ZnO has been used for photocatalytic CONCLUSION To remove non-biodegradable contaminants, instead of employing conventional biological processes, high efficient and faster photocatalytic process can be used. The percentage of naphthalene removal with a concentration of 100 ppm at pH 0.1, 0.2, 0.4, 0.6 and 0.8 after 60 min was obtained to be 76.1, 83.9, 89.2, 92.1% and 93.8 respectively. By reducing the concentration of contaminants, the removal efficiency increases so that for a concentration of 25 ppm naphthalene, the complete removal of naphthalene was acquired after 60 minute. As the concentration of ZnO nanoparticles (established on it) increases, the percentage of contaminant removal increases. The rate constants at concentrations of 25, 50, 100 ppm were equal to 0.0289, 0.0257, 0.0174; it is observed that with decreasing concentration, the reaction rate also increases. ACKNOwLEDgMENT The authors are grateful to the Zahedan University of Medical Sciences for the financial support of this study (Code: 9097). Conflicting interest There is no conflicting interest in this study. contaminant on the reaction rate, a reaction can be zero, first and second order. Based on the kinetic models of photocatalytic processes, it is determined that this process is the first order, and the Langmuir-Hinshelwood model is a kinetic model that is used to describe first order reactions at the interface between solid-liquid phases and its equation is as follows 31 : R= d c /d t = Kr K a C/(1+ K a C) In this regard, C, Ka, and Kr represents the initial concentration, adsorption constant and rate constant, respectively 28 . Kr in all concentrations examined is between zero and one. In concentrations of 25, 50 and 100 mg/L is equal to 0.0272, 0.0214 and 0.0157 1/minute. If the graph of (Ln (C 0 /C e )) against time and changes of these parameters is displayed as a straight line, the slope of the line is equivalent to a constant rate of the first-order reaction. Fig.7 shows the values of K and R 2 for the photocatalytic degradation reaction of naphthalene under optimal conditions. As can be seen, the trend of concentration changes with time follows a first-order kinetic model and the effect of the concentration on the rate constant of reaction can be seen (increasing the initial concentration decreases the rate constant values of reaction).
v3-fos-license
2017-04-13T18:19:25.728Z
2015-06-08T00:00:00.000
2966381
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0128934&type=printable", "pdf_hash": "e9674c901ad741b2fbb3a73d24fefa73ab79ddea", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46772", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "e9674c901ad741b2fbb3a73d24fefa73ab79ddea", "year": 2015 }
pes2o/s2orc
Changes in Macrophage Gene Expression Associated with Leishmania (Viannia) braziliensis Infection Different Leishmania species cause distinct clinical manifestations of the infectious disease leishmaniasis. It is fundamentally important to understand the mechanisms governing the interaction between Leishmania and its host cell. Little is known about this interaction between Leishmania (Viannia) braziliensis and human macrophages. In this study, we aimed to identify differential gene expression between non-infected and L. (V) braziliensis-infected U937-derived macrophages. We deployed a whole human transcriptome microarray analysis using 72 hours post-infection samples and compared those samples with their non-infected counterparts. We found that 218 genes were differentially expressed between infected and non-infected macrophages. A total of 71.6% of these genes were down-regulated in the infected macrophages. Functional enrichment analyses identified the steroid and sterol/cholesterol biosynthetic processes between regulatory networks down-regulated in infected macrophages. RT-qPCR further confirmed this down-regulation in genes belonging to these pathways. These findings contrast with those from studies involving other Leishmania species at earlier infection stages, where gene up-regulation for this metabolic pathway has been reported. Sterol biosynthesis could be an important biological process associated with the expression profile of macrophages infected by L. (V.) braziliensis. Differential transcriptional results suggest a negative regulation of the genetic regulatory network involved in cholesterol biosynthesis. Introduction American tegumentary leishmaniasis is a public health problem in Central and South America, affecting 18 countries with approximately 1.5 million new cases each year. Colombia, Brazil and Peru present 75% of the cases of cutaneous leishmaniasis in Latin America [1]. Species within the Viannia subgenus are prevalent in America and are linked to cutaneous and mucocutaneous leishmaniasis, with L. (V.) braziliensis being the major etiological agent of the latter. The development of this group of diseases is multifactorial, is determined by the infecting species and the host immune system, and depends on the survival and replication of the parasite inside macrophages [2]. Parasite survival depends in part on the capacity of the parasite to counter the leishmanicidal mechanisms of the macrophage and to modulate the host immune response [3]. The parasite has developed different strategies to achieve replication, such as delaying phagosome maturation, inactivating acid hydrolases of the phagolysosome [4,5], disrupting antigen presentation [6,7], suppressing the Th1 cell response [8], and altering metabolic pathways in the macrophage [9]. Although many studies have examined macrophage gene expression and the factors that influence leishmaniasis pathogenesis, these studies vary according to the infecting species and the stage of infection. Peacock The genes differentially distributed between species encode proteins involved in parasite-host interactions and parasite survival in the macrophage [10]. These findings highlight the importance of studying the interaction between macrophages and each individual Leishmania species. Many of the works examining the effect of Leishmania infection on macrophage gene expression have been conducted using species of the Leishmania subgenus [11][12][13][14], in contrast with the small number of studies conducted on species of the Viannia subgenus. No studies reporting the global gene expression of macrophages challenged with L. (V.) braziliensis were found. Meanwhile, most studies that have described the gene expression profile of macrophages have been performed during the initial stages of infection, between 0 and 24 hours [12,13,[15][16][17][18][19][20]. Few works exist on more advanced stages of infection [8,11]. The expression profiles of macrophages challenged with different Leishmania species mostly exhibit generalized down-regulation [14,21]. However, gene expression was up-regulated in macrophages infected with L. (V.) panamensis for 24 hours [22]. In contrast, macrophages infected with L. (L.) chagasi for 24 hours did not show differences between genes with positive regulation and genes with negative regulation [16]. This finding indicates that there are differences in the expression profiles of macrophages depending on both the infecting species and the stage at which the parasite-host interaction is studied, therefore, it could be incorrect to extrapolate the results from studies performed with other species. The objective of this work was to determine the global gene expression profile of macrophages derived from the U937 cell line associated with L. (V.) braziliensis infection and to identify, based on functional enrichment analysis, the biological processes linked to the genes differentially expressed between non-infected macrophages and those infected for 72 hours. Two hundred and eighteen differentially expressed genes were identified in the study, out of which 71.6% exhibited down-regulation. Functional enrichment analyses revealed that sterol biosynthesis, including that of cholesterol, is the most relevant process linked to the expression profile of infected macrophages. Macrophage differentiation and in vitro infection The U937 cell line (American Type Culture Collection-CRL-1593.2, United States) was cultured at an initial concentration of 1X10 5 cells/mL in a final volume of 5 mL of RPMI-1640 medium (Sigma-Aldrich R410, United States, MO) supplemented with 10% fetal bovine serum (FBS). Cells were incubated at 37°C and 5% CO 2 . The reference strain of L. (V.) braziliensis (MHOM/BR/00/M2903, Centre National De Reference Des Leishmania, France, Montpellier) was cultured at an initial concentration of 1X10 6 parasites/mL for six days in Schneider medium (Sigma-Aldrich S9895, United States, MO) supplemented with 10% FBS at 26°C, until the stationary phase was achieved. The differentiation of 6.75X10 6 U937 cells into macrophages was induced by treating with 100 ng/mL phorbol-12-myristate-13 acetate (PMA) for 120 hours at 37°C and 5% CO 2 on a glass substrate [23]. The macrophages were infected with L. (V.) braziliensis promastigotes opsonized with inactivated AB + human serum. A ratio of 15 parasites per macrophage was used. Infected macrophages were incubated for two hours at 34°C and then washed three times with PBS (pH 7.4) to remove non-bound parasites. Finally, 10% FBS-supplemented RPMI medium was added, and the samples were incubated at 34°C and 5% CO 2 for 72 hours. Infection was assessed microscopically by determining the percentage of macrophages that were infected. RNA extraction was performed on experimental units where infection was above 60% [21]. RNA extraction, cDNA preparation, and microarray hybridization Two experimental groups were considered for the assay: macrophages infected with L. (V.) braziliensis and non-infected control macrophages. Both groups were processed in the same way, except that RPMI was added to control macrophages instead of parasites. Every infection or control experiment was performed in triplicate. Total RNA was extracted 72 hours after infection, a stage in which amastigotes were replicating (S1 Fig and S1 Table). Infected and non-infected macrophages were washed three times with sterile PBS (pH 7.4) and treated with 500 μL of TRIzol Reagent (Invitrogen 15596-018, United States, CA) following manufacturer protocols. An equivalent volume of 70% ethanol was added to the aqueous phase. This solution was transferred to an RNeasy Mini Kit spin column (QIAGEN 74104, Netherlands) to perform RNA purification and precipitation according to the manufacturer's protocols. RNA quality and quantity were determined using a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific Inc, United States, MA). The integrity of extracted RNA was evaluated using an Agilent 2100 Bioanalyzer system with an RNA 6000 Nano LabChip kit (Agilent Technologies Ltd, United States, CA). All extracts had a 260/280 ratio above 1.9 and an RNA integrity number (RIN) greater than 7.8. The extracts were kept at -70°C until they were used. cDNA preparation and hybridization were performed by ALMAC Diagnostics (Great Britain, Durham) Ribosomal RNA was reduced with a RiboMinus Human Transcriptome Isolation Kit (Life Technologies K155001, United States, CA). Single-strand cDNA was synthesized using a NuGEN Ovation RNA amplification system (NuGEN Technologies Inc. 3100-12, United States). cDNA was fragmented and labeled with biotin using a Gene Chip WT Terminal Labeling Kit (Affymetrix Inc. 900670, United States, CA). Each labeled fragment was hybridized on the Affymetrix GeneChip Human Exon 1.0 ST Array (Affymetrix Inc. 900649, United States, CA) for 16 hours at 45°C. Finally, the hybridized arrays were washed, stained and scanned using the GeneChip Fluidics Station 450 and the GeneChip Scanner 3000 (Affymetrix Inc, United States, CA) to obtain raw data of the expression levels. Microarray data analysis The raw data generated by the Affymetrix Expression Console (EC) software version 1.1 were pre-processed using the Robust Multichip Average (RMA) algorithm based on the standard RMA background correction, quantile normalization, and median polish summarization. Data quality control employed hierarchical clustering analysis and principal component analysis (PCA). Exon QC Summary version 1 was used to evaluate the integrity of the expression data based on ALMAC Diagnostics Gene Chip Quality Control standards. Identification of differentially expressed genes Pre-processed data were filtered to eliminate non-informative transcripts based on the intensity of the probe sets versus the background noise (p = 0.01) and the variance (α = 0.8). Analysis was performed with the Feature Selection Workflow tool (version102) developed by ALMAC Diagnostics. Differentially expressed transcripts within the experimental groups (i.e., infected and noninfected macrophages) were identified using analysis of variance (ANOVA) statistical tests and post-hoc comparisons. The effect of inflation due to the use of multiple tests was reduced using the false discovery rate (FDR) for the ANOVA p values [24]. Differentially expressed transcripts were selected based on a minimum 1.5 log 2 fold expression change between groups and p 0.05. The expression values of the differentially expressed genes were normalized by the probe set median and were clustered hierarchically using Euclidean distance and Ward's linkage algorithms. Raw and processed data were deposited into the Gene Expression Omnibus (GEO) database with access number GSE61211 based on MIAME (Minimum Information About a Microarray Experiment) guidelines [25]. Functional enrichment analysis Two tools were used for this analysis: the Functional Enrichment Tool (FET), developed by ALMAC Diagnostics based on Gene Ontology (GO) annotations [26], and the commercial software Ingenuity Pathways Analysis (IPA, Ingenuity Systems, www.ingenuity.com). The analyses enabled the identification and organization of biological entities associated with the lists of differentially expressed genes [27]. Each entity was organized according to the statistical value obtained in the enrichment analysis [28], which was adjusted for "multiple testing" [24]. This analysis was used to evaluate the probability that the association between a specific gene and a biological entity was random. Validation of differential gene expression by RT-qPCR The RNA samples used for microarray analysis were treated with DNAse I (Invitrogen 18068-015, United States, CA) following manufacturer protocols. cDNA used as a template in RT-qPCR was obtained using a High Capacity RNA-to-cDNA reverse transcription kit (Life Technologies 4387406, United States, CA). RNA in the absence of reverse transcriptase enzyme was used as a negative control in each reverse transcription experiment. RT-qPCR was used to validate the results obtained with microarrays. Fourteen genes were selected: 10 were randomly chosen from the group of 218 genes with differential expression, and four were part of the cholesterol biosynthesis pathway. β2M and GNB2L1 were used as housekeeping genes because their expression levels did not vary in infected and non-infected macrophages in microarray assays. The sequences of the primers used to amplify the housekeeping and selected genes are presented in S2 Table. qPCR assays were performed using a SsoFast EvaGreen Supermix kit (Bio-Rad 172-5201, United States, CA) following the manufacturer's protocols. Assays were run in a CFX96 thermocycler (Bio-Rad, United States, CA). Expression values were determined through the ΔΔCq method [29] using the Gene Expression module of the software CFX manager TM version 3.0 (Bio-Rad, United States, CA). Noninfected macrophages were used as the control condition to compare the gene expression levels in non-infected and infected macrophages. These assays were repeated three times, with three biological replicates for each of the two experimental conditions. Identification of genes differentially expressed by infected versus noninfected macrophages Leishmania is capable of modulating the immune response and the signaling pathways of macrophages to promote its survival in the host cell. To determine the effect of L. (V.) braziliensis infection at 72 hours on the gene expression of macrophages derived from the U937 cell line, the expression profiles of non-infected and infected macrophages were compared using microarray analysis. Microarray analyses identified 12,629 genes, out of which 218 were differentially expressed between non-infected macrophages and those infected for 72 hours with L. (V.) braziliensis. The 218 genes were selected based on log 2 fold expression change greater than 1.5 or less than -1.5, with p < 0.05. From the differentially expressed genes, 71.6% were downregulated, with log 2 fold expression change s between -1.5 and -2.63. Over-expressed genes showed log 2 fold expression change values between 1.5 and 3.01 (Fig 1a and 1b, S3 Table). Within the group of over-expressed genes, some linked to the macrophage cytoskeleton were detected, such as TUBA3, TUBB2B AND SEPT11. Genes related to the immune response, such as the chemokine CXCL2 and the gene WNT7B, which belongs to the WNT family, were also detected. Other genes identified were linked to heavy metal regulation, such as metallothioneins MT1G and MT3. Within the group of negatively regulated genes, HMGCS1, STARD5, MSMO1 (SC4MOL) and STARD4, HMGCR, DHCR7, SC5DL, FADS1, FADS2, APOB48R and APOL6, which are related to lipid metabolism and transport, stood out. Few of the down-regulated genes were linked to the immune response. These genes included one that encodes interleukin 18 (IL-18) and one that encodes the transcription factor signal transducer and activator of transcription 2 (STAT2). Fourteen genes were validated through RT-qPCR, and results agreed with those obtained through microarray analysis for 12 genes. However, the genes potassium channel subfamily K member 3 (KCNK3) and sphingosine-1-phosphate receptor 2 (S1PR2) did not exhibit changes in expression through RT-qPCR between infected and non-infected macrophages, while they were identified as over-expressed via microarray analysis (Fig 2). Identification of biological entities through functional enrichment analyses Functional enrichment analysis was performed using the FET tool and the IPA software to identify biological entities linked to the differentially expressed genes. IPA analysis identified significant differences in mRNA levels of genes encoding proteins involved in steroid biosynthesis, associated with the genes SC5DL, HMGCR and DHCR7, and the degradation of ketone bodies, associated with the genes HMGCS1 and suggesting this may be an important pathway involved during infection. Analysis using the FET tool identified sterol and cholesterol biosynthesis as the relevant biological processes in the interaction between macrophages and L. (V.) braziliensis, associated with the genes SC5DL, SC4MOL, HMGCR, C14ORF1, HMGCS1 and DHCR7. The results from functional enrichment analysis, along with RT-qPCR validation of expression levels of some of the genes linked to cholesterol biosynthesis, suggest that this biological process could be negatively regulated in macrophages infected with L. (V.) braziliensis for 72 hours (Fig 3). Discussion This study showed that most of the genes differentially expressed between non-infected macrophages and those infected with L. (V.) braziliensis for 72 hours are down-regulated. According to functional enrichment analyses, the biosynthesis of steroids, sterol and cholesterol were among the canonical pathways identified by IPA or FET tools. This study is the first to report the global gene expression of macrophages infected with L. (V.) braziliensis and the associated metabolic pathways. In the host cell, the macrophage-Leishmania interaction triggers a series of mechanisms aimed at destroying the parasite. However, Leishmania has the capacity to alter the signaling pathways of the macrophage to establish itself in the phagosome for replication [4,30,31]. The parasite is capable of inducing alterations in signaling molecules and changes at the transcriptional level through positive or negative macrophage gene regulation. Furthermore, the metabolic processes of the macrophage can change according to the type of parasite [9]. Our results show that L. (V.) braziliensis infection can have a primarily suppressive effect on host gene transcription, as has been previously reported for other Leishmania species [8,13,32]. Zhang and peers undertook a meta-analysis in which they included 5 studies on gene expression in macrophages infected by different Leishmania species belonging to the Leishmania subgenus, performed at initial infection time points. The authors found that most genes were down-regulated in all of the studies [18]. The general gene repression of macrophages infected by Leishmania, found in these studies, may be related with the capacity of the parasite to alter the expression of genes involved in several pathways causing a cumulative effect of many different down-regulated pathways [11,33,34]. These results and those reported in our study differ from those obtained by Ramírez and peers, who reported the over-expression of most of the genes in macrophages infected for 24 hours with L. (V.) panamensis [22]. This variability in results suggests the possibility that macrophage gene expression may be linked to the infecting species and the stage of infection in which expression is evaluated. The expression pattern of macrophage genes upon invasion by a microorganism is dynamic and dependent on the stage of infection. At the initial stages, a pattern of expression of genes linked to the defense mechanisms of the macrophage (the innate and adaptive immune responses) is expected. This phenomenon has been demonstrated in different studies where macrophages have been challenged with Leishmania spp.; in the first 24 hours, most of the regulated genes are those related to the immune response [8,13,22]. These results contrast with those obtained in this work, where a limited number of genes related to the immune response was identified. Among these, the genes that encode transcription factor STAT2 and IL-18 were identified. These two molecules are important for an effective inflammatory response RT-qPCR validation of the expression levels obtained from microarray assays. Out of the 218 genes with differential expression between infected and non-infected macrophages, as identified through microarray assays, 14 genes were evaluated through RT-qPCR to validate their expression levels. Of these genes, 85.7% (12/14) of their RT-qPCR expression data were consistent with the results obtained by microarray analysis. [8,11,15,18,35]. It is possible that the low representation of genes linked to the immune response in this work was due to the stage of macrophage gene expression evaluation, 72 hours post-infection, when the infection has already been established and the macrophage may be adapting its metabolism to confront the infection [9]. To identify the biological processes to which the differentially expressed genes were linked, the microarray results were analyzed using the IPA and FET tools. The results revealed that the pathways with statistical significance were the biosynthesis of steroids, sterol and cholesterol probably related to the process of infection. Although these results were based on the changes registered for few genes important in these pathways, the others genes identified were associated with another biological processes but with low statistical significances. The findings related to the main canonical pathways suggest at least two scenarios regarding the negative transcriptional regulation of the genes involved in the synthesis of cholesterol in the macrophage. In the first scenario, signaling is normal. That is, the macrophage has high levels of cholesterol and is responding to these levels. In the second scenario, signaling is abnormal. That is, the macrophage has normal (or even below normal) levels of cholesterol, but the signaling pathway is being incorrectly activated, possibly through factors related to the infection. During the initial interaction between macrophages and promastigotes, cholesterol plays an important role because it is one of the most abundant molecules in lipid rafts, cell membrane micro-domains that serve as platforms for the interaction between different proteins involved in a series of processes including cell signaling, adhesion, invasion, and secretion [36][37][38]. Previous studies have shown that during the first 24 hours of L. (L.) major infection, there is upregulation of HMGCR at transcriptional level and there is an increase in cytoplasmic lipid droplets, from which cholesterol is an important component. [39]. However, it is important to consider that the regulation of cholesterol biosynthesis exhibits end-product inhibition and that once the cell detects high levels of cholesterol, the molecule itself triggers the negative regulation of its own synthesis. It is possible that this phenomenon is occurring at 72 hours of infection, assuming that cholesterol synthesis increased in the initial moments of infection. A second scenario that could explain the improper negative regulation of cholesterol can be due to factors related to the parasite. Recently, a mechanism of lipid metabolism regulation related to the protein mammalian/mechanistic target of rapamycin (mTOR) has been described. In 2011, Peterson and peers proposed that mTORC1 could regulate the transcription of genes linked to the biosynthesis of cholesterol by modulating the transcription factor sterol regulatory element-binding proteins (SREBPs). They propose that mTORC1 regulates the subcellular localization of Lipin1, a transcriptional co-activator and phosphatidic acid phosphohydrolase enzyme, which when dephosphorylated is translocated to the nucleus. The researchers report for NIH3T3 cells, that after pharmacological inhibition with Torin1, Lipin1 resides in the nucleus, SREBP-dependent gene transcription is repressed, and SREBP 1 and 2 nuclear levels are reduced [40]. During early infection by L. (L.) major, Jaramillo and peers reported that the glycoprotein gp63 induces mTOR cleavage and inhibits the formation of the mTORC1 complex [41]. Assuming that this mechanism is shared by L. (V.) braziliensis and that its effects in mTOR signaling prevail at later infection times, it is possible that in the context of our work, gp63 down-regulates mTOR reducing Lipin1 phosphorylation. This in turn, could facilitate the movement of non-phosphorylated Lipin1 into the nucleus leading to decreased nuclear levels of SREBP-dependent gene transcription. We did not find any studies in the reviewed literature evaluating the global gene expression of macrophages infected by L. (V.) braziliensis associated to negative regulation of genes related with cholesterol biosynthesis. However, two studies performed on macrophages infected with L. (L.) major reported decreased cholesterol levels 72 hours after infection through the downregulation of the gene that encodes the enzyme HMGCR [42,43]. In contrast, in two studies performed using macrophages infected with L. (L.) amazonensis and L. (L.) major, the authors observed increases in cholesterol biosynthesis during the initial stages of infection [35,39]. According to these reports, it can be suggested that cholesterol biosynthesis can be positively or negatively regulated in macrophages infected by Leishmania spp. and that this regulation likely depends on the stage of the infection and the infecting species. To the extent of our knowledge no other study has explored the gene expression pattern of infected macrophages by the parasite L. (V.) braziliensis. However, Novais and peers reported the changes that occur in human skin after infection with L. (V.) braziliensis when compared with normal skin using a genome-wide transcriptional analysis [44]. Those authors reported repression of both cholesterol and free fatty acid biosynthesis in the infected skin samples, findings which are in agreement with our in vitro observations in infected U937 derived macrophages with L. (V.) braziliensis. Our Table. Conditions for amplification of the genes selected to perform RT-qPCR validation of microarray results. Primer sequences and annealing temperatures are reported for each gene. (DOCX) S3 Table. Differentially expressed genes identified by microarray assays. The 218 genes with differential expression between non-infected macrophages and those infected with Leishmania braziliensis are shown. They present log 2 fold expression greater than 1.5 or less than -1.5, with p < 0.05. (DOCX)
v3-fos-license
2021-09-28T01:08:51.183Z
2021-07-15T00:00:00.000
237751982
{ "extfieldsofstudy": [ "Environmental Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41598-022-06763-x.pdf", "pdf_hash": "7b7b32d63b327ba69a5800f2e11b53c90f61f32d", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46773", "s2fieldsofstudy": [ "Environmental Science", "Agricultural and Food Sciences", "Engineering" ], "sha1": "0890181f03bd69125829877a9cb84006f515775e", "year": 2022 }
pes2o/s2orc
Development and validation of a model for soil wetting geometry under Moistube Irrigation We developed an empirical soil wetting geometry model for silty clay loam and coarse sand soils under a semi-permeable porous wall line source Moistube Irrigation (MTI) lateral irrigation. The model was developed to simulate vertical and lateral soil water movement using the Buckingham pi (π) theorem. This study was premised on a hypothesis that soil hydraulic properties influence soil water movement under MTI. Two independent, but similar experiments, were conducted to calibrate and validate the model using MTI lateral placed at a depth of 0.2 m below the soil surface in a soil bin with a continuous water supply (150 kPa). Soil water content was measured every 5 min for 100 h using MPS-2 sensors. Model calibration showed that soil texture influenced water movement (\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$p$$\end{document}p < 0.05) and showed a good fit for wetted widths and depths for both soils (\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$nRMSE$$\end{document}nRMSE = 0.5–10%; \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$NSE \ge$$\end{document}NSE≥ 0.50; and d-index \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\ge$$\end{document}≥ 0.50. The percentage bias \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\left( {PBIAS} \right)$$\end{document}PBIAS statistic revealed that the models’ under-estimated wetted depth after 24 h by 21.9% and 3.9% for silty clay loam and sandy soil, respectively. Sensitivity analysis revealed agreeable models’ performance values. This implies the model's applicability for estimating wetted distances for an MTI lateral placed at 0.2 m and MTI operating pressure of 150 kPa. We concluded that the models are prescriptive and should be used to estimate wetting geometries for conditions under which they were developed. Further experimentation under varying scenarios for which MTI would be used, including field conditions, is needed to further validate the model and establish robustness. MTI wetting geometry informs placement depth for optimal irrigation water usage. Agriculture is the largest consumer of blue water 1 at 70% of all global freshwater resources 2,3 . Novel irrigation technologies such as sub-surface irrigation and porous pipes promote water conservation 2 . Moistube Irrigation (MTI) is a semi-permeable porous pipe that has reported improved field water use efficiency (fWUE). MTI is a sub-surface irrigation technology whose discharge is facilitated by an applied pressure, or at zero pressure, it utilizes soil water matric potential ( ψ ) that causes a pull effect, thus facilitating discharge. MTI is a semi-permeable porous wall tubing; thus, its wetting geometry is classified as a line source emitter 4,5 . Soil water movement under various irrigation technologies has informed irrigators on the effective placement depth and lateral spacing that promote crop water use efficiency (cWUE) and fWUE. There is limited empirical knowledge on models that facilitate the estimation of wetted perimeters under porous wall emitters. Knowledge of soil wetting geometry is critical in optimizing MTI irrigation system design (lateral placement depth and spacing) and operation (discharge rates, irrigation set times and satisfying irrigation water requirements). To maximize the advantages offered by sub-surface irrigation, knowledge of soil wetting geometries aids in irrigation network design, i.e., emitter spacing and placement depths, which subsequently improve irrigation schedules 6 , minimize run-off losses, promotes higher irrigation uniformity 7,8 , increases water productivity (WP) and fWUE 4,9 . Soil wetting geometries can be determined either experimentally or using modelling tools. The former is expensive and time-consuming. Modelling is a time-saving exercise, and numerical models have gained wide applicability over their counterparts (analytical and empirical models) because of their robustness and use of Using t Buckingham π theorem 16,19 , four πs were derived as presented in Eqs. (3)- (7). The four πs were derived because the Buckingham π theorem states that if there is a physically meaningful equation involving a certain (1) (W, Z) = f (V , q, k, D) (2) f (V , q, k, W, Z, D) www.nature.com/scientificreports/ number, n, of physical variables in a problem and these variables contain 'm' primary dimensions, the equation relating all variables will have (n-m) dimensionless groups. There were six variables with two primary variables, namely W and Z , which resulted in 4 dimensionless groups. where Multiplying the π 3 and π 4 yielded the dimensionless soil water content per unit length of MTI (V * ) as presented by Eq. (8). By taking the square root of the product of π 4 and (π 2 ) 2 yielded the dimensionless wetted width (W * ) as presented by Eq. (9). The square root of the product of π 4 and (π 1 ) 2 yielded the dimensionless wetted depth (Z * ) as presented by Eq. (10). Schwartzman and Zur 15 and Singh, Rajput 19 postulated that there exists a relationship amongst dimensionless parameters. For this research, the relationships are as presented in Eqs. (11) and (12); where A 1 , A 2 , b 1 , and b 2 are constants for a 2-dimensional flow model. The constants A 1 and b 1 , were determined from the graphical plot of V * and W * whereas the constants A 2 and b 2 were determined from the graphical plot of V * and Z * . Combining Eqs. (8) and (9) and Eqs. (8) and (11) yielded the wetted width ( W ) and wetted depth ( Z ) functions presented in Eqs. (13) and (14), respectively. Experimental design and data collection. Soil hydraulic parameters and textural characteristics. The silty clay loam (34% clay, 58% silt, 8% sand) was obtained from the University of KwaZulu-Natal's Ukulinga Research Farm in Pietermaritzburg, KwaZulu-Natal, South Africa (29° 39′ 44.8ʺ S 30° 24′ 18.2ʺ E, altitude: 636 m). The coarse sand soil (98% sand and 2% gravel) was obtained from Genie sand in Pinetown, KwaZulu-Natal, South Africa (29° 48′ 08.7ʺ S 31° 00′ 37.8ʺ E). Soil samples were subjected to soil textural analyses using the hydrometer method. The experiment sampled five depths for textural analysis, and the resultant textural data was fed into the SPAW model (Saxton and Willey, 2005) to determine saturated hydraulic conductivity ( k s ) was derived. Other soil hydraulic parameters total porosity ( θ r ), residual soil water content ( θ s ), and shape fitting parameters ( n , m , and α) ( Table 1) were laboratory determined using the soil-water retention pressure method 4,29,30 . The 50 cm depth soil sample for the silty clay loam was used to fit the van Genuchten parameters (3) f (π 1 , π 2 , π 3 , π 4 ) = 0 www.nature.com/scientificreports/ because the 50 cm plot provided a smooth curvilinear shape and the resultant parameters closely aligned with Rawls, Brakensiek 31 . The sandy soil was commercially acquired hence the absence of varied sampling depths. The methods were selected based on the reliability of results and equipment availability. Measurement of soil wetted front. The soil was air-dried, crushed and sieved through a 2 mm sieve. Thereafter, the soil was loaded into a soil bin measuring 1 m (H) × 1 m (W) × 0.5 m (B). The soil bin had transparent Plexiglass walls, and soil loading was done gently to avoid compaction and possible crushing of the MTI tubing. To prevent MTI collapse under the soil surcharge, MTI was supplied with water before loading the soil till it was turgid. To prevent MTI smearing and potential nano-pore blocking, the water was supplied upon reaching the MTI burying level. The MTI lateral was placed at a depth of 0.2 m below the soil surface, and upon soil loading, MPS-2 sensors were simultaneously installed at prescribed depths ( Table 2). The initial soil-water content for the silty clay loam was 1.02 × 10 -6 m 3 and 6.64 × 10 -7 m 3 for the sandy soil. Both soils were packed at a bulk density of 1.4 g cm −3 . The MPS-2 sensors measured water potential (− 10 to − 500 kPa) and temperature, and they were calibrated by soaking them in de-ionised water for a period of 72 h before installation. The de-ionised water was used to substitute for the conventional mercury porosimeter experiment for calibration. The calibration curve was obtained from the MPS-2 and MPS-6 operators manual (Decagon Devices Inc, 2017). Water to the MTI lateral was supplied at a pressure head of 150 kPa, which discharged 2.39 l h −1 m −1 ( Table 3). The experiment was carried out in two phases. The first dataset of measured variables was used for model calibration, whilst the measured variables from the second phase dataset were used for model validation. The measured variables are summarised in Table 4. Both the first and second phases were carried under identical conditions. Soil water-retention curves derived from the soil-water retention experiment were used to determine the volumetric soil water content. The experimental equipment set-up is shown in Fig. 1. Table 2. MPS-2 sensors placement depths and lateral spacing for the respective soils. process was done to determine the models' constants A 1 , A 2 , n 1 and n 2 . To improve the models' precision and accuracy, iterations were carried out on constants A 1 and A 2 . Models' validation. The parameter variability-sensitivity analysis (PV-SA) method was employed to validate the models (Sargent, 2013). A sensitivity analysis was carried out to assess the effects of k , q , and D on both W and Z . The sensitivity analysis was done by holding all the other variables constant and assessing the functional relationship a particular "active variable" had with W and Z . Table 4 summarise the two independent experimental phases and their respective purposes. Models' evaluation. The study applied the following criteria; normalised root mean square error ( where O i and P i = observed and predicted value(s), respectively, O i = mean observed data, and x = number of observations. nRMSE defined the developed model's accuracy whilst PBIAS defined the bias provided by the developed model. The error index nRMSE showed the model's performance but did not indicate the degree of over or under-estimation hence the use of the NSE and PBIAS statistical tools in the analysis. The NSE statistic measured the residual variance vs the measured data variance, ranging from −∞ to 1. NSE values between 0.0 and 1.0 are considered acceptable. PBIAS measured the tendency of the simulated data to either under-or over-estimate the observed values. Low magnitudes presented optimal model simulation whilst positive values represented model under-estimation, and negative values represented model over-estimation 33 . The study further employed the prediction efficiency ( P e ). The P e was determined by the R 2 values obtained by regressing the observed and simulated values. A summarised performance rating for the recommended statistics is shown in Table 5. Scenarios. The developed models were used for situational analysis. The MTI placement depth was maintained at 0.2 m. The operational discharge was varied between 0.27 and 3.19 l h −1 m −1 , which translated to an operating pressure range of 20-160 kPa. The selected discharges used for scenario analysis were selected on the Results and discussion Model calibration. The calibration steps for the four wetting geometry models' are outlined below. Silty clay loam soil. The study followed the outlined steps below to determine the values for the constants A 1 , A 2 , b 1 , and b 2 19 . Recorded volumetric soil water content and observed wetted distance values were used to calibrate the developed soil wetting geometry models. The wetted W and Z were physically measured using grids demarcated onto the transparent plexiglass whilst the volumetric soil water content was measured using the MPS-2 sensors. Step 1: The dimensionless variables V * , W * , and Z * were estimated using Eqs. (8)- (10) utilising observed values of the requisite variables from the soil bin experiments. In order to improve the models' accuracy and precision, the calibration step performed iterations on the constants A 1 and A 2 and the resultant equations are shown in Eqs. (22) and (23). This signified a high-water content in the lateral direction as compared to the vertical direction. This is typical in fine-textured soils (Bouma, 1984). Conversely, the calibration results yielded a scenario where A 1 < A 2 , which subsequently resulted in a W < Z , an observation atypical of finetextured soils. This was because the application times during the experiment promoted the border effect within the confined soil bin. Sandy soil. The simulation steps for the sandy soil followed similar steps as those described for the silty clay loam soil. The relationships between the dimensionless volumetric soil water content per unit length of MTI ( V * ) and the dimensionless wetted width ( W * ) and wetted depth ( Z * ) are depicted in Fig. 3. The resultant wetted width ( W ) and wetted depth ( Z ) for the soil are shown in Eqs. (21) and (22). W * was plotted against V * , similarly Z * was plotted against V * and the resulting power relationships yielded Eqs. (24) and (25) with R 2 = 0.72 and R 2 = 0.84, respectively (see Fig. 3). In order to improve the models' accuracy and precision, the calibration step performed iterations on the constants A 1 and A 2 and the resultant equations are shown in Eqs. (28) and (29). The power indices for the sandy soil, b 1 and b 2 were approximately equal. However, the constant A 1 < A 2 , which resulted in a Z > W , a phenomenon attributed to soil hydraulic characteristics. Gravity forces dominated the soil water movement mechanism in the coarse-textured soil. It is worth noting that the data in Figs Considering Eq. (31), a decrease in k by order of magnitude, i.e., migrating to fine-textured soil, yielded a 24% decrease in W and a 119% increase in Z . Likewise, in Eq. (32), a decrease in k by order of magnitude, i.e., migrating to fine-textured soils, resulted in approximately 15% increase in W and an approximately 23% increase in Z. To assess the sensitivity of the soil wetting geometry with respect to discharge ( q ), the parameters V , D , and k are held constant, and the resultant relationships are outlined by Eqs. (32) and (33). Doubling q for the silty clay loam soil resulted in a 5% decrease in W and a 27% increase in Z . Similarly, for the sandy soil when q was doubled, there was a 4% increase in W and a 6% increase in Z . Table 6 presents a summarized sensitivity evaluation containing hypotheticals and the resultant wetted horizontal and vertical wetted distances. MTI exhibited an increase in both W and Z in sandy soil whilst it exhibited an increase in Z and a decrease in W for silty clay loam. Regarding the silty clay loam soil, the findings contradict Schwartzman and Zur 15 , who posited that an increase in q results in an increase in W and a decrease in Z of a Gilat loam soil under sub-surface drip irrigation. The difference in behaviour is attributed to the porous nature of MTI, wherein discharge is not of a point source nature. The sensitivities of W and Z to placement depth D for the respective soils was characterised by Eqs. (34) and (35). An increase in D by a unit magnitude (from 0.2 to 0.3 m) for the silty clay loam soil resulted in an approximately 3% decrease in W and an approximately 53% increase in Z . For the sandy soil, a unit increase in D resulted in a 4% increase in W and a 6% increase in Z . According to Bresler 38 W increases for low k values (fine textured soils) and Z increases by a high magnitude for soils with a high k value (coarse textured soils). To gauge the sensitivity of W and Z to V, the parameters D , q and k were assumed constant. This yielded Eqs. (36) and (37). Models' validation. In order to obtain simulated wetted distances ( W and Z ), an estimation of a range of values for V was made using the sensitivity analysis relationships in Eqs. (36) and (37), and the resultant simulated Z and W for the semi-permeable porous wall line source 2-D flow model were computed. A correlation test based on the R 2 was carried out on a plot of simulated W and Z against observed W and Z (Fig. 4). The correlation coefficients R 2 > 0.75 showed a good agreement between the observed and simulated. The silty clay loam soil exhibited a wetting pattern on the soil surface, so for that reason, the experimental data from the MPS-2 sensor buried at 0.1 m was excluded. The soil surface wetting phenomenon was observed in both phases of the experiment. A similar observation was made by Kanda, Senzanje 4 for an MTI tubing buried at a depth of 0.2 m. Fan, Huang 39 also made a similar observation and posited that shallow buried depth facilitates upward water movement, a phenomenon observed in fine-textured soils. The models' evaluation revealed a satisfactory performance. For instance, the silty clay loam models' had a nRMSE of 0.84% and 8.80% for W and Z , respectively, a NSE > 0.5, a PBIAS < ± 25% and an index of agreement ( d ) of 1 and 0.98 for W and Z, respectively (Figs. 5 and 6). The sandy soil exhibited a satisfactory performance as evidenced by a nRMSE of 0.3% and 2.5% for W and Z respectively, a NSE > 0.75, a PBIAS < ± 15% and an index of agreement (d) of 0.6 and 0.3 for W and Z, respectively. The model underestimated the wetted depth ( Z ) for the sandy soil while overestimating the wetted width ( W ). A one-way ANOVA revealed a statistically significant difference ( p < 0.05) in both the observed and simulated wetted W and Z under sandy soil. For the silty clay loam soil, there was no statistically significant difference ( p > 0.05) between observed W and simulated W ; similarly there was no statistically significant difference ( p > 0.05) between observed Z and simulated Z. Scenario analysis. Silty clay loam. Soil texture influenced water movement ( p < 0.05). Under the silty clay loam soil, the models over-estimated the wetted width ( PBIAS ≤ 18% ), which signified a satisfactory model performance. Other model evaluation metrics (nRMSE ≤ 0.05 , EF < 0.5 ) revealed a good model performance for the wetted (Fig. 7). Interestingly, the model produced a very good agreement between the observed and simulated results at operating pressures of 80-160 kPa. The anticipated result was a good agreement at the manufacturer's design operating pressures of 20-60 kPa. This observation can be potentially attributed to the nature of the Ukulinga soil used for the experiment. The wetted depth (Z) model performed poorly at lower discharges and operating pressures (20-100 kPa). It exhibited a relatively satisfactory performance at operating pressures of (36) Silty clay loam soil (Fig. 8). Overall, the silty clay loam soil exhibited a pronounced lateral geometry compared to the vertical geometry. The finding concurs with Fan, Huang 39 , who postulated that wetted depth is low in high clay content soils. In addition, a wetting front experiment by Cote, Bristow 11 attributed the fine-textured soil wetting pattern to the dominance of capillarity and matric tension. Saefuddin, Saito 40 observed pronounced radial soil water movement in a silt profile under a multiple outlet ring-shaped emitter. Fan, Huang 39 , in their MTI wetting front experiment, observed a slow lateral water movement after initial wetting of the silt-loam soil after 48 h, much similar to this study timeline. Sandy soil. Granular soils exhibit a high vertical water movement due to granular pores and dominant gravitational forces; thus, the sandy soil had a greater wetted depth than the silty clay loam soil. The findings concur with a study by Cote, Bristow 11 , who performed a wetting front experiment with a sandy soil under a trickle source emitter. Ghumman, Iqbal 41 attributed the high wetted depths in sandy soils to porosity. Although this study used a line source porous pipe, the findings relate due to the soil hydraulic characteristics factor. Another attribute to high Z in sandy soils is the low macroscopic capillary length in sand which favours gravity as compared to lateral or upward movement. Siyal and Skaggs 42 observed a similar phenomenon in modelling a sandy soil using HYDRUS 2D. The model poorly simulated the wetting distributions under an operating pressure range of 20-160 kPa (Table 7). Irrigation implication. The developed soil wetting geometry models can be adopted to estimate soil wetting geometries for particular soils in question, thus ensuring optimal placement depth and spacing of MTI laterals. Knowledge of wetting geometries can potentially aid irrigators in adopting optimal lateral placement depth and spacing. The wetting depth models can inform irrigators on irrigation application times, thus availing optimal volume to the root zone and minimizing deep percolation and soil water loss due to evaporation. For fine-textured soil, shallow buried depth avails water to the soil surface that will be lost due to soil evaporation. The lateral water front for fine-textured soils expanded faster than that of coarse-textured soils; hence lateral www.nature.com/scientificreports/ placement under fine texture soils should be strategically placed to promote an optimal overlap between row crops. For coarse texture, close lateral placement will be required to create an optimal wetting front overlap. (21) and (24)-(25) may require testing in different geographical locations and assess their universal suitability. The study was conducted for 20 h under a silty clay loam and 96 h for the sandy soil. The soil bin's lateral dimension (width) limited the testing times for silty clay loam. The implication was the influence of border effects on soil water movement if the experimental times went beyond 20 h. Likewise, for the macroscopic sandy soil, the experimental times were limited to 96 h because of the soil bin's depth restrictions. Models' development was also limited to the following constant inputs; placement depth ( D = 0.2 m) and discharge ( q = 2.39 l h −1 m −1 ). The wetted depth Z is not entirely an independent variable as factors such as crop-specific root water uptake influence the vertical soil-water movement. Conclusions and recommendations The study adopted the Buckingham π theorem or dimensional analysis to develop, calibrate and validate models' that simulate soil wetting geometries for MTI as a function of soil hydraulic conductivity ( k ), placement depth ( D ), emitter discharge ( q ), and soil water content ( V ). Soil texture significantly affected the wetting geometry under MTI. The silty clay loam models satisfactorily simulated the wetted width or lateral soil moisture movement. It, however, failed to simulate the wetted depth at an operating pressure range of 20-100 kPa, which translated to an operating discharge of 0.27-1.18 l h −1 m −1 . It can be concluded that the models are prescriptive, i.e., they should be used for the soils and conditions that they were developed under. The study also noted, judging from the wetting pattern of the fine-textured soil, there is potential in MTI to provide plants with water with minimized deep percolation losses. The study was done in a soil bin on bare homogenous soil. The researchers recommend the study be carried under field conditions for both cropped and un-cropped soils and test the suitability of the developed models. Furthermore, the experiment was carried on dry soil. Thus an investigation on the soil wetted pattern under an initially moist soil should be carried out and compared to the current study ( Supplementary Information 1).
v3-fos-license
2021-10-23T15:21:20.017Z
2021-10-21T00:00:00.000
239544129
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://acp.copernicus.org/articles/22/5515/2022/acp-22-5515-2022.pdf", "pdf_hash": "a509aef5ac5abe6ffe2f02eb85c095a04db88c75", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46774", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "5426544f6a9f9c93f496e187755fb5a709853380", "year": 2021 }
pes2o/s2orc
Offline analysis of the chemical composition and hygroscopicity of sub-micrometer aerosol at an Asian outflow receptor site and comparison with online measurements Abstract. Filter-based offline analysis of atmospheric aerosol hygroscopicity coupled to composition analysis provides information complementary to that obtained from online analysis. However, its application itself and comparison to online analysis have remained limited to date. In this study, daily submicrometer aerosol particles (PM0.95, 50 % cutoff diameter: 0.95 μm) were collected onto quartz fiber filters in Okinawa Island, a receptor of East Asian outflow, in the autumn of 2015. The chemical composition of water-soluble matter (WSM) in PM0.95 and PM0.95 itself, and their respective hygroscopicities were characterized through the offline use of an aerosol mass spectrometer and a hygroscopicity tandem differential mobility analyzer. Thereafter, results were compared with those obtained from online analyses. Sulfate dominated the WSM mass (60 %), followed by water-soluble organic matter (WSOM, 20 %) and ammonium (13 %). WSOM accounted for most (93 %) of the mass of extracted organic matter (EOM) and the atomic O to C ratios (O : C) of WSOM and EOM were high (mean ± standard deviation were, respectively, 0.84 ± 0.08 and 0.79 ± 0.08), both of which indicate highly aged characteristics of the observed aerosol. The hygroscopic growth curves showed clear hysteresis for most samples. At 85 % RH, the calculated hygroscopicity parameter κ of the WSM (κWSM), WSOM, EOM, and PM0.95 (κPM0.95) were, respectively, 0.50 ± 0.03, 0.22 ± 0.12, 0.20 ± 0.11, and 0.47 ± 0.03. An analysis using the thermodynamic E-AIM model shows, on average, that inorganic salts and WSOM respectively contributed 88 % and 12 % of the κWSM (or κPM0.95). High similarities were found between offline and online analysis for chemical compositions that are related to particle hygroscopicity (the mass fractions and O : C of organics, and the degree of neutralization), and also for aerosol hygroscopicity. As possible factors governing the variation of κWSM, the influences of WSOM abundance and the neutralization of inorganic salts were assessed. At high RH (70–90 %), the hygroscopicity of WSM and PM0.95 was affected considerably by the presence of organic components; at low RH (20–50 %), the degree of neutralization could be important. This study not only characterized aerosol hygroscopicity at the receptor site of East Asian outflow, but also shows that the offline hygroscopicity analysis is an appropriate method, at least for aerosols of the studied type. The results encourage further applications to other environments and to more in-depth hygroscopicity analysis, in particular for organic fractions. standard deviation were, respectively, 0.84±0.08 and 0.79±0.08), both of which indicate highly aged characteristics of the observed aerosol. The hygroscopic growth curves showed clear hysteresis for most samples. At 85 % RH, the calculated hygroscopicity parameter  of the WSM (WSM), WSOM, EOM, and PM0.95 (PM0.95) were, respectively, 0.50±0.03, 0.22±0.12, 0.20±0.11, and 0.47±0.03. An analysis using the thermodynamic E-AIM model shows, on average, that inorganic salts and WSOM respectively contributed 88 % and 12 % of the WSM (or PM0.95). High similarities were found between offline and 5 online analysis for chemical compositions that are related to particle hygroscopicity (the mass fractions and O:C of organics, and the degree of neutralization), and also for aerosol hygroscopicity. As possible factors governing the variation of WSM, the influences of WSOM abundance and the neutralization of inorganic salts were assessed. At high RH (70-90 %), the hygroscopicity of WSM and PM0.95 was affected considerably by the presence of organic components; at low RH (20-50 %), the degree of neutralization could be important. This study not only characterized aerosol hygroscopicity at the receptor site 10 of East Asian outflow, but also shows that the offline hygroscopicity analysis is an appropriate method, at least for aerosols of the studied type. The results encourage further applications to other environments and to more in-depth hygroscopicity analysis, in particular for organic fractions. Introduction Hygroscopicity of atmospheric aerosols is a key property related to its effects on climate and air quality. It influences the 15 aerosol's light scattering and absorption ability (Titos et al., 2016;Zhou et al., 2020) and therefore affects visibility and the radiative balance of the Earth. Moreover, it influences the capability of aerosol particles to act as cloud condensation nuclei (CCN) under supersaturated water-vapor conditions, which further influences the radiative balance by affecting the optical property and lifetime of clouds (Mcfiggans et al., 2006). In addition, the absorption of water by aerosol particles might serve important media for aqueous-phase reactions (McNeill, 2015;Cheng et al., 2016). The hygroscopicity of aerosol particles 20 might also influence their adverse effects on human health: aerosol particle deposition in a human body is expected to depend on hygroscopic growth under high relative humidity (RH) in the respiratory system (Braakhuis et al., 2014). The hygroscopicity of atmospheric aerosol is governed by the chemical composition. It is often represented by hygroscopicity parameter . Several hygroscopicity studies have been performed for atmospheric particles or particles generated by extracts 25 from atmospheric aerosol samples. Whereas the ability of the particles to grow to cloud droplet size under supersaturated water vapor conditions has been investigated using a cloud condensation nucleus (CCN) counter, alternatively, the growth of particle size as a result of humidification under sub-saturated conditions has also been investigated, for example using a hygroscopicity tandem differential mobility analyzer (HTDMA). At water activity (aw) of around 0.9 or higher, inorganic salts such as NaCl and (NH4)2SO4 present high  values of 0.5-1.4; atmospheric organic aerosol (OA) components present intermediate  values 30 of 0.01-0.5 (Petters and Kreidenweis, 2007). By contrast, black carbon retains almost no water (Guo et al., 2016) and its  value can be inferred as zero. The  values of ambient aerosol particles are explained by the combination of water uptake by https://doi.org/10.5194/acp-2021-704 Preprint. Discussion started: 21 October 2021 c Author(s) 2021. CC BY 4.0 License. respective components in the mixture. In the low to middle RH range, the deliquescence and efflorescence of inorganic salts can strongly affect the hygroscopic growth of atmospheric particles, and could result in hysteresis according to the history of RH . For the hygroscopicity of ambient particles, the composition of inorganics, including the degree of neutralization, affects their contribution to particle hygroscopicity Freedman et al., 2019). In addition, the contrasting hygroscopicity of organics and inorganics are responsible for variations of their hygroscopicity (e.g., 5 Gunthe et al., 2009;Cerully et al., 2011;Pierce et al., 2012;Levin et al., 2014;Deng et al., 2018). The dominant components of atmospheric aerosols govern the dependence of aerosol hygroscopicity on locations: hygroscopicity in the forest atmosphere (Gunthe et al., 2009;Hong et al., 2014), where OA dominates the aerosol composition, is generally less than that in the marine atmosphere Pringle et al., 2010), where inorganic salts dominate. Moreover, while the oxygenation of OA relates to its hygroscopicity (Kuwata et al., 2013), correlation from analysis of atmospheric aerosols can be poor (Kuang 10 et al., 2020). Whereas multiple compositional factors are expected to control the aerosol hygroscopic growth as explained above, studies elucidating variations of hygroscopic growth under different atmospheric environments are few, which can be attributed to the lack of hygroscopicity analyses coupled with chemical composition analyses. For characterizing the hygroscopicity of atmospheric aerosols, offline analysis, i.e., the collection of aerosol samples on 15 substrates, followed by analysis of the hygroscopicity of chemical components therein, provides information that complements information obtained from online analysis. Such offline analyses have been conducted for urban aerosols (e.g., Aggarwal et al., 2007;Mihara and Mochida, 2011) and aerosols in remote environments (e.g., Silvergren et al., 2014;. For offline methods, hygroscopicity of aerosol particles with size up to ~1 µm or larger was analyzed, providing data for hygroscopicity in a wide range of particle sizes, which are often difficult to obtain by online analyses. 20 Furthermore, whereas information related to the mixing state is lost, offline methods enable investigation of the hygroscopicity of specific compound groups in aerosols, for example, water-soluble matter and humic-like substances (Gysel et al., 2004). Moreover, whereas field deployments of online instruments such as HTDMA might be a heavy duty and hinder observations particularly at remote sites, offline analysis can be a good alternative for aerosol hygroscopicity studies. Recent studies have indicated that offline use of an aerosol mass spectrometer (AMS) can be a useful means to elucidate the contribution of OA 25 component to aerosol hygroscopicity because of its capability of quantifying organic mass in addition to organic carbon, and to characterize the chemical structure of OA (Mihara and Mochida, 2011;Lee et al., 2019). More offline studies, in particular those of the role of OA, should be undertaken to characterize aerosol hygroscopicity further. Positive and negative artifacts have been evaluated for offline analyses of the concentrations of aerosol chemical components 30 (Turpin et al., 2000;Chow et al., 2005). Sampling artifacts are inherent to offline analyses, and might also affect offline hygroscopic growth measurements. However, the propriety of the offline method for quantifying aerosol hygroscopicity is not evaluated tentatively. Bias might arise from sampling artifacts by adsorption or evaporative loss of compounds and degradation of collected aerosol components, as in the case of the quantification of chemical components. Although full resolution of the https://doi.org/10.5194/acp-2021-704 Preprint. Discussion started: 21 October 2021 c Author(s) 2021. CC BY 4.0 License. degree of such artifacts is difficult, comparison between offline and online results from measurements of chemical composition and aerosol hygroscopicity is expected to constrain the possible magnitudes of artifacts, and warrant the further utilization of offline methods. In this study, we analyzed the hygroscopicity of submicrometer (PM0.95, 50 % cutoff diameter: 0.95 m) aerosol samples 5 collected in Okinawa, a remote island in Japan. We interpreted results based on the chemical composition analysis, including the offline use of an aerosol mass spectrometer. Okinawa is considered a receptor site of aerosols from the Asian continent, thereby suited to characterize the nature of the hygroscopicity of aged atmospheric aerosols after long-range transport. Although a few reports have described the relation between the chemical composition and the hygroscopicity in Okinawa (Mochida et al., 2010;Cai et al., 2017), no report of the relevant literature has described a study of their mutual quantitative 10 relation. Here, based on measurements of the chemical composition of PM0.95 and the hygroscopic growth of the extracted water-soluble matter (WSM) in PM0.95, the hygroscopicity parameter  of WSM, water-soluble organic matter (WSOM), and PM0.95 at 20-90 % RH are characterized. Factors responsible for the hygroscopicity parameters are assessed. This study, an extension of our online aerosol hygroscopicity study (Cai et al., 2017), aims at characterizing the RH and composition dependence of the hygroscopicity of aged aerosols after their atmospheric transport. In addition, from a methodological 15 viewpoint, the offline analysis of the composition and hygroscopic growth using filter samples are assessed by comparison with online analysis. Aerosol sampling and extraction Aerosol samples were collected at Cape Hedo Atmosphere and Aerosol Monitoring Station (128. 15 °E, 26.52 °N) of the 20 National Institute for Environmental Studies, Japan, in Okinawa Island during 26 October and 9 November, 2015. It is a receptor site of Asian outflow after long-range transport (Takami et al., 2007;Lun et al., 2014). The aerosols were collected daily on quartz fiber filters using a high-volume aerosol sampler (Model-120 B; Kimoto Electric Co. Ltd.) equipped with a cascade impactor Tisch Environmental Inc.). Details of sampling periods for the respective samples are presented in Table S1. The respective means of the RH, air temperature, and wind speed during the sampling days were 75.4 %, 23.8 °C, 25 and 3.8 m s −1 . Precipitation was only observed on 30 October (Fig. S1). The PM0.95 samples collected on backup filters were used for analysis. The quartz fiber filters were pre-combusted at 450 °C for 6 h before use. The high-volume sampler was placed on the rooftop of a station building. Its inlet was located about 4 m above the ground. The flow rate of the sampler was about 1,100 L min −1 . For each sample, about 1,600 m 3 of air was aspirated. Blank samples were collected by operating the sampler for only 10 s. After sample collection, the filter samples were stored in freezers before analysis. For offline analyses using the HTDMA and the AMS, WSM and water-insoluble organic matter (WISOM) in each aerosol sample were extracted as follows. First, three punches (34 mm diameter) from each filter sample were ultrasonicated with 3 g water for 15 min. The solution was then filtered with a Teflon filter (0.20 m, Millex-FG; Millipore Corp.). For each aerosol sample, the extraction was repeated three times and the WSM solutions were combined. Then, the WISOM in the same sample punches were extracted by ultrasonication with first 3 g of methanol once and then 3 g of dichloromethane/methanol (2/1, v/v) 5 mixture three times. The extract solutions were filtered through the Teflon filter used for the filtration of WSM, and the solutions after filtration were combined. The combined WISOM solution was dried with a rotatory evaporator and was redissolved in dichloromethane/methanol (2/1, v/v) solution. For TOC analyses for WSM, three punches (diameter: 34 mm) of each filter sample were extracted ultrasonically once with 20 ml ultrapure water for 15 min before being filtered through a syringe filter; similarly, for IC analyses for WSM, one punch with a diameter of 34 mm was extracted with 10 ml ultrapure 10 water (20 min ultra-sonication) and then filtered (Müller et al., 2017b). Hygroscopic growth measurement for WSM The hygroscopic growth factors (gf) of WSM at 20, 30,40,50,60,65,70,75,80,85, and 90% RH in humidification and dehumidification branches were obtained using an HTDMA. For measurements, a WSM solution was nebulized using a homemade nebulizer equipped with a syringe pump to generate WSM aerosol particles. After the generated WSM aerosol was 15 passed through a Nafion humidifier (NH1, MH-110-12F-4; Perma Pure LLC), it was dried with two diffusion dryers in series containing silica gel (White, Middle Granule; Kanto Kagaku) and a molecular sieve (13X/4A mixture; Supelco and Sigma-Aldrich). The dried aerosol flow was transferred through an impactor (model 1035900; TSI Inc.) with a 0.071 cm diameter orifice in the front. It was then neutralized using an Am 241 neutralizer. The neutralized aerosol was passed through the first differential mobility analyzer (DMA1, Model 3081; TSI Inc.) in the HTDMA. The aerosol particles with 100 nm dry diameter 20 were selected. In humidification mode, dry 100 nm aerosol particles were then humidified using a second Nafion humidifier (NH2, MD-110-24S-4; Perma Pure LLC). In dehumidification mode, dry 100 nm particles were first humidified to > 97 % RH using a third Nafion humidifier (NH3, MD-110-24S-4; Perma Pure LLC) before the particles were transferred to NH2. The aerosol particles downstream of NH2 were scanned using a second DMA (DMA2, Model 3081; TSI Inc.) coupled with a 25 condensation particle counter (CPC, model 3775; TSI Inc.). The aerosol flow rates of both DMA1 and DMA2 were 0.3 L min −1 . The RH values at the outlet of sheath flow of the DMA1, the inlet of NH2, the inlets of sample and sheath flows of the DMA2, and the outlet of sheath flow of the DMA2 were monitored using RH sensors (HMT337; Vaisala). During the experiment, the RH inside the DMA1 was lower than 10 %. The residence time from the outlet of the NH2 to the inlet of DMA2 was calculated as 13 s. The gf at 90 % RH in humidification and dehumidification branches was measured separately 30 one month later than the gf at other RH. The gf is defined as the ratio of the mobility diameter of particles classified using DMA2 to the dry mobility diameter (100 nm), which were retrieved using the Twomey algorithm as described by Mochida et al. (2010). The mode gf of fitted lognormal distributions under different RH conditions were used to represent the hygroscopic https://doi.org/10.5194/acp-2021-704 Preprint. Discussion started: 21 October 2021 c Author(s) 2021. CC BY 4.0 License. growth of WSM and for the derivation of the hygroscopicity parameter (WSM) following the -Köhler theory (Petters and Kreidenweis, 2007) as where σ represents the surface tension at the solution-air interface, Mw and ρw respectively denote the molecular mass and density of pure water, dwet stands for the product of gf and ddry (here is 100 nm), R is the universal gas constant, and T is the 5 absolute temperature. In Eq. (1), T of 298.15 K was used considering the temperature at the outlet of sheath flow of DMA2 (24.22-26.59 °C). The surface tension of pure water was σ. The equations used to calculate the density and surface tension of pure water and the densities of dry inorganic salts are the same as those used in the online Extended AIM Aerosol Thermodynamics (E-AIM) model (Sect. 2.5). Measurement data were used to derive gf and WSM only if the RH values at the outlet of DMA2 meet certain criteria (Text S1). 10 Before hygroscopic growth measurements for aerosol extracts, the size selection performance of DMA1 and DMA2 was assessed using PSL particles of standard size (Text S2). Furthermore, the hygroscopic growth of pure ammonium sulfate (AS, 99.999 % trace metals basis; Aldrich) particles were measured following the same procedure as that for the WSM samples to confirm the HTDMA performance (Text S3). The gf of dry AS particles (RH = 7.22±0.04 %) was measured to quantify the 15 slight difference of sizing (1.9 %) between the two DMAs. This difference has been corrected for derivation of gf of WSM samples and AS particles. More details about the quality control of the offline analyses are presented in Text S4. Chemical composition analyses Ammonium, nitrate, sulfate, sodium, potassium, calcium, magnesium, chloride, and methane sulfonic acid (MSA) in WSM samples were quantified using an ion chromatograph (Model 761 compact IC; Metrohm AG). Concentrations of water-soluble 20 organic carbon (WSOC) in WSM samples were determined using a total organic carbon analyzer (Model TOC-LCHP; Shimadzu Corp.). The results are presented in Table S3. To characterize the chemical structures of WSOM and WISOM and to quantify their concentrations, WSM and WISOM samples were analyzed using a high-resolution time-of-flight mass spectrometer (AMS; Aerodyne Research Inc.; Decarlo et 25 al., 2006) by nebulizing the solutions using Ar and by transferring the generated particles to the AMS. Before analysis by AMS, the WSM aerosol flow was dried using two diffusion driers filled with silica gel. The WISOM aerosol flow was dried by two diffusion driers filled in series with activated carbon (to remove dichloromethane and methanol vapor) and silica gel. The AMS were operated in both V-mode and W-mode. The W-mode data were analyzed to obtain the atomic ratios of O to C Concurrent online measurements of ambient aerosol During the period of the filter sampling of PM0.95, the mass concentrations of non-refractory chemical components (sulfate, nitrate, ammonium, chloride, and organics) and black carbon (BC) in PM1 (50 % cutoff diameter: 1 m) were measured respectively using the same AMS as that for the offline analysis, and a filter-based absorption photometer continuous soot monitoring system (COSMOS; Kanomax, Osaka, Japan) (Mori et al., 2014;Ohata et al., 2019). Furthermore, the number-size 20 distributions of submicrometer aerosols were measured using a scanning mobility particle sizer (SMPS) composed of a DMA (model 3081; TSI Inc.) and a water-based CPC (model 3785; TSI Inc.). The AMS was operated in both V + pToF-mode and W-mode with time resolution of 30 min. The bulk mass concentrations of non-refractory aerosol components were derived from V-mode data. Composition-dependent collection efficiency (Middlebrook et al., 2012) was applied for quantification. The W-mode data were analyzed to obtain the O:C and H:C and densities of organics in the manner of the offline analysis. 25 The SMPS measured the aerosol number-size distributions at diameters of 13.8-749.9 nm every 5 min. The DMA in the SMPS was operated with an aerosol flow rate of 0.3 LPM and a sheath to aerosol flow ratio of 10:1. Compressed dry pure air was supplied to the CPC through an equalizer to complement its total inlet flow rate of 1.0 LPM. Temperature and RH of ambient air, wind speed and direction, and precipitation were measured using a weather sensor (model WXT520; Vaisala). The AMS was calibrated before both online and offline (Sect. 2.3) measurements using the same procedures as those reported by Deng 30 et al. (2018). The SMPS was calibrated using standard size PSL particles (Text S2) before ambient measurements. Furthermore, a hygroscopicity and volatility tandem differential mobility analyzer (H/V-TDMA) was deployed during 1-9 November 2015 to measure the size-resolved aerosol hygroscopicity and volatility. Related details have been presented by Cai et al. (2017). For comparison between offline and online data, the time windows for offline data were truncated to 10 am to 10 am (24 h). Online data were averaged for the one-day periods (Table S1). Prediction of WSM hygroscopicity based on E-AIM model Hygroscopic growth of the WSM sample for the water activity (aw) range of 0.10-0.99 was predicted without considering the water uptake by WSOM using the online Extended AIM Aerosol Thermodynamics Model III (E-AIM III, 5 http://www.aim.env.uea.ac.uk/aim/model3/model3a.php, last access: 1 August 2019; Clegg et al., 1998;Wexler and Clegg, 2002). The inorganic chemical components of WSM (sulfate, sodium, and ammonium) obtained from IC analysis and the WSOM obtained from TOC and offline AMS analyses were used for derivation. Potassium, calcium, magnesium, nitrate, and chloride were not considered in the E-AIM because of their very low concentrations (Table S3). The RH-dependent hygroscopicity parameters of WSM, WSM, were predicted from hygroscopic growth data following the -Köhler theory. The 10 RH-dependent hygroscopicity parameters of water-soluble inorganic matter (WSIM) in each WSM sample, inorg, were derived similarly to those for WSM. Details of these derivations are presented in Text S6. Estimating the hygroscopicity of WSOM, EOM, and PM0.95 The hygroscopicity parameters of WSOM (WSOM), EOM (EOM), and PM0.95 (PM0.95) were calculated on the assumption that the volumes of water retained by respective components are additive (Petters and Kreidenweis, 2007): 15 Therein, WSM is the hygroscopicity parameter of WSM particles; WSOM and inorg respectively denote hygroscopicity parameters of WSOM and WSIM. The WSOM/WSM and WSIM/WSM respectively stand for the volume fractions of WSOM and WSIM in WSM, as derived from offline IC, TOC, and AMS analyses (Text S7). 20 The hygroscopicity parameter of EOM was estimated on the assumption that the hygroscopicity parameter of WISOM, WISOM, is zero, as where / and / respectively represent the volume fractions of WSOM and WISOM in EOM (Text S7). With consideration of EC and WISOM but neglecting other water-insoluble inorganics in PM0.95, the hygroscopicity parameter 25 of PM0.95 was also estimated. respectively represent the volume fractions of WSM, WISOM, and EC among the sum of these three components (Text S7). Here, the hygroscopicity of EC, EC, was assumed to be zero. Mass concentrations and composition of aerosol components The This large proportion suggests that the studied aerosol was aged substantially, considering the much lower proportions against total OM in East Asian suburban (approx. 60 %; Müller et al., 2017a) and urban environments (27-45 %; Miyazaki et al., 2006), which are based on our mass conversions, assuming factors of 1.8 and 1.2 to convert WSOC to WSOM and WISOC to 15 WISOM, respectively. The mass ratio of EC to EOM was on average 11 %, which is similar to the proportion to total OM (12 %) based on earlier reported OC:EC over the Sea of Japan and offshore of Japan (Lim et al., 2003; a factor of 2.1 (Turpin and Lim, 2001) was assumed to convert OC to OM). As shown in Fig. 1d, the aerosol number-size distribution shows bimodal or broad unimodal characteristics. 20 For sulfate, organics, ammonium, and EC (BC), the relative abundances among them from the offline analysis showed moderate agreement with those from the online analysis (61.9 %, 22.0 %, 13.8 %, and 2.4 %, respectively, for sulfate, EOM, ammonium, and EC from offline measurements during the period with effective data). The coefficients of determination (r 2 ) of the mass fractions of sulfate, organics, ammonium, and EC (BC) in those four aerosol components from offline and online analyses were, respectively, 0.62, 0.31, 0.22, and 0.09 (Fig. S6) positive/negative artifacts, considering its low volatility (Johnson et al., 2004) and reported absence of SO2 artifacts for filters of other types (Eldred and Cahill, 1997). The mean mass concentration of BC from online analysis was almost equal to those of EC from the offline analysis: the ratio of the former to the latter was 1.02. Figure 2 presents backward air mass trajectories for PM0.95 sampling. For most days, the three-day trajectories passed over the 5 Asian continent and/or the Japan archipelago, but maritime air masses also arrived at the observation site during 6-8 November 2015. A comparison between air mass trajectories and aerosol concentration data shows that maritime air masses during 6-8 November are characterized by lower aerosol mass concentrations, but higher mass fractions of sodium (≥5 %) than the other days influenced by continental air masses. The mean mass concentrations of sulfate, WSOM, ammonium, and EC from offline analyses during 6-8 November were, on average, 1/6, 1/5, 1/5, and 1/2 of those during other days, respectively, whereas the 10 mean mass concentration of sodium during the period (0.07 g m −3 ) was similar to that of other days (0.06 g m −3 ). Two types of composition related to aerosol hygroscopicity were investigated: the O:C of organics and the molar ratio of ammonium to sulfate (after omitting sulfate that was preferentially neutralized by sodium) (RA/S; Text S7), which represents the degree of neutralization of sulfate by ammonium and sodium. For the derivation of RA/S, the neutralization of sulfate by 15 other cations was not considered because their contributions were small. The O:C of EOM from PM0.95 samples is presented in Figure 3a. The mean ± standard deviation of the ratio was 0.79±0.08. Although the value was 16 % lower than that of the mean O:C of OA from the online analysis of PM1 (0.95±0.09), they were in good agreement (r 2 =0.58; Fig. S7). The O:C values of EOM were in the range of 0.64-0.94, suggesting a highly aged nature of the observed OA (Canagaratna et al., 2015). The RA/S values from PM0.95 samples are presented in Fig. 3b: the mean ± standard deviation was 1.45±0.34. The results suggest 20 that except for the aerosols on 7 and 8 November, the studied aerosols were fairly acidic. To compare the offline and online analyses, RA/S was also derived by ignoring the neutralization of sulfate by sodium, which is presented as RA/S′ (Fig. 3b). The mean ± standard deviation of RA/S′ from the offline analysis (1.29±0.21) was similar to that from the online analysis (1.29±0.40). The two also showed good agreement (r 2 =0.52; Fig. S7). If a portion of sulfate was in the form of sodium sulfate at the time of online AMS analysis, then this fraction might have not been detected considering the high melting temperature of the salt. 25 However, the offline analysis suggests that the fraction of sulfate neutralized by sodium was, on average, only 5 %. Hence, it is not expected to affect the comparison strongly. On 7 and 8 November, the days under the influence of maritime air masses, the O:C and RA/S from the offline analysis values were, respectively, lower and higher than those during other periods when the air masses were from the Asian continent and/or 30 the Japan archipelago (Figs. 2 and 3). In addition, comparison between RA/S′ and RA/S from the offline analysis shows that sodium neutralized a larger fraction of sulfate on the two days. The results suggest that air masses from the Asian continent transported more aged and acidic aerosol, and that air masses from the North Pacific included less-oxygenated and moreneutralized aerosol. However, it is noteworthy that the possible influence of the external mixing state on the neutralization of https://doi.org/10.5194/acp-2021-704 Preprint. Discussion started: 21 October 2021 c Author(s) 2021. CC BY 4.0 License. aerosols is not considered. The more-acidic nature of the continental aerosol is expected to be contributed by the formation of sulfate during the transport. In addition, the oxidation of MSA from marine biological activity is expected to contribute to sulfate. The low relative abundance of sodium in the continental aerosol also accounted for the more-acidic nature. Hygroscopicity of WSM and PM0.95 5 The mean  standard deviation of the measured gf values for WSM particles are presented in Fig. 4 and Table S4. The mean gf values predicted from the E-AIM model without consideration of the water retained by WSOM are also shown in the figure. The gf values of the respective WSM samples are presented in Fig. S8. The mean ± standard deviation of gf at 40, 60 and 85 % RH in the humidification (dehumidification) branch were, respectively, 1.04±0.02 (1.09±0.03), 1.13±0.05 (1.22±0.01) and 1.53±0.03 (1.53±0.02) ( Table S4). The obtained gf of WSM at 90 % RH (gf(90%)) was slightly lower than that of the WSM 10 from Chichijima Island (1.76-1.79), which was also influenced by transport from East Asia but was much farther to the east of the Asian continent compared with Okinawa (Boreddy et al., 2014;. It was also lower than the mean values for WSM during a cruise over the East China Sea (1.99, Yan et al., 2017), which was nearer the Asian continent. On the other hand, the obtained gf(90%) of WSM was higher than that of the WSM obtained during a cruise over the Bay of Bengal (1. 25-1.43, Boreddy et al., 2016), which was influenced by anthropogenic or biomass burning air masses. 15 For three studies (Boreddy et al., 2014;Yan et al., 2017), the WSM were extracted from total suspended particles that contain higher mass fractions of inorganics and sea salts than those examined for this study. By contrast, the WSM in the last referred study was extracted from PM2.5 (50 % cutoff diameter: 2.5 m) with higher mass fractions of organics. These compositional differences should explain the observed differences of gf(90%). 20 Hysteresis of the hygroscopic growth of the WSM particles was observed for most samples except for those collected on 26 October and 2 and 6 November (Fig. S8). The hysteresis was expected to have been caused by the influence of inorganic salts, as indicated by the differences in the predicted hygroscopic growth in humidification and dehumidification branches from the E-AIM model, where only the water retained by inorganics is considered. Being different from the observation, the hysteresis was predicted for almost all samples, which might result from the uncertainty in the quantification of inorganic salts and/or 25 the influence of organic components on the hygroscopicity of WSM (Choi and Chan, 2002). The deliquescence of WSM in the humidification branch was observed in the RH of 50-70 %. In this branch, the WSM shows prominent water uptake at RH as low as 20 % (Fig. 4a), being in contrast to the absence of hygroscopic growth of pure AS (Fig. S2). Water uptake of WSM at low RH in the humidification branch can be enhanced by highly acidic conditions (Sect. 3.1) and/or the presence of WSOM (Gysel et al., 2014). In the dehumidification branch, efflorescence was not evident down to 30 % RH for most samples, 30 indicating the existence of metastable conditions to retain water after experiencing high RH. The samples collected on 7 and efflorescence. In addition, the high RA/S (2.01 and 1.97 respectively) on these two days could have contributed to the high ERH, which is supported by the distinctive ERH among different forms of ammoniated sulfate (Tang and Munkelwitz, 1994). Whereas the external mixing state of atmospheric aerosol is lost by filter sampling, the former possibility implies that the seasalt component enhances the ability to effloresce once mixed with other inorganic components. This characteristic, however, is expected to be important only if such aerosols are transported to drier environments. 5 The obtained gf values as a function of RH were converted to corresponding  values. The mean  standard deviation of the measured  values for WSM as a function of RH are presented in Fig. 4b and Table S4. The  values from the E-AIM by ignoring the water uptake by organics are also shown in the same figure. The WSM of respective WSM samples are presented in Fig. S9. In the humidification branch, the measured WSM averaged for each RH were 0.17-0.24 (at 20-50 % RH) and 0.50-10 0.56 (at 70-90 % RH), respectively below and above the marked increase in WSM with the increase in RH, presumably indicating the deliquescence of major inorganic salts. Comparison between measured WSM versus predicted WSM shows that, on average, the measured WSM were greater than predictions for all RH, suggesting the ubiquitous contributions of WSOM to the measured WSM. The results at RH < 70 % were more deviated from the 1:1 line than those at RH ≥ 70 % RH (Fig. S10), which might indicate dominant contributions of WSOM to WSM at low RH for some aerosol samples (Gysel et al., 2004;15 Aggawarl et al., 2007). In the dehumidification branch, except for the case at 20 % RH, where the corresponding  value was 0.17, the  values of WSM were modestly high, with values of 0.42-0.57. The lack of a large dependence on RH suggests that efflorescence did not occur. Even if it did, it was for minor fractions of inorganics. The contribution of WSOM to the hygroscopicity of WSM 20 was evident from the fact that, except the sample collected on 1 November, the E-AIM model predicted  by ignoring the water retained by WSOM at RH ≥ 65 % were lower than the measured values (Fig. S9). The  values of WSM from respective PM0.95 samples in dehumidification branches are presented in Fig. 4c. At high RH (≥ 65 %), the difference in WSM among different samples was small compared with that at low RH, indicating that the difference in the composition among aerosol samples did not result in large variation in the hygroscopicity of WSM at these RH conditions. Clear variations in 25 hygroscopicity among samples at low RH can be explained by the influence of the degree of neutralization of inorganic salts and the abundance of organics. For example, the WSM on 1 November at ≤ 60 % RH was higher than on other days, which was likely to be related to a low RA/S ratio (approx. 0.80), as evidenced by the large E-AIM predicted WSM on this day (Fig. S9); the WSM on 26 October was lower than that on other days, which might be explained by the high mass fraction of WSOM ( Fig. 1b) in addition to the high RA/S (1.66). The contribution of chemical composition will be discussed further in later sections. 30 By considering the atmospheric concentrations of WISOM and EC, the  values of PM0.95 were estimated (Table S4). In the humidification branch, the PM0.95 were 0. respectively, (Table S4). In the dehumidification branch, except for the case at 20 % RH, where the corresponding PM0.95 was 0.16, they were in the range of 0.40-0.54 (Table S4). At 90 % RH, PM0.95 was in the range of 0.47-0.52, which is higher than that measured at a supersite in Hong Kong (0.18-0.48, Cheung et al., 2015) that was influenced by clean maritime air masses and/or polluted Asian continental and coastal inflows. and accumulation modes suggests that the aerosols experienced in-cloud processing Hoppel et al., 1986). 10 Similar high hygroscopicity of particles in the Aitken mode (40 nm diameter) and the accumulation mode (150 and 200 nm diameters) suggests the dominance of sulfate in both modes . The mean aerosol volume-number concentration presented unimodal distribution with mode diameter of 260 nm, indicating that aerosol mass in the accumulation mode dominates the total aerosol mass in the submicrometer size range. With regard to the influence of aerosol hygroscopicity on aqueous-phase chemical reactions on mass basis, the hygroscopicity of large aerosol particles might be more important. 15 Offline analysis extended the online hygroscopic analysis of <200 nm particles to the whole submicrometer size range, where most of the aerosol liquid water mass exists. The PM0.95 and online at 85 % RH for 200 nm particles are compared for the days when both data are available (Fig. 6), which showed moderate positive correlations: r 2 of 0.15 and 0.31 respectively for dehumidification and humidification branches. Results obtained from comparison of PM0.95 and online indicate that offline aerosol hygroscopicity analysis can be used as an alternative method, at least for the studied type of aerosols, for which the 20 sampling bias for semi-volatile ammonium nitrate is not significant because of its low abundance. Hygroscopicity of WSOM and EOM The hygroscopicity parameters of WSOM and EOM were calculated based on WSM from measurements in the dehumidification branch and the predicted water uptake by inorganic salts (Eqs. 2 and 3). The results are presented in Fig. 7 and Table S4. At 75-85 % RH, where the deviation of the measured  of AS from that predicted from the E-AIM model was 25 slight (≤ 5 %; Fig. S2), WSOM values were 0.19-0.22. Those values were higher than the  of WSOM from US national parks and Storm Peak Laboratory (0.05-0.15 at RH = 90 %; Taylor et al., 2017) and from fresh Indonesian peat burning particles (0.18 at RH = 85 %; Chen et al., 2017). In the same RH range, EOM was in the range of 0.17-0.20, and was, on average 9 % lower than that of WSOM. The EOM from this study was higher than the  of OA in the Western/Central Los Angeles Basin that was influenced by marine air masses (0.14 at RH = 74-92 %; Hersey et al., 2011). It was also higher than the  of OA that OA in a supersite in Hong Kong, for which only an upper limit value of 0.29 (at 90 % RH) was reported (Yeung et al., 2014). It is noteworthy that the estimated mean WSOM and EOM of approx. 0.2 is higher than the default  value of organics (org) of 0.14 used in an atmospheric aerosol model (Kawecki and Steiner, 2018). The different org values of different types or different atmospheric regions reported in this study and earlier studies described above suggest the importance to consider different org values depending on the types and origins of OA in model calculations. The WSOM and EOM values derived from the 5 measurement in the humidification branch at 85 % RH where WSM particles would be mostly or fully dissolved in water were also presented in Table S4. The mean values in the humidification branch were slightly higher than those in the dehumidification branch, but the characteristics explained above also apply to this condition. The fractional contributions of WSOM to the water uptake by WSM and PM0.95, represented respectively as (WSOM/WSM×WSOM)/WSM and (WSOM/PM0.95×WSOM)/PM0.95, are presented in Table S5. The contribution of WSOM to the water uptake by WSM and PM0.95 10 was 10-12 % at 75-85 % RH. Factors affecting the hygroscopicity of WSM and PM0.95 The discussion presented in earlier sections indicates that the water uptake of WSOM and the degree of neutralization of the inorganic components influence the hygroscopicity of WSM. Here, the influences of WSOM and RA/S on WSM and PM0.95 at 20-90 % RH are assessed in light of the variations of the hygroscopicity. 25 The relation between the mass fraction of WSOM (fWSOM) and WSM at 20-90 % RH in the dehumidification branch is presented in Figs. 8a and 8b. Despite the narrow range of fWSOM, moderate negative correlation between fWSOM and WSM was observed for all RH conditions, except for 90 % RH, indicating the importance of the relative contributions of WSOM and WSIM (mainly sulfate + ammonium) to the hygroscopicity of WSM. The poor correlation between fWSOM and WSM at 90 % RH in 30 the dehumidification branch was probably attributable to measurement uncertainty. Comparison between fWSOM and WSM in the humidification branch shows high correlation, as presented in Fig. 8b. This dependence is explained by the low https://doi.org/10.5194/acp-2021-704 Preprint. Discussion started: 21 October 2021 c Author(s) 2021. CC BY 4.0 License. hygroscopicity of WSOM compared to that of WSIM. The shaded areas in Fig. 8b represent WSM predicted by application of mean (i.e., fixed) value of inorg and mean ± standard deviation of WSOM for 85 % using Eq. (2). The prediction captures the measured dependence of WSM on fWSOM at 85 % RH, supporting the importance of fWSOM. As in the case of WSM, dependence of PM0.95 on the mass fraction of EOM in PM0.95 (fEOM) in the dehumidification branch was also observed for ≥60 % RH except for 90 % RH (Figs. S12a and S12b). The result suggests that the mass fractions of organic components played an important 5 role in the variation of the hygroscopicity of aerosol particles. The relation between the degree of neutralization represented by RA/S and WSM at 20-90 % RH is also analyzed for the dehumidification branch (Figs. 8c and 8d). Although the correlations between RA/S and WSM were weak for ≥60 % RH, clearer negative correlations were observed for <60 % RH. This result implies that the degree of neutralization is important to the 10 variation of WSM under low RH conditions. The correlation was absent for WSM predicted from E-AIM versus measured RA/S (r 2 ≤0.33 for RH≤80 %). Therefore, the relation might be associated with the efflorescence behavior of inorganic components. Negative correlation at <60 % RH was observed (r 2 : 0.58-0.77) even after excluding two samples with high relative abundances of sodium, which showed high ERH (Fig. 4c). Therefore, the deliquescence of ammoniated sulfate itself might be related to RA/S. An alternative explanation is that RA/S is related to water uptake by organics and/or their influence on the 15 efflorescence of inorganic salts. Although the small amount of aerosol water at <60 % RH might not strongly affect the particle optical property, it might have an important role in chemical reactions in the particles. Therefore, the relation between inorganic composition and water uptake should be assessed further, in addition to the role of acidity itself in the reactions. As in the case of WSM, the relation between RA/S and PM0.95 in the dehumidification branch was analyzed (Figs. S12c and S12d). The result suggests that the degree of neutralization of inorganic aerosol components is also important in low RH conditions. 20 In the humidification branch, moderately to highly negative correlations were found between WSM and fWSOM (and PM0.95 and fEOM) at ≥ 70 % RH (Figs. S13 and S14), indicating the contribution of WSOM to the water uptake of WSM (or PM0.95), being similar to the case of the dehumidification branch. Moderate positive (or negative) correlations of WSM or PM0.95 with RA/S were observed at 60 and 65 % RH (or 30 % RH), but for other RH conditions, correlation was not evident (Figs. S13 and S14). 25 This result contrasts with the prediction of WSM from the E-AIM model (for inorganic components), which instead show moderate to high negative correlations with RA/S ratio, in particular at 50-70 % RH (r 2 ≥0.80; Fig. S15). The strong positive correlations between E-AIM predicted DRH and RA/S ratio were also found ( Summary and Conclusions The composition of aerosols and the RH-dependent hygroscopic growth of aerosol components under the influence of the outflow from the Asian continent, as well as the air masses over the Pacific, were characterized based on analyses of submicrometer aerosol samples collected on filters in autumn 2015 in Okinawa, Japan. This offline analysis compensated for online analysis in terms of the quantification and characterization of water-soluble components and PM0.95 (50 % cutoff 5 diameter: 0.95 m), and of the measurement of the hygroscopic growth as a function of relative humidity. This study characterized the RH-dependent hygroscopicity of submicrometer aerosols and their chemical components, in particular organics, in the outflow region of East Asia. Another important point is that results from offline analyses were compared to those collected using online methods to assess the consistency of the results from the two different approaches. The hygroscopicity parameter values of PM0.95 at 85 % RH from offline methods were close to earlier reported values from online hygroscopicity measurements performed during the field campaign. Results obtained from this study extended the characterization of the studied aerosols by online analysis (≤200 nm), toward the mass/volume based mean diameter of the submicrometer aerosols. On the other hand, the similarity of the hygroscopicity parameter values from offline and online methods suggest the propriety of the offline method on aerosol hygroscopicity analysis, at least for remote sites at which the 5 aerosols are aged and semi-volatile ammonium nitrate is not abundant. This finding encourages further studies of the hygroscopicity of aerosol components, particularly OA, the hygroscopicity of which is not yet characterized well. Given that precise analysis of the hygroscopicity of OA is not easy based on online analyses, the offline approach is useful for better understanding of the relation between chemical structure, sources and hygroscopicity of WSOM and other organic components, because of the richness of information from the AMS spectra. For example, the hygroscopicity of humic-like substances and 10 other organic fractions and their contributions to total particulate matter are worth elucidating by the extension of the approach of this study. data only represent less than 2 % of the aerosols in a day, whereas offline data represent more than 95 % of aerosols in a day (10 am -10 am).
v3-fos-license
2020-10-28T18:31:01.231Z
2020-07-24T00:00:00.000
229065569
{ "extfieldsofstudy": [ "Sociology" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://revistes.ub.edu/index.php/matter/article/download/31971/32142", "pdf_hash": "37b79a5258f595257bb8eb026d5f96e42e10f73a", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46775", "s2fieldsofstudy": [ "Environmental Science", "Sociology" ], "sha1": "1c9c62934005c5f8e2cc4a98979bc0e64c9a25d1", "year": 2020 }
pes2o/s2orc
Affects, activisms and resistances facing the impacts of Capitaloceno: an embodied learning experience in Chile The planetary transformations of Capitalocene affect us in multiple and heterogeneous forms. In this context, activisms emerging as embodied, experiential and situated manifestations of affectation. This article is an exploration of the activisms and resistances against impacts that Capitalocene -specifically, the extractivismhas had in Chilean society, from the perspective and experience of our own trajectories as global south academics and activists, committed to the entanglements that emerge constantly in the face of the impacts. Our work refers to the affects and resistances that we as authors have had the chance to experience in spaces of training and companionship of activists who resist in territories affected by the mining, agro-export and energy industry; and those who studied the Diploma in Social Ecology and Political Ecology from the Group of Agroecology and the Environment at the University of Santiago, offered between 2013 and 2017. Based on these experiences, we argue that the "affective turn" offers an indispensable perspective about hegemony, resistances and political changes in the current crisis. Introduction This article is an exploration of our affectation and personal involvement with the destructive consequences of extractivism in Chile. Extractivism refers to the actualization and enhancement of a colonialist matrix of production in the Capitalocene (Ulloa, 2017) that transforms the corpo-materialities of the Global South in "raw material" for the global market (Gudynas, 2009;Svampa, 2013;Mansilla, 2017). In Latin America, this modality has been embodied not only by governments openly in favour of this approach but also for political coalitions self-defined as centre-left, under the promise of generating wealth for the whole population. It is also a continuation with colonial relations installed during the Europan rule, but perpetuated and reinforced by the local elites that inherited the states formed in the XIX century (Gonzáles-Casanova, 2006). Even though this promise has been broken on several occasions, this rupture is not an unexpected outcome, but rather a required condition of capitalism and the modern-colonial form of power that has shaped our region (Quijano, 2011). The intense exploitation leaves a legacy of pollution, drought and destruction that generates "unlivable" lives (Butler, 1993, p.3), uninhabitable territories and "sacrifice zones" (Holifield & Day, 2017;Bolados & Sánchez, 2017;Maino et Al., 2019). In our trajectories as educators, researchers and activists, we have witnessed how the malaise of the unlivable pushes the desire to act; that is, the resistances and insurrections (Rolnik, 2019, p.90). In Chile, these insurrections have managed to stale the development of energy projects, production and infrastructure (Carruthers, 2001, p.350-358;Schaeffer, 2017, p.93-94) and they seem to have reached a climax in the demonstrations of October 2019. In our own biographies, this malaise has motivated our own insurrect desire to act (Rolnik, 2019, p.90). With the support of the Agroecology and Environment Group (Grupo de Agroecología y Medio Ambiente, GAMA) from Universidad de Santiago de Chile (USACH) we designed and implemented a Diploma in Social and Political Ecology between 2013 and 2017, by María Paz Aedo as teacher coordinator and Gabriela Cabaña as a student and then a teacher. We managed to frame this formative programme under the figure of "outreach" without the pressure of professional or research orientations that are abundant in university courses of specialization. Thanks to that we constituted the Diploma as a space open to the exploration of affects through the exchange of testimonies, embodied experiences and thoughts of teachers and students. All with the aim of strengthening their potential for resistance and transformation (Koch, 2017;Cvetkovich, 2012). To give account of this process, this article is organised in three sections. First, we describe our experiences of the "unlivable" in territories affected by four of the main sectors of our extractive matrix: mining, agribusiness, forestry and energy. Then we refer to how we engage with the affective turn as an onto-epistemological context for our academic and political work. Finally, we offer our perspective on the trajectories and derived consequences of the Diploma as a liminal space rather than a formative one. Methodologically, our story relies on autoethnographic research. We explore our experience as actors embedded in reality (Hernández-Hernández, 2008, p.92), reflecting on the origin, development and unexpected effects of this space of learning. We have made use of our field notes elaborated in the context of our volunteering and companionship to communities and organizations in resistance; our reflections on processes of immersion, transitions and emergent phenomena in the context of the Diploma; and our experience participating in the "weaving" created with the alumni of that process. We aim explicitly at going beyond the supposition of autobiographical experiences as subjective and untransferable, claiming "the indissoluble mix between the traditionally called objective and subjective dimensions" through our narrative (Blanco, 2012, p.172) Unlivable territories Our story-journey on unlivable lives and sacrifice zones starts in the Atacama desert, the driest in the world. Most of the mining activity has been developed there, copper in particular, which is known as "Chile's salary" due to the centrality of its exports to Gross Domestic Product (GDP). In the hegemonic development imaginary, the desert is an "almost" empty, "almost" lifeless place. What damage could a few explosions here and there on a few bugs and a few people do, compared to all the richness that the mining business generates? Even the environmentalist mainstream associates "nature" to "green". How could the exploitation of a few square kilometres of barren land be significant? But the desert is not "empty". Its subtle threads of water and underground watersheds have sustained endemic wildlife, together with peasant and indigenous people, for hundreds of years. Life in the desert, a conjunction of water, soil, sun, beings and seeds, has been revered and embodied by the Quechua, Colla and Aymara -among other -people. In contrast, mining makes an intensive use of water in its processes and generates a huge amount of toxic residues that pile in pits, known as tailing dams, from which filtrations and particulate material pollute the air and the water. For decades, mining has been extracting and polluting the water basins that sustain the life in the desert, causing destruction in the valleys, the death of endemic species, displacement and precarisation of human communities (Quiroga et Al., 2003;Manzur et Al., 2004). In sum, transforming the desert into what it was supposed to be before: an unlivable place. In the Atacama region, in the city of Chañaral, for over 80 years the state-owned company Codelco threw mining waste to the river and the sea, increasing the coastline almost 2 kilometres offshore (Cortés, 2010). The beach has a greenish tone, a product of the minerals; the air is loaded with particulate material; the water is full of sediments that are substantially accumulated in and outside the pipes and faucets. The community has denounced the impacts for years and demanded, if not reparation, at least an acknowledgement of the affectations, in order to receive any mitigation or compensation. But in Chile, to confront the largest state-owned company (one of the few survivors of the post-dictatorship privatising wave) is like talking to a wall. There are no norms that allow a retroactive evaluation of damage, and nothing is more important than "the salary of Chile". In fact, in 2003, president Ricardo Lagos called the press to show him having a swim in the bay, asserting that the water was clean. The "green bay" of Chañaral, November 2019 All in all, mining continues to expand. The high flows of money generated through mineral export and the increased precarity of local economies generate a circuit of mutual dependency. The State needs money for social policies, people need money for living, and -in the context of neoliberal globalisation -businesses need "comparative advantages'' to justify their investments. The norms, purposely lax in their search to attract investment, and the externalisation of social-ecological costs to guarantee the profit have sustained the mining and the extractive industry (Aedo & Larraín, 2004). Worse, in Chile the social benefits are scarce and arrive slowly or not at all, because of the historical obedience to the Washington Consensus (1989) that has forced the state to reduce public spending and leave the provision of social services to the market (Quiroga et Al., 2003). On top of that, the effect of climate change is devastating and synergistic: the rain does not freeze on top of the mountains, and the water creates mudslides that drag downwards not just clay, but the mining waste that has been accumulating there. This happened in 2015 and 2017. We continue our travelling towards the centre of the country, where export-oriented agribusiness has invaded the fields historically oriented to the internal market. The food market, in the context of globalisation, moves in virtue of prices, above considerations like energy use, transportation time, the pressure it puts on ecosystems and the impacts on health (Manzur et Al., 2004;Valdés & Godoy, 2017). Governments promote monocultures to satisfy international demand, disregarding production for domestic consumption. Chile, world food power (Chile, potencia alimentaria) was an official slogan promoted in 2006. Additionally, the 1980 constitution (imposed by the dictatorship and still in place) defines water not just as a public good, but also as an "economic good". Due to this definition, the Chilean State grants property rights over the use of water to individuals and corporations, those buy and sell water without any regulation other than supply and demand over prices. This has concentrated the property of water rights among a few big agribusinesses and constrained the possibilities of life in the countryside for small and medium producers. The production of avocados, for example, has dried the basins of whole communities, taking local farmers to bankruptcy and reduced water availability below the sanitary limits (Guerrero, 2019). Just like in the northern part of the country, the destruction of the economy and local ecosystems forces people to migrate or recur to the big industry for work. But, unlike mining, the agribusiness offers temporary, unsafe and precarious jobs. Pointing towards "comparative advantages" to attract investment, norms on the use of pesticides are also very lax, working hours too long, and inspection more than deficient (Valdés & Godoy, 2017). The impact of cumulative contact with pesticide and chemical fertilizers on health goes deep to the genetic level (Valdés & Godoy, 2017). This system treats us as disposable bodies that produce and reproduce, no matter the conditions. The silence appears. The looks become sombre. It is the rage (Paz, field notes). More towards the south, the forestry industry, also export-oriented, sweeps with ecosystems and communities in a territory marked by the presence of Mapuche communities. The Mapuche cosmology, like that of other indigenous people of America, understands the world as an entanglement of human, non-human and morethan-human agents (Marimán et Al., 2006;Ñanculef, 2016) in a perspective that resonates with the affectiveand post-humanist turn (Rosiek et Al., 2019;Aedo et Al., 2017). Therefore, the destruction of water, soil and other living species due to the wide and homogeneous plantations of two exotic species (pine and eucalyptus) affects their territories, not just their "resources". The imaginary of "being Chilean" exoticizes Mapuche resistance epically, because during the conquest these people managed to defeat the Spanish army, forcing the Kingdom of Spain to form agreements on coexistence and uses of their territory. This view homogenizes and stigmatises the Mapuche people as "fighters" (peleador), admirable in the past but reprehensible in the present. Indians against development, terrorists, ignorant peasants… "Why do you complain?, you are never satisfied with anything, you want everything for free!" say the landowners, businessmen, the police, the military, the ultra-conservative right, while they displace, reduce and repress communities. The press and the governments of all political tendencies talk about "the Mapuche conflict" as if they -the "conflictive"-were the problem. In the democratic Chilean system, marked by forced consensus, conflicts are solved by suppressing them. The good Indian -as Hale (2004) calls, the "Indio permitido"-is the one that accepts the small subsidies to transform their fields for other exportable products, or that open their doors to the tourist that wants to know their exotic customs. The bad Indian -and its female counterpart, the "India Brava" (Richards 2007) -is the one that insists on their own traditions and ways, not accepting the hegemonic and colonialist role model (Toledo, 2004;Marimán et Al., 2006). Due to their disobedience towards the law, they can be legitimately repressed and even murdered. The Mapuche communities in resistance have in their count most of the dead killed by the police, of political prisoners and even disappearances during the post-1990 centre-left governments. Finally, it is important to highlight that Chile has 8 territories known as "Sacrifice In Tirúa we find one of the last non-contaminated lakes in Zones" (Zonas de Sacrificio) (Holifield &Day, 2017;Bolados & Sánchez, 2017;Maino et Al., 2019), affected by the concentration of multiple and synergic impacts from different companies and productive sectors (mainly energy and mining) present in those territories. There, small scale productive activities, like fishing and agriculture, are practically destroyed, and tourism barely survives. When there are peaks in pollution, the institutional response is to declare themselves incompetent to identify the cause and practically no sanctions are enforced because there are no legal dispositions that enable a synergic or historical environmental impact evaluation (Aedo & Parker, 2020). Besides, the Chilean norms offer a wide range of tolerance to what is considered a "severe" episode of toxicity. The boroughs of Quintero-Puchuncaví-Ventanas are one of those eight zones. Our engagement with these experiences challenged our academic training as sociologists from two universities that constitute the backbone of Chile's modernising project, the Universidad de Chile and Universidad Católica, respectively. From a researcher's point of view, we could see the need to analyse the conflicting rationalities and the failures in the Environmental Impact Assessments from a critical perspective, but we found it insufficient to give an account of the complexities of different forms of abuse and resistances. From the perspective of education, the promotion of an "Enlightened Ecologism" to support resistances had a problematic colonial undertone. From these concerns, we have come closer to the ontoepistemological perspective known as the affective turn. Inhabiting the "affective turn" Our first approximations to the affective turn were linked to the biology of knowing, developed by Francisco Varela and Humberto Maturana in their ecological-systemic approach, based on Gregory Bateson theories and research. In this frame, the experience and perspective of the world are interdependent; what we perceive as reality is a co-emergent and multiple phenomena (Varela et Al., 1992). Before separated or essential existences, mind and matter are "patterns of flux" sustained by affectation as the base of our interactions. What we understand as the rational mind is what comes to light in the last stage of emergency, from an "affective tone grounded in the body" (Varela et Al.,54, our translation). We can relate this outlook with the affects as pre-reflexive, embodied and emergent dynamics with the Spinozian and Deleuzian distinction of affect as potency: what is still about to happen in terms of "reaching the vital power of each body" (Lara, 2015, p.21, our translation). Deleuze's proposal is to acknowledge bodies in virtue of what they are capable of: specifically, overflowing the physical boundaries and conforming dynamic territories understood as "variable perimeters of action of particular potentialities" (Lara, 2015, p.21, our translation). In the encounter with others, the bodies amplify or reduce their territory, that is, the reach of their potency. Extending the concept of materiality beyond living bodies, we affirm that affects constitute swarms of movement and matter, characterised by the "permeability of the membranes between humans and those others with which it is enmeshed" (Braidotti & Hlavajova, 2018, p.16). Hence, we cannot give account of the totality of affectations nor control their emergencies. The bodies that consume polluted water; the seeds that are born from artisanal or biotechnological manipulations; the rivers that are damned and then overflow; the diseases; the resilience and the different forms of care; all conform dynamic weavings where hegemonies and resistances are actualised. In their condition of "creative, generative, mutant and mobile" swarms (Mira, 2017, p. Observing this doing, the first thing we find is that the desire to push the limits of the possible supposes the experience of impossibility. We have witnessed this where mining pollution, agrochemicals, loss of biodiversity and toxic gas emissions have constrained the potentialities of the territories until making them "unlivable". There, extractivism is configured as a policy of exploitation "that assigns value to certain lives over others. It discriminates and leaves aside the right to live and the protection of life, in exchange for progress (…)" (Sánchez, 2019, p.63). Exploitation affects all agents, human and non-human. Soils, waters, air and various forms of life are turned into merchandise within the production, export and consumption circuits. Thus, the native forest has less value than the fast-growing exotic species, more efficient at producing wood and pulp. River water must be channelled and dammed to satisfy agribusiness, so it will not be "wasted in the sea," as businessmen in the industry have explicitly stated. In the Capitalocene, all materiality that does not participate in the economic circuit is deemed useless. The same goes for the bodies of people, because Under this form of supply-chain capitalism we are incorporated in forms of selfexploitation (Tsing, 2009) and precarious working conditions (Zafra, 2017), both to sustain and to change our conditions of possibility. Under these premises, conservative and liberal sectors of the right accuse activists who fight for collective rights as being "lazy" and "wanting everything for free". At the same time, sectors of the traditional left accept precariousness and self-exploitation as a condition for political life. In the Chilean popular tradition, the Víctor Jara's song "El Aparecido" tributes Che Guevara by saying: "he never complained of cold, he never complained of sleepiness" ("nunca se quejó del frío, nunca se quejó del sueño"). Clearly, none of us could ever achieve such a de-corporealized ideal, but in the imaginary of the Latin American activist, the figure looms heavily. This creates a "loop of impossibility" that disempowers and causes pains (Aedo et Al., 2017, p.387): "I have to be able to" ("yo tengo que poder") as we hear frequently among activists; together with "what I do is not enough" ("lo que hago no es suficiente"). In our experiences inside movements, organizations and political parties, we have seen this loop favoring the emergence of two phenomena: the authoritarianism of the leader as a "savior" -as he or she is the only one able to carry the others to the right place-and the corruption of the leader as "martyr" -when she or he feel the need to compensate her sacrifice somehow-. The affections produced by the Capitalocene are also responsible for depression as a material expression of the annulment of the vital force, of the dispossession of all power. Depression can be understood as "a way to describe neoliberalism and globalization, or the current state of political economy, in affective terms" (Cvetkovich, 2012, p.11). In Chile, at the Center for Studies of Conflict and Social Cohesion (COES), at the beginning of 2019, the figure of diagnosed depression was estimated at 18.2% of the national population, a high value considered the world average (12%). "It was not depression, it was capitalism," said graffiti from Santiago during the 2019 revolts. In this political economy of affects when the malaise emerges as a collective movement it is considered an irrational event "that sprouts from a physical sensation of anger that is not under control and that pushed individuals to a multitudinous chaos of uncontrolled actions" (Foster, 2016, p.72, our translation). This outrage must be contained by public policy and the economic decisions, supposedly free of subjectivities and effervescences (Barandiaran, 2016;Aedo & Parker, 2020). The supposition of mobilization as "irrational" of a domesticable other permeates through definitions of "nature" as well. There is a good nature, that serves the interest of humans or that must be saved and preserved for contemplative enjoyment; and the "bad" and dangerous, that threatens with overflowing and that we need to manage as much as possible (volcanic eruptions, flooding, earthquakes, all very frequent in our country). At the same time, there are "good" citizens that comply with the forms of participation offered by the government and receive the benefits delivered by companies, and the "bad" that does not conform to the offers of the policy apparatus and that overflow the responsive capacity of the current government. But the conflict is not a state of effervescence but rather a phenomenon immanent to coexistence. In line with the openings of the affective turn, we need to give up on the search of the organic utopia (we are all a harmonic and non-conflictive body) and the ideal of the entrepreneur (we all meet in the efficient market). In virtue of its affects, different intensities get activated and entwined in different temporalities and places, generating events, ways of inhabiting the possible (Stengers, 2014). Thus, the importance of paying attention to events, rituals of encounter among those affected, favourable to the emergence of new possibilities of resistance and mobilization. Activism can be understood as "an art of emergency" (Stengers, 2014, our translation), art of generating the context of emergency of new events. This does not mean that it comes as a revelation or arbitrarily, but rather as triggering new possibilities. The occurrence "does not refer to an ineffable inspiration, to the sudden revelation, nor is opposed to explanation (…) Politics is an art, and an art (that) creates the manners that will enable it to become able to deal with what it has to deal with" (Stengers, 2014:33, our translation). Recognizing activism as an art, "less involved in making (1973)(1974)(1975)(1976)(1977)(1978)(1979)(1980)(1981)(1982)(1983)(1984)(1985)(1986)(1987)(1988)(1989)(1990) Affective entanglements: the Diploma's experience Trusting that "liminal situations (...) tend to be highly affectively charged [and] are enormously valuable formative experiences" (Stenner & Moreno, 2013, p.20, our translation), during the four versions this Diploma was conceived as a liminal space, marked by rituals. In each version, an average of 35 students and 30 teachers participated, meeting once a month in two-day sessions, for 8 months. As a swarm, some teachers participated as students and graduates of the Diploma were integrated in later versions as teachers. The ritual begins with the separation or rupture of the initial state, followed by an intermediate phase of passage and transition to a new condition. The sharing of the "initial state" was facilitated by actors from non-governmental organizations, social organizations, independent research and creation centers, and universities. We generated a shared context based on our analysis of extractivisms and socialecological conflicts, observing that all the baseline suppositions and historical patterns that guided the objectives of sustainable resource managementconservation of ecological water courses, agriculture, fishing and "sustainable" mining, among others-are being altered by the multiscalar and synergic phenomenon (Benson & Craig, 2014, p.2) of the Capitalocene. That context usually felt familiar or resonant to the participants, activists from different places: LGBT groups, students, workers, urban dwellers, rural dwellers, semirural-urban dwellers, student movements, feminist movements, peasant movements, environmentalists, social sciences professionals, arts, humanities, biological sciences, medical sciences, agriculture technicians, health technicians, etc. Later, all participants were invited to share their fears, paradoxes and suppositions on these conflicts. Keeping the liminal condition of the process, we welcomed all possibilities of inhabiting the "unlivable" experience in their testimonial legitimacy as a way of reclaiming a traumatic event, "finding words for what cannot be said because it was never understood or, rather because it was never 'understable'" (Koch, 2017, p.160). This space included exercises on active listening, keeping a personal diary, cartographies and mappings. Mapping exercises class, August 2016 Building on the trust generated by the sharing of testimonies, we approached the material and embodied dimension of affections, holding the question: what do our activist bodies do? Thus, we developed exploratory movement exercises, based on collaborative games, theatrical performance, contact dance, biodance and martial arts. We also did short residencies, where the group developed a practice of community work in the organization of one of their classmates. These works allowed us to exercise listening and reading of our experiences, observe our differences and go through conflicts. We verified that "if the body learns, if it can embody different corporealities and if these are what determine how we are together, it makes sense to think of training favorable to the (political) project" (Pérez Royo, 2016, p.16, our translation The experimentation spaces ended up in "rounds of thought", where we opened up a space for the expression and realization of a plurality of interconnected ethicalpolitical desires, such as "relational swarms" (Teles, 2009, p.119, our translation). Unlike political assemblies or discussion groups, in the rounds "the capacity of thought of the people that make it up is revealed, the ideas that are created and expanded, the possibility of approaching daily problems from different perspectives to those habitual" (Teles, 2009, p.126, our translation), producing affective relationships that did not homogenize multiplicity and therefore did not inhibit conflict. Thus, we talked without seeking consensus or generating agreements, only by sharing our reflections about what was happening to us. The delimitation of these milestones (sharing, testimonies, exploration of movements, residences and rounds of thought), allowed us to go through processes of personal and collective transformation without putting the participants at risk, carefully staging the liminal process. Without a delimiting structure for emergencies, "liminality can be chaotic, disorderly, dangerous and destructive. Instead of being a formative experience, transitions can be deforming" (Stenner & Moreno, 2013, p.25, our translation). With this precaution, we manage to stage affective events without prescribing results. What emerged in this space? In this initial rupture, something highlighted repeatedly in group dynamics and personal diaries was the trust they felt to share their experiences from the non-perfection. From the recognition of affectations and vulnerabilities in this context of trust, we learnt that modern hegemony and the utopia of sustainable development affect not just the predominant order but also the imaginaries of the resistances. Supposing the arrival of a right time of human-nature articulation as a harmonic and organic whole implies also a "correct agent of change". The agents must be infallible, impeccable, unimpeachable; but above all, aware and coherent, that is, rational and enlightened. When entering the Diploma, many activists rejected their fallibility: tiredness, the times they fail, the times they fell in authoritarian practices, not being on the front line all the time, wanting to quit. They also felt ashamed if their pleasure was "complicit" in what they question or reject. In line with the economy of affects mentioned above, there was a feeling that "we are never good enough, but still we should try to be so" (Paz, field notes). In opposition to what would be expected from a traditional environmental education or a leadership training course (aimed at "empowering", "consciousness development" and delivering a set of tools to get closer to the ideal) we explored the materiality of these testimonies through testimonies and movements, observing and inhabiting the paradoxes of our being activists and suspending what we "should be". In the rounds of thought, through observation and sharing what was happening to us, we discovered that recovering the complexity and non-linearity of activism was consistent with inhabiting the complexity and non-linearity of social-ecological systems (Benson & Craig, 2014, p.2). This finding allowed us to open other ways of exploring the limits of the possible as activists, taking the right to decide how far to go, asking for help, rest, enjoy, change course. So, we learned that the construction of other possible worlds does not assume that their agents are in perfect control and knowledge about himself and the world; precisely because this reduction of complexity and illusion of control is in the heart of the current crisis (Stengers, 2014, p.40). Overall, it was possible for the participants to explore what they had denied themselves of under the ideal of how they should be. A comrade from a Sacrifice Zone acknowledged that she wanted to leave and to stay, at the same time. We heard a comrade and then teacher, an urban Mapuche woman, share her affinity with diverse cosmologies, dismantling exoticizing suppositions. Some comrades that feared their own sadness and rage, were able to explore them as mobilising affects, also necessary for life and especially for resistance. Comrades installed in the stereotype of the Enlightened and strong community leader were able to explore tenderness, listening and silence. And so on. The experience of multiplicity and resonance allowed each participant to know herself as part of a non-unitary and non-homogenous network, and conceive their political practice as an event not reducible to planning, individual will, or utopia (Aedo et Al., 2017). This process offered to each participant to move from the learned insufficiency to the acknowledgement of dignity as inherent to their existence; not subjected to good behaviours. Furthermore, we expanded our vital force and created a desire to keep meeting each other, to continue affecting and exploring possibilities together. And this is what is happening in this long run, some leave, others arrive in our lives, and so it is how we all receive small pieces of other people that we never met, but that are somewhere in the person that is in front of us today, and so we will continue meeting others, and giving them small pieces that were given to us, to me the interconnection is there, among all of us and everything (Personal report #16, anonymized, 2015) I move with them, with my peers, to generate networks of linked worlds with others to create collectively, share learnings, pieces of knowledge, and experiences (Personal report #19, anonymized, 2015) It made the emergence of a common ground possible: three years after the last version of the Diploma, we and more than a third of the graduates stay in touch regularly, through virtual platforms and in some cases in shared projects that have grown in the warmth of the encounter. Even though every year there were graduates that proposed the creation of a formal organising body, in practice the weaving has emerged without central management and without planning. The network emerges and it is sustained by affectation and mutual support desire. It does not have a name or norms, it also does not demand commitment. It activates and deactivates following the interaction and positions of its members. It does not have a territorial affiliation; its members inhabit and move through different places and zones in conflict. Due to the multiple affinities and convergences, there are multiple encounters among participants: meetings, conferences, courses, projects, litigations, impact assessments, researches, restoration process. There is no competing, there is permission to fall, get tired, say help and rise up again. It is possible to be and not to be, go out and then back again, even not going back ever again, because nothing happens uniformly. If we were a centralised and normed organisation, these discontinuities would be threats. Since we are not, there are no successes and failures: just emergencies in the swarm, those don't need to fill all the gaps. Also, the swarm support our resistances at critical moments. In the middle of the wave of protests and repression in October 2019, it held a space of information, contention and alert. In the middle of pandemic-induced lockdowns, we have been generating spaces of mutual support and encounter. "You are not alone, you carry all of us wherever you go, count on us, your presence is enough!" says the collective presence. And because we know we are worthy, we demand and embody our dignity at the same time. Continuations Our relationship with the "affective turn" is marked by our history as activists, educators, and researchers. Starting from our own discomfort and paradoxes, we wanted to challenge the traditional experiences of training in the environment and sustainability, focused on the construction of "conscious and enlightened" subjects. Our Diploma offered a space where to deal with the affects and the corporeities constituting the resistances that we met; and where we can recognize ourselves in our needs for respect and care. We can highlight at least three central learnings of the process. First, the construction of a space in which to share and inhabit the discomforts, instead of avoiding this phase and rushing to find solutions. Secondly, in this present, embodied and shared experience of testimonies, movements, residencies and rounds of thought, we blur the activist "role model" as a hegemonizing structure and by resonance, we find that all hegemony has cracks. And finally, based on shared experiences, we witnessed and contributed to the emergence of a network as a "community", not as something that is, but as something that happens between those who recognize themselves not next to each other (or one above the other) but entangled. An activist community lived as a swarm that "although moving towards a goal, experiences everywhere a turn, a dynamic presence of others, a flow." (Stenner & Moreno, 2013, p.22, our translation). It could be argued contrariwise that it is not possible to resist the extractivism impacts and build political projects "only" on occurrences because we need regularity, projects, plans. And that it is impossible to make politics starting from small limited experience of community or from the swarm. But this argument reduces politics to the repetition of sequences, to evolution or development of new organics and strategic calculus. Activism, as an art, plays with movements, elements, contexts and experiences for the emergence of unpredictable events while actualize the civilisational course. This is precisely why they are so fundamental. Indeed, revealing the micropolitical importance of activism does not mean denying the importance of macropolitics, nor refusing to investigate and reflect on formal structures of resistance (organizations, corporations, political parties). Rather, we trust that it enriches its approach, by revealing the immanent fissures of all hegemony (Rolnik, 2019). Furthermore, we suggest the affections of the extractivism at the Capitalocene involve human and non-human agencies; and we recognize our focus on the human affective experiences, as the malaise of the unlivable. We acknowledge that this intersection must be developed further and we visualize these gaps as open paths for research and learning. Finally, we offer our story as a contribution to explore the possibilities of academic and activist involvement in our troubled times. Sharing of our own affectations and trajectories allows the emergence of new affective events; and we need contexts to stage it. Having the experience of activism as an art of the emergent and our resistances as a swarm, is fundamental to inhabit the Capitaloceno with less individual despair about the crisis. This is how we lived the beginning of protests and social agitation at the end of 2019. Non-predictable occurrences, but also not unexpected for those of us that were, from activism, invoking and embodying the fissures of hegemony in many ways, from many places. And we know that it is impossible to predict where the current crisis will take us. But, have we ever known?
v3-fos-license
2023-01-29T16:14:54.496Z
2023-01-01T00:00:00.000
256352470
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://discovery.dundee.ac.uk/files/94664726/20563051221150407.pdf", "pdf_hash": "4246104a9e8c1932ea162c32774351c2dc014c11", "pdf_src": "Sage", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46776", "s2fieldsofstudy": [ "Computer Science" ], "sha1": "76de65714076327f9a4fd98f9800fb73b282419a", "year": 2023 }
pes2o/s2orc
Developing Misinformation Immunity: How to Reason-Check Fallacious News in a Human–Computer Interaction Environment To counter the fake news phenomenon, the scholarly community has attempted to debunk and prebunk disinformation. However, misinformation still constitutes a major challenge due to the variety of misleading techniques and their continuous updates which call for the exercise of critical thinking to build resilience. In this study we present two open access chatbots, the Fake News Immunity Chatbot and the Vaccinating News Chatbot, which combine Fallacy Theory and Human–Computer Interaction to inoculate citizens and communication gatekeepers against misinformation. These chatbots differ from existing tools both in function and form. First, they target misinformation and enhance the identification of fallacious arguments; and second, they are multiagent and leverage discourse theories of persuasion in their conversational design. After having described both their backend and their frontend design, we report on the evaluation of the user interface and impact on users’ critical thinking skills through a questionnaire, a crowdsourced survey, and a pilot qualitative experiment. The results shed light on the best practices to design user-friendly active inoculation tools and reveal that the two chatbots are perceived as increasing critical thinking skills in the current misinformation ecosystem. Introduction One of the major challenges of the current information ecosystem is the rapid spread of misinformation through digital media. Differently from disinformation, misinformation can be misleading despite the intention of its authors/spreaders (Carmi et al., 2020). However, this does not make it less dangerous due to its wide societal impact. For example, according to the RISJ 2020 fact sheet about the types, sources, and claims about COVID-19 information, 59% of fake news contains neither fabricated nor imposter content, but rather reconfigured misinformation (Brennen et al., 2020). Similarly, Allen et al. (2020), through the analysis of a multimode dataset of news consumption in the United States, show that blatantly false fake news constitutes 0.15% of Americans' daily news diet, while misinformation driven by agenda setting in mainstream media is largely understudied and misrepresented, together with news avoidance. As underlined by the newly published Reuters Digital News Report (Newman et al., 2022), the proportion of news avoiders has sharply increased across countries. This includes a significant portion of young people and people with lower educational attainment who blame news media for being hard to follow or understand, especially in cases when the information is de-contextualized or confusing language is used. Misleading information may be communicated by authoritative sources, such as reputable news media outlets or institutional websites (Kyriakidou et al., 2020;. Thus, citizens' skills in assigning trust values to mainstream media sources, hyperpartisan ones, and fake news websites, despite appearing quite well developed in experimental environments (Pennycook & Rand, 2019), are not well enough developed to identify misinformation across different media platforms and in everyday interactions. The situation is exacerbated in today's networked society where information is repurposed from one application to another or centralized among other sources through information aggregators. Furthermore, in a context of epistemological uncertainty, such as the pandemic or the Ukrainian War, besides authority and objectivity, three other criteria are recommended for information evaluation (Metzger, 2007)-accuracy, currency (whether the information is up-to-date) and coverage (comprehensiveness of the information provided). This requires additional effort when attempting to verify information. Overall, this makes it difficult for people to identify misinformation and distinguish trustworthy from misleading news within a social media message and leaves citizens vulnerable. As a result, misperceptions have caused significant downstream consequences across multiple domains. For example, in health they have prevented the timely adoption of measures and treatments to counter the epidemic (Freed et al., 2010;Starbird et al., 2020). In relation to climate change, they cause/sustain climate damaging behaviors (McCright & Dunlap, 2011); and in political decisions, where they have helped shape justifications for wars such as the invasion of Iraq (Kull et al., 2003). Unfortunately, the identification of misinformation is far from being successfully addressed by human fact-checkers, let alone automatic ones where the lack of a common "truth barometer" hinders the creation of datasets to train automatic systems. Thus, debunking through fact-checking, involving the post hoc correction of misleading content circulating through digital media, is far from an efficient means to counter the fake news phenomenon. This situation has brought to the fore the importance of prebunking, which involves preemptively raising citizens' awareness of mis/disinformation techniques to make them resilient toward fake news. So far, inoculation-the exposure of people to weakened doses of techniques used to spread fake news to generate mental antibodies-has proved to be the most effective way of prebunking (Lewandowsky & Van Der Linden, 2021). More specifically, active inoculation (Saleh et al., 2021) through engagement in a digital game has turned out to be particularly effective for cognitive reasons (Pfau et al., 2005). In light of this, we have designed two chatbots, the Fake News Immunity chatbot (http://fni.arg.tech/) and the Vaccinating News chatbot (fni.arg.tech/?chatbot_type=vaccine), to interactively teach citizens and communication gatekeepers, respectively, how to become their own fact-checkers and how to avoid creating and spreading misinformation through the identification of fallacious arguments (see section "The Fake News Immunity and the Vaccinating News Chatbots"). Newsmaking, especially in situations of epistemological uncertainty such as the pandemic, involves the argumentative process of gaining the acceptance of a certain interpretation of a news event. An assessment of the quality of arguments which constitute a news claim is, thus, a key factor to exercise critical thinking for the identification of fake news. More specifically, the recognition of arguments which seem valid, but are not-fallacies-results in the gray area of misinformation, where the information conveyed might be factual, but presented in a misleading way through strategies such as cherry picking, false analogies, and hasty generalizations (Musi & Reed, 2022). The two chatbots differ from existing tools in their scope over misinformation rather than disinformation, in the data-driven selection of the scenarios as well as their multiagent infrastructure and frontend features. After having introduced related work and described the design of the two chatbots, we report on their evaluation, focusing both on user experience and efficacy on advancing reason-checking among their users. We do so through the combination of a quantitative online questionnaire, a crowdsourced survey, and two workshops in a qualitative environmental setting. As a result, a preliminary framework to evaluate critical thinking skills tailored to the information ecosystem is provided. 1 From Debunking to Prebunking To counter the spread of fake news, two main types of interventions have been put into place: debunking, the retroactive correction of false beliefs, and prebunking, the preemptive exposure to misinformation and disinformation techniques or sources before they strike. Recent studies have cast doubts on the efficacy of debunking for a plethora of reasons. Park et al. (2021) have, for instance, revealed through a set of randomized surveys (overall sample: 1,145 young adults), that the positive effects of fact-checking are reduced by perception and belief biases. They registered a widespread reluctance to change views when fact-checking reveals that claims initially perceived as negative are true (self-correction bias) and a perception bias in interpreting messages flagged with the rating "Lack of Evidence" as closer to be false than claims marked as "Mixed Evidence" due to our cognitive uncertainty-aversion. As for long-lasting effects, Carey et al. (2022), through a large-scale survey in Canada, the United States, and the United Kingdom show that exposure to factchecks reduced beliefs in false claims with no spillover effects, but the improvements in accuracy judgments already dissipates after a few weeks. Regardless of the complexity of our cognitive systems, it is intuitive to think that corrections are more effective when the arguments supporting them are made transparent. For example, a fact-checker rating such as "'mixed" or "divided evidence" is not very informative unless readers are explained why the evidence provided is not enough, or potentially misleading. In other words, foregrounding and explaining the roots of the distortions is more effective than merely flagging them, since it allows prebunking across contexts (Van der Linden et al., 2020). The effects of prebunking promise to be less ephemeral since they enhance critical thinking skills which are neither space-nor time-bounded (Tay et al., 2021). The underpinnings of effective prebunking rest on "inoculation theory" which states that "if people are forewarned that they might be misinformed and are exposed to weakened examples of the ways in which they might be misled, they will become more immune to misinformation" (Lewandowsky & Van Der Linden, 2021, p. 348). Three main ways to prebunk have so far been used (https://tinyurl.com/2fd9yhtc): fact-based, logic-based, and source based. It is clear that the skills allowing to disentangle factual information from fakery, identify misleading rhetorical techniques as well as recognize trustworthy sources are core to the exercise of critical thinking, advocated by policymakers as a key pillar for media literacy (see UNESCO Media and Information Literacy: Policy and Strategy Guidelines). Comparing various types of intervention, extant research has shown that instructing against misleading rhetorical strategies is a highly efficient way to boost people's resilience against fake news (Cook et al., 2017) and that content features such as degree of novelty and emotional reactions of recipients are key factors in making fake news spread fast. Such an effort requires first of all a theoretical framework to surface flawed rhetorical techniques and arguments as well as a heuristic to make them publicly recognizable. Furthermore, in an information society where the medium is more and more the message, the venues that are used to inoculate against misinformation also play a crucial role. Critical Thinking and Argumentation for Media Literacy The importance of critical thinking skills for media literacy has been widely acknowledged. Koltay (2011) lists, for example, "Having a critical approach to quality and accuracy of content" among the five stages to build media literacy" (p. 213). Going one step further, during the opening speech of the EC (European Commission) Media and Learning Conference (10 March 2016), Roberto Viola points out that "a key pillar in all possible definitions of media literacy is the development of critical thinking by the user or citizen." Critical thinking has been an object of discussion across disciplines ranging from Philosophy to Psychology to Informal Logic and has become a buzzword in pedagogical settings since the late 20th century (Goodnight, 2009), leading to a proliferation of definitions (e.g., Ennis, 1989;Hatcher and Spencer, 2005;Paul, 1981). In the "Delphi report," Peter Facione attempted to reach consensus, gathering together 46 scholars who came up with the following definition: "We understand critical thinking to be purposeful, self-regulatory judgment which results in interpretation, analysis, evaluation, and inference, as well as explanation of the evidential, conceptual, methodological, criteriological, or contextual considerations upon which that judgment is based" (Facione, 1990, p. 2). However, an operationalizable definition of what critical thinking means when applied to data literacy is still missing, together with a methodology to assess critical skills by news consumers. To tackle this issue, we propose to view newsmaking as a process aimed at gaining the acceptance of a certain interpretation of a news event. In this perspective, newsmaking is, thus, a form of argumentation, intended as "a discourse aimed at convincing a reasonable critic of the acceptability of a standpoint by giving reasons that justify the standpoint" (Van Eemeren & Grootendorst, 2003, p. 1). A critical assessment of the presence and quality of arguments (fallacious or not) provided in support of a news claim, thus facilitates the identification of misleading news. Learning how to identify fallacies in news through the Fake News Immunity Chatbot offers an opportunity to inoculate against fake news. In line with Mercier and Sperber's (2017) argumentative theory of human reason, we believe that dialogical contexts where participants argumentatively exchange information are more likely to facilitate the acquisition of reasoning skills than monological ones. However, a measurable evaluation of the impact played by learning how to recognize fallacious arguments on identifying misinformation is challenging: a citizen might, for example, learn to be suspicious about causal relations since they might be simple correlations, but still believe in a news stating a causal explanation in the absence of contrary evidence. Furthermore, fallacious arguments are more or less hard to identify, depending on the news context and personal knowledge about that context. Thus, large-scale surveys measuring changes in truth values assigned to a set of news, frequently used to evaluate digital tools (see section "Active Inoculation Through Digital Tools"), do not allow to verify acquired critical thinking skills which relate to the epistemic process rather than the product (e.g., truth judgments). In light of this, we propose an impact evaluation framework which combines quantitative and qualitative components (see section "Impact Evaluation"). Active Inoculation Through Digital Tools Active inoculation, differently from passive inoculation prompt engagement: differently from passive inoculation, where both counter-arguments and refutations are provided by the recipient, in active inoculation it is the participants who must produce both pro-and counter-arguments themselves. To enhance the fight against fake news, a suite of digital tools in a gamification environment has been developed. Online quiz-based games such as Fakey (https://fakey.osome.iu.edu/v) and NewsWise (https:// tinyurl.com/5bmau7jk) have the goal of teaching users how to recognize misleading sources or headlines through trial and error, while Real or Photoshop quiz by Adobe focuses on the identification of fake images. BBCireporter (https://tinyurl.com/2jnnctrd) and NewsFeed Defenders (https://tinyurl.com/ms4nar5) instead, put users in the shoes of communication gatekeepers, simulating their decision-making processes. A more sophisticated generation of digital games is showcased by tools such as Bad News (https://www.getbadnews.com/#intro), Go Viral! (https://www.goviralgame.com/books/go-viral/) and The Harmony Square (https://harmonysquare.game/en), which cast the player as the "fake news spreader" who learns by doing successful misleading strategies used by disinformators. Impact evaluations have shown that these tools are highly advantageous. The findings of Basol et al.'s (2021) study to evaluate the efficacy of the Go Viral! game on 1,777 players reveal an increase in skepticism toward both real and fake news right after playing, but an enhanced immunity toward disinformation only after 1 week. The Fake News Immunity and the Vaccinating News chatbots differ from state-of-the-art active inoculation tools since they target misinformation rather than disinformation. As such, their primary goal is not teaching users how to disentangle true from fake information-a task not always feasible to perform in crisis situations where information is provisionary (e.g., side effects of a vaccine)-but rather to learn what questions to ask to critically consume and create news. We propose to do so by (a) applying the notion of fallacy to the identification of various types of misinformation through a novel heuristic based on critical questions (Musi & Reed, 2022) and (b) leveraging a large-scale data analysis to select those scenarios that turned out to be more prominent and, hence, potentially more dangerous. As remarked by Almalki and Azeez (2020), a plethora of health chatbots have been developed during the pandemic to "disseminate health information and knowledge; self-triage and personal risk assessment; monitoring exposure and notifications; tracking COVID-19 symptoms and health aspects; combating misinformation and fake news" (p. 244). The latter group of chatbots counter misinformation mostly offering accurate, tailored, and easy-to-access correct information (Altay et al., 2021;Herriman et al., 2020;Siedlikowski et al., 2021), rather than teaching citizens how to recognize misinformation in messages spread across digital media. Although not addressing the fake news phenomenon, digital tools have been built by the scholarly community to enhance critical thinking through argumentation. More specifically, various computer software packages (e.g., Araucaria, Reed & Rowe, 2004;iLogos, Harrell, 2008;Rationale, Martin Davies, 2009;ter Berg et al., 2013) have been created to support argument mapping through visualizations. The educational efficacy of the argument mapping software Rationale has been tested across domains ranging from English as a Foreign Language Context (Eftekhari et al., 2016) to business education (Kunsch et al., 2014). Going one step further, LiteMap (De Liddo & Strube, 2021), is a collaborative tool that besides argument mapping enables visual summarization to help sensemaking of online public debates. With the aim of preventing the formation of misconceptions about genetically modified organisms (GMOs), Altay et al. (2022) have developed a chatbot to provide participants with good arguments rebutting the most common counter arguments against GMOs. Finally, the ArgTech research center (https://arg-tech.org/), has showcased how argument technology can be applied to the media sphere, teaching how to improve debate skills (Test Your Argument, https://www.bbc.co.uk/taster/pilots/moral-maze) and appraise argumentative structures in news reports (Evidence Toolkit, https://bbc.in/2FFNQen) with the goal of instilling those critical literacy skills needed to reduce polarization and strengthen communication persuasive skills. Drawing on these preliminary results, our chatbot positions argument technology at the forefront in the fight against misinformation. From Fact-Checking to "Content-Checking" to "Reason-Checking" Increasingly, the challenge for the general public and specialists alike is shifting away from mere checking of "facts." In the first place, nuance, subtlety, and open-texture make the veracity of statements that are in principle verifiable much fuzzier than a simple true-false distinction. Claims depend upon context, definitions, deixis, and more, all of which may or may not be explicit, and may lead to significantly different judgments of reliability. As a result, many fact-checking organizations do not check facts as much as provide interpretation, contextualization, and exegesis. Increasingly, they focus not on the ways in which truth is attached to a claim, but on the ways in which truth is maintained or eroded along the passage to a claim from its evidence. The awareness that exposure to facts is not a solution to disinformation spread has induced scholarly communities building digital tools for enhanced content curation: the platforms SadView (https:// imi-sad.pages.switch.ch/sadview/) and Newteller (https:// newsteller.lsir.ch/) developed by the Media Observatory Initiative (EPFL), respectively, enable journalists and citizens to monitor the propagation of controversies across social media leveraging social network analysis and offer context for news articles combining content, social, and source indicators. Acknowledging that citizens tend to passively adhere to news feeds suggested by social media algorithms, Horne et al. (2019) develop a Trust Nudging Model through a recommendation system that nudges people to make better news consumption choices. In view of the role played by emotion in news interpretation, Sethi et al. (2019) propose a recommender system explaining interface where users' emotional profiles are factored in the interaction with pedagogical agents who compare and contrast various stances of an issue. This shift toward a focus on a more relational notion of fact-checking and content curation goes hand in hand with a rise in the role played by critical thinking (Johnson & Blair, 2006) in countering misinformation. For models of critical thinking uniformly reject absolutist notions of truth in favor of contextualized, relativistic conceptions of goodness both epistemologically and inferentially. Thus, for example, acceptability (to an audience), relevance (between pieces of information), and sufficiency (of evidence for claims) substitute for deductive validity, and as a result naturally usher in an approach that focuses upon relations between pieces of information, and between information and context. Techniques of critical thinking have long been explored in computational environments to provide scaffolding for better quality reasoning in domains such as law (see, e.g., Gordon et al., 2007) and politics. Recently, however, they have been placed front-and-center in a wide-scale deployment of software for the general public, to support an educational program in media literacy with the BBC in the United Kingdom (Visser et al., 2020). The focus in that work is not upon factchecking, but rather upon reason-checking-using theories of critical thinking to scaffold the investigation and interrogation not of claims, simpliciter, but of the connections between claims and their evidential context. It is such a shift of focus that underpins our attempts here to develop tools for enhancing fake news immunity. Let us consider an example that is part of the knowledge base we created for the Vaccinating News Chatbot. One of the four main learning outcomes of the chatbot is that of selecting non-fallacious sources for drafting an editorial. Zooming into the topic of "politicizing the vaccine," the user is asked to write about Amazon's offer to help with the U.S. government COVID-19 vaccination program. One of the first steps in a journalist's activity is that of picking a set of sources to draw upon. To simulate such a procedure a pool of four sources mixed as to origin (social media vs. official news source) is provided: 1. Source 1: https://archive.is/MQGVE 2. Source 2: https://www.whitehouse.gov/briefing-room/ press-briefings/2021/01/21/press-briefingby-press-secretary-jen-psaki-january-21-2021/ 3. Source 3: https://www.foxnews.com/us/why-didamazon-wait-until-bidens-inauguration-to-offerhelp-with-vaccine-distribution 4. Source 4: https://twitter.com/amazonnews/status/ 1351991663191871491 Regardless of the digital venue, both the second source (official transcript of a White House press briefing by Press Secretary Jen Psaki) and the fourth source (tweet linking to Amazon's letter to Biden declaring their intention to assist in the vaccination efforts constitute) offer accurate information to give respectable voice to the governmental response to Amazon's move and Amazon's perspective on the matter. However, selecting bits and pieces of these sources could result in "cherry picking" behavior and foreground facets, which suggest a defeasible interpretation of state affairs not far from the one expressed by Source 1. The tweet, factchecked by Snopes, does not contain non-factual-information per se, but it puts forward a misleading interpretation of facts: the fact that Amazon announced their help after Biden's inauguration does not mean that they did it because of Biden's inauguration, so that Biden takes credit for it instead of Trump. A similar interpretation, even if not asserted but simply alluded to by the question in the title ("Why did Amazon wait until Biden's inauguration to offer help with vaccine distribution?"), is suggested by the Fox News article. Such an instance of post hoc fallacy, establishing a causal connection when a simple correlation is at stake, is not a matter of facts but calls for a critical evaluation of the inferential links linking available evidence to the standpoints put forward. Design The design of the system is founded upon three tenets: first, that identifying misinformation rests critically upon critiquing the passage from premises to conclusions, from evidence to claims; that is, upon processes of reason-checking; second, that a powerful mechanism for reason-checking is to actively engage in dialogue, in multi-perspective exchange that puts inferential steps under a dialectical microscope; and third, that the process of dialogue can be conceptually and practically disentangled from the informational substrate over which it acts. These three tenets are explored first empirically through an analysis of data-informed cases, and second, through the design of the computational infrastructure by which such dialogue can be mediated and executed. Data-Informed Cases. The selected cases of misinformation addressed by the tools are news that actually circulated across digital media. This choice is reminiscent of the fact that authentic problems are a crucial factor when teaching critical thinking (Abrami et al., 2015). More specifically, the chosen cases come from a dataset of 1,500 news web-crawled from five English fact-checkers (Snopes, The Ferret, Politifact, Healthfeedback.org, Fullfact) in two time spans: from January 2020 till June 2020 (1,135 news items) and from September 2020 to December 2020 (365 news) to include news about the vaccine. This dataset has been systematically analyzed through a multilevel manual annotation encompassing (a) type of semantic claim expressed in the headline, (b) type of source (e.g., social media) for the entire dataset, and (c) type of fallacies. The statistical analysis of the results shows that while social media are privileged sources for disinformation, misinformation is spread across the board and that a set of 10 fallacies emerged from the data analysis allows to explain the misleading roots of the attested misinformation cases (see https://tinyurl. com/2p86ptxs for an explanation of the fallacy types). While some fallacies (e.g., evading the burden of proof) are significantly more frequent than others (e.g., false analogy), different types of fallacies do not pattern significantly with different types of sources. However, the interlevel analysis suggests a significant correlation between type of fallacies and type of claims where interpretations pattern with false cause; evaluation emotional with false analogy and predictions with evading the burden of proof. Drawing from this analysis, we have selected from the dataset misinformation news items with the most significant configurations of features (e.g., prediction claim-evading the burden of proof fallacy-social media source), assuming that they would resemble actual news read by citizens. More specifically, we have chosen 20 news for the Fake News Immunity chatbot and 16 for the Vaccinating News Chatbot. In the design of the Fake News Immunity (FNI) chatbot, to diminish bias in the news topic due to fact-checkers' editorial choices, we have picked the same number of news (4) from each fact-checkers for the Fake News Immunity chatbot. For the Vaccinating News Chatbot, we have first identified 4 popular topics related to the vaccine according to the World Health Organization (WHO; adverse reactions to vaccine; vaccine, immunity and transmission; vaccine manufacturers; politicizing the vaccine) and we have then selected 4 news from each topic evenly distributed across fact-checkers. Infrastructure. The FNI chatbot and the Vaccinating News platforms can be conceived as computational executions of dialogues, in a gamification format. The infrastructural architecture is represented in Figure 1. The structure of the dialogue game, written in the tailored dialogue game programming language DGDL (Wells & Reed, 2012), is detached from the knowledge over which the game is to be played. This makes updating and revising the underlying data a straightforward task that is independent of the structure of the interaction. To design the frontend of the Fake News Chatbot and the Vaccinating News Chatbot we used gamification principles that have been proved to be advantageous to enhance critical thinking (Stott & Neustaedter, 2013) which include (a) freedom to fail, (b) rapid feedback, (c) sense of progression, and (d) storytelling. Starting from the latter, we have chosen as a setting ancient Greece through the aid of multimodal features in the graphic design and the choice of Aristotle, Gorgias, and Socrates, fathers of critical thinking, as avatars. To allow for a sense of progression we have created a reward system where players receive a "gadfly" in their jars whenever they accomplish eight correct answers. Furthermore, the Fake News Chatbot contains three levels of increasing complexity (credulous, skeptic, and agnostic), while the Vaccinating News Chatbot allows for user to progressively select different tasks (write fallacy-free headlines; select fallacy-free sources; write fallacy-free articles; and write fallacy-free news on social media). No penalties are involved in the scoring system while each conversational turn by the user is followed by a prompt reaction from one of the avatars, to whom the user is allowed to ask for help at any stage of the decision-making process. Besides the three philosophers, a fourth avatar is a member of the research team that is selected by the player. The locution types expressed by the avatars are typified in accordance to their philosophical personalities: Socrates asks maieutic questions (e.g., "Does the news express an unassailable fact?") aimed at eliciting doubts and new concepts previously latent in the users' minds when reading a news; Aristotle explains notions and concepts through assertions (e.g., "An argument is relevant if it provides information that makes the claim more likely to be true"); Gorgias challenges users' answers as well as common ground opinions through rhetorical questions and witty comments (e.g., "no other opinions are mentioned, how can the post criticize someone else's opinion?"). This stylistic choice is motivated by three main factors: first, interacting with the philosophers' users inductively learn their dialectical techniques acquiring historical knowledge; second, research shows that building software agents as dialogical personas increases users' engagement (Tsai et al., 2021); third we wanted to test (see feedback questionnaire section "Conclusion") what character and, hence, dialectical style, is preferred by users. The structural rules underlying users/avatars interactions respond to the two chatbots learning outcomes (learn how to reason-check through fallacies; learn how to write fallacy-free news). Both the Fake News Chatbot and the Vaccinating News Chatbot start with a request to the user to assess the reliability of a news article explaining their rationale, paired with access to the fact-checker's verdict. After this self-assessment moment, in the Fake News Chatbot, the user is fronted with instances of news and guided by Socrates through heuristics meant to teach users how to identify potential fallacies. The heuristics is, in fact, composed of a set of critical questions, which are conceived in Argumentation Theory (Walton et al., 2008) as those questions that scrutinize the soundness of the reasoning expressed by the arguments (e.g., "Is the reported evidence [if any] the only available?" to verify whether cherry picking is at stake). The user is asked to take dyadic choices (yes/no) as an answer and (s)he is explained in detail the reasons underlying the right choice when the incorrect answer is picked (Figure 2). In the Vaccinating News Chatbot the user, who is meant to simulate the decision-making processes of a journalist/communication gatekeeper, has a more agentive role: she has to select an option out a series available (e.g., select one headline out of five) and justify the choice, while being challenged/prompted by the avatars (Figure 3). Impact Evaluation Since their launch in November 2020, the Fake News Immunity and the Vaccinating News Chatbots have registered 1,700 users across 10 countries (United States, 490; United Kingdom, 375; Italy, 122; Germany, 78; Netherlands, 56; Switzerland, 53; Canada, 39), with an average engagement time of 3 min and 14 s. From the first question, it emerges that half of the users were able to fact-check either three or four news items, having spent more than the average engagement time on the chatbot (we estimate 2 min per news item). As for conversational intelligence, we received overall positive feedback with some hints for reflection: around half of the respondents considered both conversational rhythm and tone "just right," while the majority of the other were scattered, respectively, across "slow" (17%)/"fast" (25%), and "formal" (25%), "informal" (14%), showing that the way conversational flow is perceived is highly subjective. Our fourth question was meant to assess the design of the multi-software agents' interaction: to simulate a peers' discussion, we did not limit software agents' conversations to interactions with the users (1->many), but we added conversational turns between the software agents. Users found the fact that "Sometime the AI participants talked amongst themselves" to be interesting (42%), informative (28%), and a minority confusing (26%) or boring (4%), suggesting that multiparty conversations shall be further explored in multiagent chatbots. Overall, the perceived active participation by the users could still be improved since 41% of users felt that their participation was "just right" and 26% felt "active," but 15% rated their participation as "sometimes active" and 18% as "inactive." To increase perceived agency, we are planning to allow for more unconstrained questions on the part of the user. As shown by responses to Q6, almost half of the respondents (85) agreed with the statement, "Sometimes I did not feel ready to choose yes or no . . . The world is not black and white!," highlighting the difficulty of making straight diadic choices. Zooming into chatbot's personality, the most favored software agent (Q7) is Aristotle (42%) followed by Socrates (36%) and Gorgias (22%). This line of preference matches with the choice of adjectives picked by the users to motivate their choices (Figure 4). While all the avatars have been construed as to portray reliability, Aristotle, qua father of fallacy theory, has been presented as the most knowledgeable and, together with Socrates, smart. Although the most humorous, Gorgias' unpredictable and provocative personality traits have turned out not to be the most appreciated. This might be due to the disclosed educational nature of the chatbot, which positions the avatars in a pedagogical setting as teachers rather than peers. Another possible explanation lies in the reputation cognitive heuristics according to which "people are likely to believe a source whose name they recognize as more credible compared to an unfamiliar source" (Metzger & Flanagin, 2013, p. 214). Both Aristotle and Socrates are names of philosophers which the majority of users would recognize, while the same does not apply to Gorgias. The analysis of the open answers to Q8 ("What do you think are the three most important qualities in a teacher?) reveals, in fact, that the semantic domain of knowledgeability (tokens "knowledge," "someone who knows"; "knowledgeable") is the most frequent (50 mentions). Furthermore, the avatar considered most trustworthy by the participants who answered question Q11 ("Which participant looks more trustworthy? And why?") is again Aristotle since "knowledgeable" and "intelligent." These results seem to challenge a key component of the social media trust framework, namely, that social media users tend first of all to trust their peers rather than institutions, equating familiarity with credibility (Shareef et al., 2020), what , call "networks of literacy." However, this might not be the case in crisis scenarios or educational settings in which familiarity, a behavioral component of trust, does not reduce uncertainty, while competence does. A supplementary survey is needed in future work to shed light on the features which enhance at once likeability and trust. Interestingly, however, 40% of users claimed that they did ask for help more frequently from their favorite character, while 44% said they did not and 16% did only sometimes. This self-reported info matches with the trends tracked over the two platforms which register 553 "help" clicks on Gorgias' avatar and 460 on Aristotle's one. It thus appears that recognizing high pedagogical ethos to an avatar does not translate in a propensity to ask for direct help, may be since perceived as face threatening with respect to an authority in the field. As for the interface, the majority of users declared that it made them feel "relaxed" compared with "bored" (17%), "overwhelmed" (18%), or "amused" (15%). This was our intention to prompt users to adopt a thinking-slow process, which is generally hampered by the overwhelming and fast proliferation of information. A recurrent aspect that users would have changed is that the opportunity of getting help from the avatars during the decision-making process was not apparent. To compensate for this issue, we have added to the question marks next to the avatars' portraits, a flashing light to capture users' attention. As for the question pointing to which avatar looks more trustworthy, the top choice has been Aristotle followed by Socrates and then Gorgias, with similar arguments to the ones supporting the choice for a favorite character. Finally, turning to the chatbot functionality, the most frequently encountered fallacies have been cherry picking, evading the burden of proof and strawman, as displayed in Figure 5. The descriptions of the discovered fallacies provided by some of the respondents were all accurate, suggesting that they learnt their meaning. When asked whether they would be able to recognize the fallacies in the future, 50% respondents answered they would maybe be able to, while 44% were more resolute ("yes"); the minority who was doubtful explicated as a reason the lack of required focus due to the fast-paced flow which features our digital lives. 2 To better understand factors that might prompt users' interactions with avatars, we plan to make Q8 ("Why do you like them? Pick 3 adjectives that apply") an open-ended question to directly crowdsource properties which trigger avatars' likeability. Crowdsourced Survey. Besides evaluating the user interface, we also wanted to verify whether the chatbots actually exercised users' critical thinking for media literacy. Since critical thinking goes beyond the capacity of assessing news reliability and relies first of all on awareness about the need for analytic parameters, we decided to assess users' self-reported perceptions of changes in critical thinking skills. To recruit respondents with diverse demographic features, we have set up a crowdsourced survey using the Amazon Mechanical Turk platform, aiming for a sample of 150 participants. The Amazon Mechanical Turk platform has been fruitfully employed to gather respondents for a range of tasks across domains ranging from social sciences to computer science (Strickland & Stoops, 2019). The task consisted of playing with the chatbots for 15 min and then filling in a questionnaire accessible on Qualtrics consisting of 10 questions. Each participant was provided with an incentive of 5 GBP to complete the task. Due to the remote and anonymous nature of the experiment, the first two multiple-choice questions were used to ascertain that the users played with the chatbots before completing the survey, asking about the levels of the Fake News Immunity chatbot and the way fact-checking is taught (through fallacies). After having discarded users who did not meet this requirement, we have obtained 142 answers. The design of the other questions was based on identifying parameters that are symptomatic of critical thinking in the context of news consumption. To this aim, existing tests for the evaluation of critical thinking so far proposed in the educational literature (e.g., California Critical Thinking Skills Test, Cornell Critical Thinking Test) were not suitable since they address general cognitive skills (e.g., deduction/induction) which are tangential but do not have scope over media literacy. Assuming that critical thinking in the media ecosystem implies a process of sensemaking the information accessed through the news (Grasso & Convertino, 2012), we have taken as a starting point the sensemaking scale developed by De . Their scale has been used to evaluate the efficacy of Democratic Replay, a platform meant to enhance televised election debates with interactive visualizations of speakers' arguments, dialogical performance, and public reactions. The scale encompasses nine factors based on Alsufiani and Attfield (2018) theory: reflection, insight, focus, argumentation, explanation, assess facts and evidence, assess assumptions, and change assumptions. We developed eight prompts, adjusted to the context of news consumption, one per each of the factors with the exception of "assess assumption." Differently from the context of political debates, news reading does not foreground the assessment of personal ideas, but rather a change in opinion deriving from consumed information. The factors, their definition, and the matched survey prompts are displayed in Table 1. For each prompt, users had to express their agreement on a 5-point Likert-type scale. The breakdown of the answers per factor is reported in Table 2 and visualized in Figure 6. The comparison of the results across the eight factors shows similar trends in users' responses, with a mean among values per each factor, which oscillates between 0.88 and 1.08. A third of users agreed that the use of the chatbots increased their skills across the eight factors. The ratio between strongly agree (max value: 39% for Distinguishing; min value: 30% for Assess Facts and Evidence) and somewhat agree (max value: 46% for Assess Facts and Evidence; min value: 33% for Insights) is in favor of a less convinced stance (somewhat agree) across the board. The highest effects are found to correspond with the factors "Distinguishing" and "Argumentation." This result is not surprising since the identification of fallacies, the main target of the chatbot, itself involves identifying the different types Table 1. Sensemaking Scale for Critical Thinking Self-Assessment. Critical thinking factors Definition Survey prompts Reflection Capability to think back and in depth I found that the chatbot made me reflect more deeply upon the news I read Insights Capability to get unexpected ideas or make unexpected inferences I found that the chatbot provided me with unexpected insights on the issues discussed in the news Focus Capability to see different angles and aspects in the debates I found that the chatbot made me focus on different aspects of the news that I would have otherwise neglected Argumentation Capability to reconstruct the arguments that the speakers make I found that the chatbots helped me reconstruct the arguments that the author made Explanation Capability to identify and explain issues I found that the chatbots helped me decide whether a news is trustworthy Assess facts and evidence Capability to assess presented facts and evidence I found that the chatbot provided me with new ways to evaluate the interplay of facts and evidence in the news Distinguishing Capability to make a difference between the speakers' claims and the options proposed The chatbot helped me distinguishing different types of misleading information Change Assumptions Capability to change one's own mind Using the chatbot I changed some initial assumptions I had before-head of misinformation and calls for a preliminary identification of the main standpoints and arguments making up the news. We are aware that self-reported information might not directly translate into behaviors and, in our case, into increased capabilities of identifying misleading information. However, crowdsourcing platforms do not offer a suitable environment to assess citizens' reasons behind their truth assessments, which require open-ended feedback, proficient English, and a population varied as to demographic features. In a preliminary experiment encompassing pre-and postintervention surveys with the same population of workers, we have encountered issues such as fraudulent, nonsensical, and partial responding, which are common for complex tasks that require a willingness to engage (Chmielewski & Kucker, 2020), but they do not allow for a valid assessment. As remarked by Garcia-Molina et al. (2016), macro-tasks pose more challenges than micro-tasks in a crowdsourcing environment where workers' starting points in terms of focus and knowledge are not transparent as well as their motivations to participate. In our case, a high number of workers, for instance, encountered difficulties in carrying out the two surveys in the right order and ended up finalizing one only. To test a framework to measure the impact of the chatbots on the users' news interpretation processes, we have conducted a qualitative pilot experiment encompassing pre-and postintervention feedbacks, rather than redesign the crowdsourcing experiment. We, in fact, realized that it is hard to prompt users' engagement in such a transactional environment that does not guarantee an unbiased environment. Pilot Qualitative Experiment. To investigate the impact of the chatbot on enhanced critical thinking skills, we recruited 20 participants with the help of the Pook FieldWork recruitment agency. The participants were balanced as to gender, half below and half over 45 years old, and with mixed socio-economic features (ABC1 & C2DE grades). The study took place in two workshop sessions on Zoom (40 min each), featuring 10 participants per session. The design of the first session was as follows: (a) pre-interaction phase during which participants have been asked through a Qualtrics questionnaire about whether they would believe five different news claims on a scale from 0 (not at all) to 100 (completely), and then explain their answer ("Please explain why you feel the claim is believable, unbelievable or what further information you would need to decide?"). It has to be noted that we did not ask them to rate a discrete truth value (e.g., "Pick one of the following option: True, Somewhat True, Mixed, Somewhat False, False"), since such a fact-checking task does not mirror the news consumption process where citizens are asked to decide whether to believe or not in the news they read with limited time capacity and knowledge about related facts. The chosen news claims were mixed as to topic, source, reliability, and presence of multimodal features (the full questionnaire is available at: https://liverpoolcommsmedia. fra1.qualtrics.com/jfe/form/SV_8uYG2fGcdalLl8a); (b) an interaction phase of 15 min during which they played with the FNI chatbot; (c) a post-interaction phase during which they completed the again the questionnaire in (a). The second session shared the same design with the exception of phase (b), during which instead of playing with the chatbot, participants were asked to read a booklet of media literacy recommendations explaining the decalogue of fallacies and how to recognize them (the booklet is freely accessible at: https://fakenewsimmunity.liverpool.ac.uk/wpcontent/uploads/2021/03/Fake-News-Immunity-Liverpool-Uni-project.pdf). We, in fact, wanted to test whether (a) learning fallacies affected news interpretation patterns and (b) a human-computer interaction environment has more or less influence compared with a static intervention. Due to last-minute issues, one participant from the first group did not manage to join the session, while two participants from the second group did not fully complete the tasks (csv files showing the full results of the experiments are available on github folder X). Based on the credibility scores, it is apparent that participants assign on average less credibility to the news when post-intervention (Figure 7). The mean values of participants' scores are, in fact, lower after having played with the Fake News Immunity Chatbot or having read the booklet, even though the drop in trustworthiness is not statistically significant. The increased skepticism applies to the majority of the news claims, as displayed in Table 3. A significant different behavior among the two groups is attested in response to Q5: This is an Instagram post that became viral across social media: It claims that "Worldwide shortages of oil, gas, paper, milk, grain and other raw materials are not because of Ukraine." Participants from the first group were already more skeptical pre-intervention compared with those of the second group and then radicalized their views instead of taming them after the intervention. Looking at the explanations it seems that after playing with the chatbot participants tended to classify the Instagram post as "opinion" rather than a fact, pointing to its defeasible nature (Figure 8). The qualitative analysis of the open-ended questions shows some consistent changes in the post-interventions explanations underpinning participants' credibility rates along these lines: • • In both groups pre-intervention explanations pointed mostly to the trustworthiness of the sources (e.g., "unknown source, personal account unable to verify"; "Anything you see on social media can be true or false") without taking into account the actual content of the news claims; in post-intervention explanations, instead, more focus is paid on the information which is conveyed both in terms of number of arguments (e.g., "Nothing to back up their view. Need more evidence that this is just due to Ukraine."), their facticity (e.g., "Again its just a claim so not FACT [. . .]") and their formal aspects ("it's a bit of a strong allegation"). • • In both groups, post-intervention explanations contain element of skepticism, absent in the pre-intervention ones, leading to a suspension of judgment (e.g., "no way of knowing if true or false"; "just not sure") and awareness that further information is required to assess the reliability of the claim (e.g., "Not sure, would need to read the article," "[. . .] I would need to research myself as I am unaware of the number of diseases which humans have had over the years"). • • Post-interventions explanations by participants of the first group point to fallacious arguments which were not identified before the intervention and which are not clearly enucleated by group two (pre and post). It is, for example, the case for the participants who answered to Q5 as follows. Example 1 Pre-intervention explanation (rating 70): "The war in Ukraine has obviously had an impact on materials being transported." Post-intervention explanation (rating 80): "It is misleading and doesn't give the whole picture of the situation." The participant already noticed pre-intervention that Ukraine necessarily had an effect on supply issues for certain products; however, it is only after the intervention that she or he has been able to point to the cherry picking behavior of the post which provides as evidence the country's geographical dimensions, neglecting the complexity of the political picture. The role played by the chatbot experience has also to do with the number of criteria considered when making a reliability judgment. Let us consider, for instance, the explanations provided by a participant in response to Q1. Example 2 This is a post on Instagram which shows a screenshot of a tweet: The tweet claims: "13,783 cases of Shingles (one of the adverse effects of C19 jab) are reported on the Vaccine Adverse Event Reporting System. Seems like Shingles is being termed as monkey pox." Pre-intervention explanation (rating 10): "I wouldn't believe that the two are related" (Figure 9). Post-intervention explanation (rating 2): "Despite being specific in terms of numbers-no source, potential exaggerated and not verified poster." Although suspicion about the presence of causal relations also before intervention, post intervention the explanation is not presented as a personal belief, but supported by arguments that pinpoint various aspects of the message. It has to be noted that new means for epistemic vigilance acquired interacting with the chatbot do not necessarily correspond to changes in the assigned reliability values. Example 3 This is a tweet that became popular on Twitter: It claims that "Biden gave Americans the cheapest gas prices on Earth." Pre-intervention explanation (rating 30): I don't really believe it as i feel i would have heard more about this online and on the news if this was true, however i know there has been a big issue with gas prices so it may be that the US is cheaper but not that cheap Post-intervention explanation (rating 31): I would need further information on the "rest of the world." How can you believe a claim that doesn't provide the data on the "cheapest prices on the Earth." Only prices shown are a few countries. The claim that "Biden gave" doesn't have any evidence either. Just sounds like somebody who supports Biden giving a biased opinion with very selective data to back it up. Although the participant does not modify reliability score pre-and post-intervention, his or her cognitive heuristics are significantly updated: before the intervention hearsay and popularity on the news are perceived as truth benchmarks (with a clear risk for bandwagon effects); after the intervention, the participant is, instead, able to identify the partiality of the data reported in the chart (hasty generalization fallacy and cherry picking) as well as the lack of enough evidence (evading the burden of proof), other than partisanship, to attribute to Biden the responsibility for gas prices in the United States. Conclusion The advent of digitization has crucially changed the way we access, consume, and share news. The online information ecosystem has created new participatory models of news creation and consumption, but it has also widened the array of existing media distortions. More specifically, it has fueled misinformation, information that is misleading without necessarily the intention of being so. The gray area of misinformation is hard to debunk both by human and automatic fact-checkers due to the variety of distortions in place, which proliferate across digital media and that cannot be reduced to a binary problem of true versus false information. Prebunking efforts have been proved to be more effective, but have so far mostly targeted disinformation. As advocated by scholars and policymakers, what is needed to counter misinformation is critical thinking skills. The act of critical thinking news, that we call reason-checking, implies an assessment of the quality of the arguments that support a news claim, especially in situations such as the pandemic, where limited factual information is available. Our theoretical starting point is that the presence of flawed arguments-fallacies-works as an indicator of misleading information. Drawing from Fallacy Theory and extant research and tools for active inoculation, we present two open access chatbots, the Fake News Immunity Chatbot (http://fni.arg. tech/) and the Vaccinating News Chatbot (fni.arg. tech/?chatbot_type=vaccine), to, respectively, teach citizens and communication gatekeepers how to avoid believing, creating, and spreading misinformation. These tools differ with respect to state-of-the art digital tools for active inoculation both in terms of design and learning outcomes. Their educational goal is not that of enhancing users' ability to directly disentangle truth from fakery, which might not be possible in uncertain epistemological scenarios, but to exercise users' critical thinking skills in questioning news' reliability. The design responds to this goal both from a backend and a frontend perspective. The knowledge base that underpins the scenarios portrayed in the chatbot is based on the multilevel analysis of 1,500 fact-checked news to surface fallacious arguments, which feature misinformation and their distribution across sources and types of news claims. This reason-checking preliminary activity is aimed at prioritizing those types of fallacious arguments that are more frequent in the actual misinformation ecosystem, adopting a bottom-up approach. The underlying infrastructure keeps the knowledge base separated from the dialogue process to allow for updates in the informational substrate keeping the conversational dynamics. The process of reason-checking is, in fact, taught through a dialogical exchange with multiple users who engage in a group discussion; the underlying idea is that of simulating the current media agora, where multiple parties are engaged in the process of news construction. While the frontend follows state-of-the-art gamification principles, it also proposes a new heuristic for the identification of fallacies leveraging critical questions and philosophically inspired dialectic profiles for different software agents. To evaluate the impact of the two chatbots, which have reached 1,700 users over 100 countries, we have made available a UX experience questionnaire to be voluntarily completed by users and we have conducted a crowdsourcing experiment. The questionnaire, so far filled in by 211 users, has revealed an (a) overall positive attitude toward conversational intelligence and interface; (b) trends in users' preferences (and reasons therein) for agents' personality types which, however, do not correlate with increased outreach. Aristotle is, in fact, perceived as the character preferred by the majority since knowledgeable; while such preference correlates with trustworthiness judgments, it doesn't make Aristotle the character to which users most frequently ask help for, suggesting that perceived authority might inhibit communication. Finally (c) users deem to have learnt fallacies and be likely to remember them, being able to describe their meaning in an accurate way. The crowdsourced survey was designed to assess self-reported perceptions of changes in critical thinking skills. We developed the first sensemaking scale for critical thinking self-assessment as news consumers and/or producers, drawing from factors identified in the context of public collective deliberation. The survey results show that users perceived an increase in each of the eight identified factors (Reflection, Insights, Focus, Argumentation, Assess Facts and Evidence, Distinguishing, Change Assumptions). The pilot qualitative experiment to assess pre-and postintervention changes in assessing news reliability has revealed an overall increased skepticism accompanied with an increased ability to identify fallacious arguments (especially after having used the chatbot), which promise to enhance epistemic vigilance against misinformation. These three-tiered pipelines to assess enhanced critical thinking skills for media literacy through pedagogical chatbots calls for further experiments to confirm the attested results. In particular, further impact evaluations through qualitative experiments are needed to assess whether users' perceptions translate into behaviors in the long term, while more research as to the viability of scaled-up evaluations is required. An option could be that of embedding an evaluation stage in the chatbot design. Despite its limitations, the Fake News immunity Chatbot and the Vaccinating News Chatbot are deemed to open doors for a new generation of digital tools to advance critical thinking for media literacy. Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the UK Research and Innovation Economic and Social Research Council (Grant No. ES/V003909/1). Elena Musi https://orcid.org/0000-0003-2431-455X Notes 1. Although the whole article has been a result of continuous process of interaction among the authors, E.M. is the main contributor responsible for the design of the theoretical frameworks, the impact evaluations, and their analysis (sections 1, 2.3-4, 3.1.2, 3.2.1, 3.2.2, 4); E.C. contributed to the theoretical framework (sections 2.1; 2.2); C.R. contributed to the infrastructure of the chatbot (3.1.3); S.Y. contributed to the statistical analysis of the results of the qualitative experiment (3.2.3); and K.O. contributed as a mentor in the analytic pipeline. 2. Monthly updated reports of the questionnaire answers will be available on the authors' Github.
v3-fos-license
2023-01-17T15:27:47.794Z
2016-01-14T00:00:00.000
255851706
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1186/s12967-016-0770-7", "pdf_hash": "6ea69ff786791354f2aed9327583b31930521f03", "pdf_src": "SpringerNature", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46779", "s2fieldsofstudy": [ "Medicine", "Environmental Science", "Biology" ], "sha1": "6ea69ff786791354f2aed9327583b31930521f03", "year": 2016 }
pes2o/s2orc
Role of a polyphenol-enriched preparation on chemoprevention of mammary carcinoma through cancer stem cells and inflammatory pathways modulation Naturally occurring polyphenolic compounds from fruits, particularly from blueberries, have been reported to be significantly involved in cancer chemoprevention and chemotherapy. Biotransformation of blueberry juice by Serratia vaccinii increases its polyphenolic content and endows it with anti-inflammatory properties. This study evaluated the effect of a polyphenol-enriched blueberry preparation (PEBP) and its non-fermented counterpart (NBJ), on mammary cancer stem cell (CSC) development in in vitro, in vivo and ex vivo settings. Effects of PEBP on cell proliferation, mobility, invasion, and mammosphere formation were measured in vitro in three cell lines: murine 4T1 and human MCF7 and MDA-MB-231. Ex vivo mammosphere formation, tumor growth and metastasis observations were carried out in a BALB/c mouse model. Our research revealed that PEBP influence cellular signaling cascades of breast CSCs, regulating the activity of transcription factors and, consequently, inhibiting tumor growth in vivo by decreasing metastasis and controlling PI3K/AKT, MAPK/ERK, and STAT3 pathways, central nodes in CSC inflammatory signaling. PEBP significantly inhibited cell proliferation of 4T1, MCF-7 and MDA-MB-231. In all cell lines, PEBP reduced mammosphere formation, cell mobility and cell migration. In vivo, PEBP significantly reduced tumor development, inhibited the formation of ex vivo mammospheres, and significantly reduced lung metastasis. This study showed that polyphenol enrichment of a blueberry preparation by fermentation increases its chemopreventive potential by protecting mice against tumor development, inhibiting the formation of cancer stem cells and reducing lung metastasis. Thus, PEBP may represent a novel complementary alternative medicine therapy and a source for novel therapeutic agents against breast cancer. Background Life-style changes significantly contribute to cancer prevention and are considered an important paradigm in translational medicine [1]. For example, a dietary intervention showed that a few months of following a Mediterranean diet are sufficient to favorably modify the metabolic/endocrine characteristics of breast cancer survivors [2]. In fact, breast cancer patients are among the highest users of integrative medicine in conjunction with conventional oncology care [3]. Currently, cancer preventive phytochemicals are receiving increasing attention regarding their impact on Cancer Stem Cell (CSC) self-renewal pathways [4]. In line with these reports, our preliminary results have shown that repression of breast Open Access Journal of Translational Medicine *Correspondence: chantal.matar@uottawa.ca Vuong et al. J Transl Med (2016) 14:13 CSCs by fermented blueberry preparation, named Polyphenols-Enriched Blueberry Preparation (referred hereafter as PEBP), supports diet-mediated targeting of CSCs. The chemopreventive effects of blueberry polyphenolics on breast cancer are well-known [5,6]. For example, phenolic extracts from European blueberry were shown to inhibit proliferation and induce apoptosis in breast cancer cells [7]. Therefore, increasing the phenolic content of blueberry might enhance its anticancer properties and reduce its metastatic potential. Indeed, biotransformation of blueberry juice with a novel strain of bacteria isolated from the blueberry flora increases its phenolic content and antioxidant activity [8]. CSCs, a highly tumorigenic cell subtype, are emerging as key drivers of cancer [9,10]. CSCs in breast cancer have been identified as CD44 + /CD24 low phenotype and are able to grow as spheres named, in this case, mammospheres [11,12]. Interleukin 6 (IL-6) and its major effector, the signal transducer and activator of transcription 3 (STAT3), are part of an important inflammation-associated pathway in malignancies, and are highly involved in CSC development and progression [13]. STAT3 has been recently recognized as a key therapeutic target to reduce tumor growth [14] and metastasis [15] in different types of cancer. The persistent self-renewal observed in CSCs was reported to be epigenetically controlled in the IL-6/STAT3/phosphatidylinositol 3-kinase (PI3K) signaling pathway [16]. STAT3 with PTEN is part of the positive feedback loop that underlies the epigenetic switch that links inflammation to cancer. Thus, prevention or inhibition of deregulation in the PI3K/STAT3/ PTEN signaling pathway could be beneficial for the treatment and better outcome of breast cancer. Several signal transduction pathways, such as the extracellular-signalregulated kinase/mitogen-activated protein kinase (Erk/ MAP) pathway and PI3K pathway have been implicated in mammary carcinogenesis [17]. Moreover, members of the mitogen-activated protein kinase (MAPK) pathways have been well studied for their role in controlling cellular responses to the environment and in regulating gene expression, cellular growth and apoptosis in cancer [18,19]. The extracellular signal-regulated kinases (ERKs)-1/2 were linked to cell proliferation and survival, whereas the stress-activated MAPKs, p38 and c-Jun N-terminal kinase (JNK), were connected to apoptosis [20]. Controlling MAPK pathways was shown to impact CSC-promoting IL-6 and modify CSClike behavior [21]. Different studies have shown that the fermentation of PEBP greatly increased its antioxidant potential [8,22] and endowed it with novel anti-inflammatory [23], antidiabetic [24,25] and other biological activities [23]. Importantly, the anti-inflammatory effects of PEBP seemed to be connected to IL-6 related pathways, as demonstrated by decreasing hyperglycemia, activating AMPK pathways and mimicking Metformin metabolic effects [24]. Additionally, our studies have revealed that PEBP increases adiponectin secretion [24], probably by counteracting reactive oxygen species [26] and inhibiting the pro-inflammatory cytokines [27]; two mechanisms that contribute to the inflammatory response. Indeed, inflammation is linked to obesity, diabetes and cancer [28]. The goal of this study was to investigate the anticarcinogenic effects of polyphenol-enriched blueberry preparation (PEBP) on breast cancer stem cell development in cell models and in vivo, as well as to study the involvement of STAT3 and MAPKs signaling pathways in its chemopreventive activities. Preparation of blueberry juices Mature lowbush blueberries (Vaccinium angustifolium Ait.) were purchased from Cherryfield Foods Inc. (Cherryfield, ME) as fresh and untreated fruits. Blueberry juice was extracted by blending the fruit (100 g) in a Braun Type 4259 food processor. The fruit mixture was then centrifuged at 500×g for 10 min to remove insoluble particles. The resulting juice was sterilized using 0.22 µm Express Millipore filters (Millipore, Etobicoke, ON). Cell culture Murine 4T1, a 6-Thioguanine resistant cell line, human MCF-7 and human MDA-MB-231 cell lines were obtained from American Type Cell Collection (ATCC; Chicago, IL, USA). ATCC authenticated the human cell lines by using short tandem repeat profiling and the mice cell line was confirmed to be from mice by cytochrome C oxidase 1 gene assay. MCF-7 cells were cultured in MEM, 4T1 and MDA-MB-231 in RPMI-1640, media containing FBS (10 %, v/v) (ATCC), penicillin (100 µU/ml), streptomycin (100 µg/ml) (Sigma-Aldrich, Oakville, ON) at 37 °C in a humidified atmosphere with 5 % CO 2 . Cell viability Cell viability was assessed by water soluble tetrazolium salts (WST-1) and Lactate Dehydrogenase (LDH) assays (Roche, Laval, QC). After a 24 h treatment, supernatants were collected for LDH assay following the manufacturer's instructions. The absorbance was measured with the μ-Quant plate reader (Bio-Tek, Winooski, VT) [30]. Cell motility Cells were plated in a six-well plate at density of 1 × 10 6 cells/0.2 ml/well and allowed to form a confluent monolayer for 24 h. The monolayer was then scratched with a pipette tip, washed with RPMI-1640 to remove floating cells, and photographed (time 0). The cells were treated with NBJ or PEBP for 24 h. The cells were then photographed again at three randomly selected sites per well. Cell motility was expressed as a percent of the surface area covered by migrating cells compared with time 0 [30]. Cell invasion The cell invasion assay was performed on a polyethylene terephthalate (PET) membrane (8 µm pore size) in a Tissue Culture (TC) insert (BD biosciences, Mississauga, ON) according to the manufacturer's instructions. In short, cells are incubated in the superior chamber for 24 h. The insert is then transferred to a new plate containing HBSS supplemented with 4 µg/ml of Calcein AM for 1 h. The intensity of the fluorescence is measured and is expressed as a ratio of the control well without treatment [31]. Mammospheres formation Adherent cells were detached by trypsin and single cells were counted using the Countess automated cell counter (Invitrogen, Burlington, ON). For tumor tissue, approximately 0.05 g of each tumor was minced and dissociated in RPMI-1640 media containing 300 U/ml collagenase (Sigma), and 100 U/ml hyaluronidase (Sigma) at 37 °C for 2 h. The cells were sieved sequentially through a 100 µm and a 40 µm cell strainer (BD Biosciences) to obtain a single cell suspension, and counted in a hemocytometer. IL-6 determination BD OptEIA Mouse IL-6 ELISA sets (BD Biosciences) were used to measure extracellular IL-6 production by mammospheres following the manufacturer's instructions. Animals Six-to eight-week-old BALB/c female mice weighing 18-20 g (Charles River, Montreal, QC) were randomly distributed into seven experimental groups: control, NBJ 12.5 %, NBJ 25 %, NBJ 50 %, PEBP 12.5 %, PEBP 25 % and PEBP 50 %. Each experimental group consisted of 8 mice housed in a controlled atmosphere (temperature 22 ± 2 °C; humidity 55 ± 2 %) with a 12 h light/dark cycle. Mice were maintained and treated in accordance with the guidelines of the Canadian Council on Animal Care. The protocol (ME-289) was approved by the Animal Care Committee of University of Ottawa. While mice in the control group received normal water, mice in NBJ-and PEBP-groups received either NBJ or PEBP, incorporated in their drinking water at three concentrations: 12.5, 25 and 50 % (v/v) respectively. After 2 weeks of treatment, all mice received a subcutaneous injection of 4T1 cells (1400 cells/0.1 ml/mice) into the abdominal mammary gland fat pad. Three weeks after the inoculation, tumors and lungs were collected and weighed [32]. Mice consumed an average of 2.9 ml of juice each day and both blueberry juices were well tolerated and did not affect mice body weight. Lung metastasis Lungs were minced and dissociated in RPMI-1640 media containing 300 U/ml collagenase (Sigma), at 37 °C for 15 min. After filtration through a 40 µm cell strainer (BD Biosciences), cells were collected and suspended in RPMI-1640 containing 10 % FBS (ATCC), penicillin/ streptomycin (0.05 mg/ml) and 60 μM 6-Thioguanine (Sigma). The cells were plated in 10-cm culture dishes (Corning) at 37 °C in a humidified atmosphere with 5 % CO 2 . After 14 days, the lung cells were fixed by methanol and stained with 0.03 % methylene blue solution. All blue colonies were counted, one colony representing one clonogenic metastatic cell [32]. Statistical analysis Statistical analysis of the data by ANOVA and Bonferroni's post hoc tests were performed using GraphPad Prism software version 5.04 (San Diego, CA, USA). Statistical significance was set at p ≤0.05. Data are reported as mean ± SEM. Inhibition of breast cancer cell proliferation At a concentration of 200 μM GAE, PEBP significantly inhibited the proliferation of 4T1, MDA-MB-231 and MCF-7 cancer cells by 34, 24 and 33 % respectively ( Fig. 1), whereas the same concentration of NBJ only showed an inhibition of 32 % in 4T1 cell proliferation (Fig. 1a). No significant effects of NBJ were observed in MDA-MB-231 and MCF7 (Fig. 1b, c). Both PEBP and NBJ did not show any toxicity on the three cell lines at tested concentrations, as determined by an LDH assay (data not shown). Reduction of motility and invasiveness potential Both NBJ and PEBP at 150 μM Gallic Acid Equivalent (GAE) significantly reduced the invasive ability of 4T1 and MDA-MB-231 (Fig. 2d, e). However, only PEBP exhibited an inhibitive effect on the motility of all three breast cancer cell lines (Fig. 2a-c). NBJ did not show any significant effect on cell motility as compared to the control. Inhibition of mammosphere formation PEBP significantly decreased the formation of mammospheres in all three cell lines (Fig. 3), and nearly total inhibition was observed at 150 μM GAE of PEBP. A treatment with the same concentration of NBJ only exhibited an inhibition of 75 % in MDA-MB-231 (Fig. 3b), whereas it significantly increased the formation of mammospheres in 4T1 by 60 % (Fig. 3a). Inhibition of IL-6/STAT3/PI3K signaling pathway A 6 h-treatment with NBJ in mammosphere formation conditions significantly elevated the secretion of IL-6 in all three cell lines (Fig. 3d-f ), while PEBP did not induce any modification as compared to the control cells. Moreover, PEBP significantly inhibited the phosphorylation of STAT3 and PI3K/Akt in all three cell lines. This inhibition started after a 6 h-treatment (Fig. 4a-i) and lasted up to 24 h (data not shown), whereas NBJ only decreased the phosphorylation of PI3K. Both PEBP and NBJ significantly enhanced the activity of PTEN in 4T1 (Fig. 4j), but only PEBP increased PTEN phosphorylation in MDA-MB-231 and MCF-7 ( Fig. 4k-l). Alterations of MAPKs pathway Starting from 1 h after the addition of PEBP, a significant inhibition of ERK1/2 phosphorylation was observed in 4T1 and MCF-7 (Fig. 5a, c). PEBP also increased MAPK p38 and JNK/SAPK phosphorylation in all three cell lines. Their inhibited-or activated-state attainted the maximal level after 2 h of treatment and remained stable up to 24 h (Fig. 5d-i). NBJ did not show any significant modification of the three MAPKs family members. Reduction of tumor growth, mammosphere formation and metastasis in vivo As illustrated in Fig. 6, when administered chronically over a 5-week period, NBJ reduced tumor volume and weight in a dose-dependent manner. However, significant effects were only observed in the NBJ 50 % group, whereas all three doses of PEBP-treated mice displayed significant delays of tumor growth (Fig. 6a, b). Moreover, the mammosphere formation from tumoral primary cells was significantly reduced exclusively in tumors of (Fig. 6c). Similarly, the treatment with PEBP significantly reduced the metastasis in lungs of PEBP-treated mice, while all of the other groups did not show a significant difference as compared to control animals (Fig. 6d). Discussion Chemoprevention is an important part of integrative and translational medicine in oncology. Naturally occurring compounds, such as polyphenols in fruits, are increasingly recognized for their effects in controlling aberrant signaling pathways and inflammatory signals in CSCs. Our group has discovered that the fermented, probioticlike product PEBP greatly accentuates its antioxidant potential and endows it with novel anti-inflammatory [22], antidiabetic [24,25] and neuroprotective [23] biological properties. The common mechanisms underlining the multiple beneficial effects of PEPB are probably related to its capability to modulate the activity of global regulators that are associated with cellular transformation and inflammation. In addition, biotransformations involving fermentation and catabolic breakdown have been suggested to enhance bioavailability [33]. In fact, PEBP was found to inhibit adipogenesis and increase glucose uptake in muscle cells and adipocytes [25] through the activation of the AMP-activated kinase, mimicking Metformin activities [34,35]. Particularly, the anti-inflammatory effect of PEBP is pointing out to the blockade of the STAT3 pathways (essential in CSCs, and inflammation) and the activation of AMPK, which in turn inhibits MAPK downstream (essential in diabetes and cancer). In addition, PEBP mimics Metformin anti-inflammatory/antitumoral activities by inactivation of PI3K/AKT pathways. Metformin is now proposed as a major adjunct therapy in cancer with a powerful inhibitory effect on CSCs [36,37]. This observation led us to further investigate the effect of PEBP on CSCs. The antiproliferative effect of PEBP was observed in all three breast cancer cell lines at 200 μM GAE, whereas NBJ, at the same concentration, only had an effect in 4T1. NBJ did not show any antiproliferative effect in MDA-MB-231 as previously reported [38,39]. This might be due to the low tested-doses in our study. Moreover, PEBP significantly inhibited the motility of all three cancer cell lines, which prompted further investigation for its antimetastatic activity in vivo. As expected, PEBP significantly reduced metastasis potential to the lung when tested in a murine breast cancer model. There is now substantial evidence that many cancers, including breast cancer, are driven by a cellular subpopulation, identified as cancer stem cells, which mediate tumor metastasis and resistance to conventional therapies. Therefore, controlling CSC growth in breast cancer is a possible avenue to prevent tumor development and metastasis. Thus, the investigation of PEBP-induced molecular mechanisms that mediate CSC growth was important to clarify its anticancer and anti-metastatic activities. Indeed, our data indicated that PEBP significantly inhibited mammosphere formation in vitro. Moreover, its inhibitory effect was further confirmed by the reduction of ex vivo mammosphere development from PEBP-treated animals. Polyphenols naturally have multi-target actions/mechanisms, which explain their wide spectrum of biological activities [40]. Their anti-inflammatory property is the key factor in the interface between inflammation and neoplasia [41]. At the cross road of cancer and inflammation, the STAT3 and MAPK pathways have been reported as crucial for CSC growth and their acquired EMT characteristics during metastasis [13,42]. Depending on the cell type, the IL-6/STAT3-dependent pathways, such as the JAK/STAT [13], PI3K/AKT/NF-κB [43], or p38 MAPK [44], can enhance tumor growth and refractoriness to chemotherapy [13]. Therefore, our studies were conducted to examine the involvement of these pathways in PEBP's antitumor activities. We demonstrated that IL-6 production, as well as STAT3 and PI3K phosphorylation, were decreased in CSC culture after PEBP treatment, when compared to the non-fermented control. 4T1 (a, d, g, j), MDAMB-231 (b, e, h, k), Although, polyphenols from blueberry have demonstrated inhibitory activities on cancer cells via the control of inflammatory cytokines such as IL-6 [5], dramatic and biphasic increase of IL-6 occurs early in CSC cellular transformation [45], independently of STAT3 decrease. STAT3 signaling, an important inflammation-associated pathway in malignancies, has been recognized as a key therapeutic target to reduce tumor growth and metastasis [15]. Several signal transduction pathways such as STAT3, PI3K/AKT/NF-κB cascade, p38/MAPK/ERK, or the AMPK pathways play an important role in inflammation-mediated response at all stages of cancer development and refractoriness to chemotherapy [46]. Moreover, downstream effectors of the PI3K pathway include Akt, which is overexpressed in many cancer types and is associated with increased tumorigenicity [47,48]. Our preliminary results showed that PEBP delayed the formation of CSCs in different types of cell culture and in vivo, through modulation of IL-6/STAT3, the PTEN/PI3K/ AKT axis, and ERK/p38 in MAPK signaling pathways, which are all central nodes in CSC signaling and homeostasis [49] (Figs. 3,5). We have demonstrated that STAT3, AKT, and PI3K are decreased, PTEN (a tumor suppressor gene upregulated by p53) is increased in a non-cell type dependent manner, and ERK1/2 was significantly inhibited in 4T1 and MCF7 (Fig. 5). In MAPK pathways, ERK1/2 is the most relevant to breast cancer. Increased expression of ERK1/2 was recently reported as driving endocrine resistance and breast cancer progression in an obesity-associated experimental model [50]. In fact, both PEBP and NBJ inhibited the phosphorylation of PI3K. These findings are consistent with previous reports, which attributed the inhibition of PI3K activity to the anticancer effects of blueberry [6,38]. In our study, PEBP and NBJ also enhanced the activity of PTEN, an upstream inhibitor protein of PI3K, possibly via the inhibition of miRNA-21 expression [51]. These alterations, unfound with NBJ, could be exerted by the novel compounds that were produced during biotransformation and acted in concert on different types of receptors. Treatment with PEBP rapidly increased p38-MAPKand JNK-phosphorylation, which significantly reached its highest level at 2 h, and remained elevated for up to 24 h. PEBP reduced ERK1/2 phosphorylation in the same kinetic and cell-type independent manner. Modifications in MAPK family enzymes might contribute to the abolition of stem cell growth afforded by PEBP. Indeed, prolonged activations of JNK and MAPKp38 and/or inhibition of ERK1/2 induced apoptosis in most cancer cell lines [52][53][54][55]. The mechanisms by which PEBP modified MAPKs' activities are unknown. In addition, PEBPinduced alterations of upstream MAPK members might inhibit the downstream STAT3/PI3K/Akt signaling, indicating an extensive cross-talk and interplay between the MAPK cascade and STAT3 pathways. We further confirmed the in vivo anticancer and antimetastatic potential of PEBP using the 4T1-induced breast cancer model in BALB/c mice. The 4T1 tumor is highly tumorigenic and invasive and, unlike most tumor models, can spontaneously metastasize from the primary tumor in the mammary gland to multiple distant sites [56,57]. Chronic administration of PEBP via incorporation in drinking water significantly reduced tumor volume and breast cancer stem cell development derived from the tumor. This diminution supports the low count of metastasis in lungs of PEBP-treated animals. Especially, PEBP anticancer and antimetastatic effects were observed at a therapeutic dose as low as 12.5 %, which, according to dose translation from animal to human using body surface area, corresponds to 1.2 cups of juice per day for humans [58]. In contrast, NBJ at the same dose did not show any significant effect. NBJ could show a decrease in tumor size and weight only at the dose of 50 %, which represents a substantial consumption of blueberry juice for humans. These results are consistent with findings from previous studies, which reported that feeding mice with blueberry extracts or whole fruit powder has an impact on inflammation and could delay tumor growth [6,38,59]. However, NBJ failed to achieve the reduction of breast cancer stem cells and metastasis observed with PEBP. Nonetheless, the process of preparing PEBP, which greatly increases the content in total phenolic compounds, could clarify its effectiveness at a low therapeutic dose as compared to NBJ. Furthermore, the novel antimetastatic potential of PEBP could be explained by the change of phenolic composition from NBJ to PEBP during the biotransformation process. Indeed, the biotransformation of blueberry juice not only increases its phenolic content, but also produces novel compounds [8]. One interesting possibility is that these novel compounds may possess more potent anticancer and antimetastatic properties that could have contributed to the observed reduction in tumor size and metastasis, as opposed to components of NBJ. In addition, the biotransformation process has probably broken down long polyphenol chains, which are poorly absorbed into gastro-intestinal tracts, increasing their bioavailability, and rendering PEBP highly functional [60]. Conclusion The results of the present study demonstrate that polyphenol-enriched blueberry preparation potently reduced the tumor growth and metastasis in mice. We have demonstrated that repression of breast Cancer Stem Cells (CSCs) by fermented blueberry supports a diet-mediated targeting of CSCs. We have provided evidence that PEBP selectively inhibits the inflammatory signature in CSCs through signaling pathways linked to the maintenance stemness and metastasis. The mechanisms of action involve, at least in part, alterations in the MAPKs cascade and inhibition of the STAT3 signaling pathway, involved in inflammatory pathways. The results convincingly demonstrated that PEBP, indeed, holds great promise as a chemopreventive agent and may represent a novel complementary therapy against breast cancer and metastasis. Conclusively, the prospective modulation of CSCs by nutrition will probably mark a major advance in preventing breast cancer and further optimizing the management of this significant disease. It is an important approach in translational medicine for specific integrative therapies that can be recommended as evidencebased supportive care for cancer patients. Authors' contributions TV prepared the PEBP, carried out the cell culture experiment and animal experiment and drafted the manuscript. JFM participated in the animal experiment, and the lung metastasis and drafted part of the manuscript. SR carried out the western blot. MO, HHV and ZH contributed to the design of the mammosphere experiment and revised the manuscript. CM drafted part of the manuscript and contributed to the conception and design of this research. All authors read and approved the final manuscript.
v3-fos-license
2020-01-09T09:13:47.337Z
2020-01-07T00:00:00.000
210877879
{ "extfieldsofstudy": [ "Economics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2227-7072/8/1/1/pdf", "pdf_hash": "da0ec229bc046bb1509d09922bff360cc05681db", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46780", "s2fieldsofstudy": [ "Economics" ], "sha1": "4820d2ff14d5d3cc03f28daf385227b320a1bfca", "year": 2020 }
pes2o/s2orc
Are Corporate Bond Defaults Contagious across Sectors? The views in this paper represent those of the author, and not necessarily any institutions he is (or has been) associated with. Abstract: Corporate bond defaults in different sectors often increase suddenly at roughly similar times, although some sectors see default rates jump earlier than others. This could reflect contagion among sectors—specifically, defaults in one sector leading to credit stresses in other sectors of the economy that would not otherwise have seen stresses. To complicate matters, simple correlation-based tests for contagion are often biased, reflecting increased volatility in periods of stress. This paper uses sectoral default data from over 30 sectors to test for signs of contagion over the past 30 years. While jumps in sectoral default rates do often coincide, there is no consistent evidence of contagion across different periods of stress from unbiased test results. Instead, coincident jumps in sectoral default rates are likely to reflect common macroeconomic shocks. Introduction When economic or market conditions deteriorate, we often see simultaneous declines in asset prices, as market participants reassess prospects for revenue, profits, and affordability. In light of these worsening macroeconomic and financial conditions, credit conditions for borrowers often deteriorate rapidly, leading to firms being unable to meet agreed interest payments or being unable to refinance maturing debt. For these reasons, both non-performing loans held by banks and default rates for corporate bonds-covering both intermediated and disintermediated credit provision-tend to rise in recessions. Default rates can jump quite quickly (Figure 1), and these increases are often evident across a range of sectors. 0% 2% 4% 6% 8% 10% 12% 14% 16% Jan-70 Jan-74 Jan-78 Jan-82 Jan-86 Jan-90 Jan-94 Jan-98 Jan-02 Jan-06 Jan-10 Jan-14 Jan-18 In some instances, default rates have increased in one corporate sector before any increase is evident in others. This can give the impression that credit stresses "seep" from one sector to another, which, together with the high correlations in default rates across sectors in recessions, can give the impression of contagion-that is, corporate defaults in one sector having knock-on causal effects that drive up defaults in other sectors. Tests for contagion have been used in a variety of other markets including across equity markets in emerging economies. However, there has been relatively little work to test whether corporate bond defaults actually are contagious in the sense that stress in one sector triggers stress in another sector. This is the fundamental question that this paper addresses. This paper re-examines the empirical question of contagion in corporate bond markets, in particular looking at whether increases in default rates in one corporate sector exhibit contagion effects to default rates in other sectors. The remainder of this paper is structured as follows. Section 2 reviews past work on testing for contagion, and the importance of accounting for the broader macroeconomic environment and in particular taking explicit account of shifts between high-and low-volatility periods. Section 3 then presents the data underpinning this analysis, looking at sectoral defaults; Section 4 then details the methodological approach taken and presents results from the contagion analysis. Finally, Section 5 concludes the paper. Past Research: Defining and Testing for Contagion and the Problem of Time-Varying Bias The presence of contagion across asset markets is important for several reasons. For instance, most investment strategies assume that diversification across different markets is an effective mechanism for reducing portfolio risk. This relies on low-risk correlations across different markets; but if contagion occurs after a negative shock, then correlations across markets will increase, limiting the benefit from diversification. On the other hand, if contagion is present across markets and countries, this could lead to a crisis in one sector having a negative impact on the second sector; and, even if the second sector's economic fundamentals are sound, it could also face a crisis. Given the underlying fundamentals, this could justify intervention by the public authorities to stabilise the sector. A first key issue is to define contagion. This is less straightforward than might otherwise be expected, as different papers adopt different approaches and strategies for identifying shock propagation. In an international context, Dornbush et al. (2000) define contagion as a significant increase in cross-market linkages after a shock, either to an individual country or a group of countries. From this definition, an increase in co-movement across countries or sectors could look like contagion. Countries that are closely tied together will tend to exhibit strong correlations in economic performance, whether positive or negative shocks hit, and so co-movement across those countries may not vary much. But, if two countries are moderately but not strongly correlated, then if a shock to one country affects the other, then the co-movement and correlation could increase, suggesting contagion. This could be misleading. In explaining why, it is useful to draw upon Masson (1997) who defines three mechanisms for this propagation in a cross-country context: aggregate shocks which affect the economic fundamentals of more than one country; country-specific shocks which affect the economic fundamentals of other countries; and shocks which are not explained by fundamentals and are categorized as pure contagion. The distinction matters because pure contagion reflects the transmission of a stress originating in one country or sector to another sector and, in particular, that without the contagion that second country or sector would not otherwise have experienced stress. In the case of aggregate shocks that affect all sectors, we would expect subsequent sectoral movements to be correlated, which could look like contagion even though all sectors have been affected by the same shock, rather than one "infecting" another. The second mechanism, whereby stress in a first sector affects fundamentals in another, represents a direct transmission; Eichengreen et al. (1996) discuss these mechanisms. However, this transmission-and indeed the links it flows through-should be evident in economic and financial linkages such as declining demand for the second sector's products or increased production problems as suppliers from the first sector are unable to provide intermediate goods that the second sector uses in production. But, even here, the transmission mechanism works through real economic fundamentals rather than via market sentiment or other channels. The third mechanism, whereby co-movements across sectors cannot be explained by changes in fundamentals, is generally thought to represent the formal definition of contagion-the stress in the second sector cannot be explained using the normal economic and financial linkages that are already known to exist. There are several theories that seek to explain the presence of contagion. As noted earlier, Masson (1997) presents a theory of multiple equilibria which shows that a crisis in one country can drive a shift in investor expectations (rather than real linkages) that then impacts on another country. Valdes (1996) argues that episodic disruptions in capital market liquidity can lead to fire sales in some asset classes, potentially affecting prices of assets in countries not affected by the initial shock. Mullainathan (2002) posits that investors do not accurately recall past events, and memories of past crises cause investors to re-evaluate their assumptions, in particular the probability of bad outcomes. Drazen (1998) asserts that political economy considerations can drive co-movements, citing the European devaluations of 1993 as an example of "bunching" in economic policy shifts. As Forbes and Rigobon (1999) note, despite the different mechanisms these theories represent, they all essentially treat contagion as the unexplainable component-the residual in any stylized or empirical analysis. This raises the obvious problem that, if we do not accurately capture real spillovers among sectors or economies-if our models are misspecified-then we might erroneously think that contagion exists when in fact it does not. Essentially, contagion is defined as the observation that cross-market linkages during a crisis are different than during relatively stable periods. Consistent with this view, economists have developed a variety of tests for contagion but which often have a common underlying component. This is best illustrated by one simple approach that compares the correlation (or covariance) among two markets or sectors during a relatively stable period with the correlation during a period of turmoil, often directly after a shock occurs. Contagion is then identified wherever there is a significant increase in the cross-market correlation during the period of turmoil, relative to the average correlation across periods of both calm and turmoil. A common problem with this, however, is that correlation coefficients are calculated incorrectly. Forbes and Rigobon (2002) describe this issue in detail in their seminal work. To summarise, unless we control for the higher variance that naturally exists in stressed periods, the correlation coefficients that are calculated to evaluation contagion will be biased and misleading, leading to contagion being identified where in fact none exists. Forbes and Rigobon (2002) demonstrated this effect when testing for contagion across equity markets and found that, while interdependence was present, actual contagion was far less prevalent than often assumed, and in particular was much less prevalent than suggested by misspecified correlation tests. This bias arises from the unadjusted correlation coefficient not taking account of different market conditions during periods of stress versus periods of calm. Over a long enough sample, summary statistics such as correlation coefficients may "average out" periods of relatively high and relatively low volatility. But the formal correlation-based test for contagion that is typically used is exactly that correlations among sectors or markets increase once market turmoil takes hold; and those increases can occur anyway if system-wide shocks occur without formal contagion actually being present. Essentially, because the data sample is split into "high" and "low" volatility periods, and we are comparing the high volatility period to an average of the high and low periods, the correlation test will naturally tend to signal contagion if the bias that is naturally present in the test is not accounted for. Forbes and Rigobon (1999) provide a formal proof for this theorem, building on the previously work of Ronn (1998) who addresses bias in the estimation of intra-market correlations for stocks and bonds. The estimated-but unadjusted-correlation among two variables will increase when the variance of one of them increases, even if the actual correlation does not change. Happily, it is relatively straightforward to correct for this bias and, as such, calculate adjusted correlation coefficients that take this tendency into account. Loretan and English (2000) also note this effect. The original discussion in Forbes and Rigobon (1999) demonstrates how correlation coefficients can be biased when the variances of the underlying variables change, but for completeness this is revisited here. Suppose that x and y are stochastic variables, and they are related as follows: εt] = 0 and also (for simplicity) that |β| < 1. Now suppose that the variance of xt changes over the sample period so that it is lower in one part of the sample (l) and higher in the second part of the sample (h). Since by assumption the error and explanatory (xt) variables are orthogonal, the consistent and efficient ordinary least squares (OLS) estimator will return the same parameter estimates across samples: β h = β l . By assumption, we also know that σ h xx > σ l xx which, when combined with the standard definition of β, implies that σ h xy > σ l xy: the covariance between x and y will be higher in the second period. Since the residual variance is constant by construction and |β| < 1, we know that: Which, substituting into the standard definition of the correlation coefficient: implies that: > (4) In other words, the estimated correlation between x and y will be higher when the variance of x increases, even if the true correlation between x and y does not change. The standard correlation coefficient is conditional on the variance of x, and the bias this implies can be quantified as: where is the unadjusted (or conditional) correlation coefficient and is the actual (or unconditional) correlation coefficient. In this expression, is the relative increase in the variance of x. As such, during periods of high volatility and turmoil, estimated correlation coefficients will be biased upwards. Forbes and Rigobon (1999) also note that it is straightforward to adjust for this bias, solving Equation (5) for the unconditional correlation: Using this simple approach, they demonstrate that the estimated contagion among stock markets during the late 1990s was largely a statistical artefact. That is because the common technique to test for contagion was to estimate whether there is a significant increase in correlation coefficients during periods of market turmoil, and while the biased estimated correlation coefficients did indicate that contagion was present, the true unbiased correlation coefficients were much lower and typically did not suggest contagion. In other words, the test statistic upon which the contagion hypothesis was evaluated was biased, leading to misleading results and inference. Apart from Forbes and Rigobon (2002), other papers have also looked at different markets and episodes to test for contagion. Forbes and Rigobon (2001) themselves provide a broad survey of conceptual and empirical issues around measuring contagion. Since their original paper examining bias in correlation coefficients, methods for estimating contagion have become more sophisticated than just testing for changes in (biased) correlation coefficients. For instance, rather than using correlation tests over discrete samples, an alternative would be to estimate models with time-varying parameters to account for possible shifts (see for instance Ellis et al. (2014)). However, there have also been recent attempts to model contagion and network effects more explicitly in a time series format. One particular approach has been the emergence of correlation network models. As Giudici and Parisi (2018) note, while bivariate analysis and causal models can investigate whether the risk represented by a particular institution is affected by market crises or exogenous risks, these correlation network models try to explain whether that risk depends on endogenous contagion effects. These network models can take the form of contagion models that combine financial networks with price-based contagion models, such as those employed by Billio et al. (2012), who propose different statistical measures of connectedness, finding that different financial sectors have become more interrelated over time. Diebold and Yilmaz (2014) also propose a variety of connectedness measures based on variance decompositions that can be used to track time-varying connectedness of stock return volatilities. Ahelegbey et al. (2016) present an innovative Bayesian graphical approach to identification in vector autoregressive (VAR) models, finding strong unidirectional linkage from financial to nonfinancial sectors during the recent financial crisis and bidirectional linkages during the European sovereign debt crisis. However, it is unclear whether these linkages represent contagion per se, or more normal shock transmission. Das (2016) defines a new score of systemic risk that depends on individual risk and interconnectedness across financial institutions, where the network contributions implied by the latter could be interpreted as contagion. Interestingly, however, Das's work on spillover risk finds that splitting up too-big-to-fail banks does not lower systemic risk, consistent with contagion effects from large firms not being the primary driver of systemic risk. More recently, Giudici and Parisi (2018) used a vector autoregressive (VAR) approach to model credit default swap (CDS) spreads, using cross-sectional dependency to model the contagion transmission mechanism. They then used the VAR to distinguish between contagion and idiosyncratic risk. Herculano (2018) used a Bayesian spatial autoregressive model to identify bank defaults and found that spillover effects among peers were positive and significant, although also very heterogeneous, with some banks being more important than others. However, spillovers within the banking sector may not apply in other sectors. In some instances, the definition of contagion is rather different than the fundamental approach proposed by Forbes and Rigobon. For instance, Ahrend and Goujard (2014) examined bilateral financial and trade linkages and equity and bond price movements between 2002 and 2011, finding that financial turmoil was transmitted through bilateral debt integration and common bank lenders. However, these identified transmission mechanisms were actually not indicative of formal contagion, which by definition should not be accounted for by increases in debt integration or common counterparties. Like many of the other related papers in this area, Mezei and Sarlin (2018) also focused on systemic risk rather than contagion per se. They promote a mechanism for aggregating individual risk and interconnectedness but seem more concerned with cross-border linkages than contagion as defined earlier (that is, the transmission of a shock from one entity to another other than via a common source or established transmission mechanisms). Aamir and Shah (2018) investigated co-movements among Asian stock markets and Adad and Chulia (2013) examined the impact of monetary policy surprises on European government bond markets. Das et al. (2007) explored correlations of individual corporate defaults rather than across sectors, finding "excess" default clustering can be matched by assuming some "extra" correlation, and macroeconomic variables can account for some of this. However, individual corporate defaults can often be correlated within sectors due to the common exposures to exogenous developments and shocks-the spate of US shale defaults after the 2014/2015 decline in the oil price is a clear example of this. As such, these within-sector defaults may likely represent common exposures rather than genuine contagion that instead may be easier to identify across different sectors. There is some past evidence that the default of one company can increase the measured probability of another company defaulting (see Azizpour et al. 2018). Here, the authors used latent variable techniques to account for unobservable "frailty", and contagion is assumed to follow the specific form proposed by Hawkes (1971). The results of this approach suggest that the impact of one default on the default rate has a half-life of only three months. However, it is questionable whether the modelling correctly identifies changes in underlying macroeconomic conditions (which, as noted above, can bias measures of contagion) from non-stationary shifts in the latent "frailty" factor. It is also striking that more sophisticated computational techniques employed to examine contagion over the past two decades are far from uniform in their assessments. For instance, Leschinski and Bertram (2017) use a "rolling window analysis" to identify contagion effects during the euro crisis, finding pure contagion from Italy and Spain but not for Greece, Ireland or Portugal. Yet, at the same time, Caporin et al. (2018) analysed sovereign risk contagion using quantile regressions focusing on the euro area and found that the risk spillover was not affected by the sign or size of the shock, implying that contagion has remained subdued. At best, these different results demonstrate that more complex modelling approaches still result in ambiguity about the role and impact of contagion. However, probably the most important paper for gauging the effectiveness of newer methods of testing for contagion is Rigobon (2016) which reviews the empirical literature. As with the simple correlation analysis discussed earlier, Rigobon notes that the econometric issues arising from endogeneity and omitted variables produce time-varying biases. All the empirical approaches examined, including VARs, event studies, autoregressive conditional heteroskedasticity models, non-linear regressions, suffer to a greater or lesser extent-put simply, Rigobon finds that there is no single technique that can completely address this issue without formally correcting for these biases. This implies that, while the techniques mentioned above are, to varying degrees, more complex than the statistical analysis implicit in correlation coefficients, the underlying structural data conditions that the original paper by Forbes and Rigobon (1999) speak to still matter. Whenever the variance of explanatory variables varies over the data sample, it is critical to adequately correct for this in the estimated variance-covariance statistics that underpin the coefficient estimates and other features of the approaches and models outlined above. Simply using a Granger causality test or VAR will not adequately adjust for the fact that statistical bias will be present in any estimated coefficients where the variances of the underlying data vary over time. As such, while the approach originally presented by Forbes and Rigobon (1999) is simple, compared with some of the complex alternatives employed today, it still serves a useful purpose: any model estimates will be biased unless the underlying biases arising from statistical analysis are accounted for. Sometimes, stripping away statistical sophistication to focus on the underlying issue can shed more light on an issue than resorting to greater computational complexity. As such, armed with the original approach set out by Forbes and Rigobon (1999)-and conscious that more complex approaches will suffer from the same concerns about bias that they illustratethis analysis will now examine sectoral default rates for evidence of contagion. To the best of my knowledge, there has been little exploration of whether this type of cross-sectoral contagion exists in corporate bond markets which is the key contribution of this paper. Sectoral Bond Defaults: Inspecting the Data Ultimately, this analysis regards the question of whether corporate bond distress is contagious across sectors as an empirical one. In order to investigate this, we need to examine corporate default rates across sectors to see if there is evidence of contagion from one sector to another. To be clear, this analysis examines empirical data for signs of contagion among different sectoral default rates. Each sectoral default rate was calculated as the number of default events, relative to the universe of rated entities within that sector, over a specified time period. The default data here were collected and published by Moody's, a global credit rating agency. Moody's has collated data on defaults for over a century and publishes default rates back to 1920 on some bases. In this instance, the data sample covered defaults by non-financial corporate issuers as a percentage of Moody's rated universe. All the data used in this analysis were default rates (percentages) and were not absolute numbers of default events. The data are categorized according to one of 33 corporate sectors and default rates are published on an issuer-weighted basis. The entire data sample covered the period from the start of 1983 to the start of 2017; default rates were reported on a monthly frequency. Overall, the dataset comprised 13,530 observations across the 33 sectors considered here. Towards the start of the sample, the data were more heavily concentrated on US issuers, reflecting Moody's geographical coverage; but, as Moody's expanded into other markets, over time, more issuers from other parts of the worldparticularly the UK, Germany, and France-were included in the data. More detail on Moody's default data are presented by Moody's (2019a). It is important to consider what constitutes a default event in these data. Moody's defines defaults on the basis of one (or more) of four events (see Moody's 2019b). These include: a missed or delayed disbursement of a contractually obligated interest or principal payment; a bankruptcy filing or legal receivership by the debt issuer or obligor that will likely cause a miss or delay in future debt service payments; and a distressed exchange where an issuer offers creditors a new or restructured debt (or assets) that amount to a diminished value relative to the debt obligation's original promise, with the effect of allowing the issuer to avoid a likely eventual default. Finally, a change in the payment terms imposed by the sovereign that results in a diminished financial obligation also constitutes a default. Many defaults represent one of the first two instances, although distressed exchanges do periodically account for a sizeable number of total defaults. Simply from inspecting the data, it is readily evident that default rates typically jump at similar times across different sectors which, in principle at least, could be consistent with contagion across sectors (Figure 2). In the data sample examined here, there are three particular episodes where default rates jumped across a wide range of sectors. These three periods corresponded to macroeconomic and financial economic downturns that started in the late 1980s, the early 2000s, and the August 2007 Financial Crisis. These episodes are also clearly evident in Figure 1 as spikes in the default rate. It is clear from Figure 2 that a spike in the default rate in one sector often occurs at the same time that default rates jump in other corporate sectors. This is also confirmed by simple correlation calculations among different sectoral default rates across the whole data sample. These are presented in Figure 3, where darker shading indicates stronger correlation. However, as noted in the earlier discussion, a coincident jump in default rates could reflect the macroeconomic environment deteriorating and simultaneously impacting on different sectors; this would reflect a common macro shock rather than "contagion" from one sector to another. It is very important to differentiate between co-movement and genuine contagion as discussed previously. This is what the remainder of this paper investigates. Testing for Contagion across Sectors As noted earlier, a simple test for contagion is to compare the correlation (or covariance) between two markets or sectors during a relatively stable period with the correlation during a period of turmoil, often directly after a shock occurs. A significant increase in correlation during the turmoil period is indicative of contagion across sectors. However, it is important to correct correlation coefficients for the bias that will naturally arise in stressed periods, compared with calm periods. To do so, we need to correct for the jump in volatility that will lead the estimated correlation in periods of high volatility to exceed the unconditional correlation. As noted earlier, Forbes and Rigobon (2002) present the formal proof for this theorem in their seminal work. To summarise, unless we control for the higher variance that naturally exists in stressed periods, the correlation coefficients that are calculated to evaluate contagion will be biased and misleading, leading to contagion being identified where in fact none exists. Forbes and Rigobon (2002) demonstrated this effect when testing for contagion across equity markets and found that, while interdependence was present, actual contagion was less prevalent than often assumed and also much less prevalent than suggested by misspecified correlation tests. The analysis that follows will test for this effect when examining potential contagion among corporate default rates across different sectors. In the context of examining contagion, we might expect sectors that cause contagion to exhibit credit stresses before those stresses then spread to, and are evident in, other sectors. From inspecting the data, there are some sectors that appear to have "led" past downturns in credit quality: that is, corporate default rates jumped in some sectors before default rates across a range of other sectors then subsequently picked up. For instance, in the August 2007 downturn the default rate in the "media and publishing" sector increased prior to default rates in other sectors. In particular, the default rate for media and publishing reached 8% in 2007 Q1, long before the broader downturn was evident across a wide range of sectors. No other sector saw as early or as sharp a jump in its default rate during 2007; thus, if there was one sector that led others, potentially causing contagion, it would likely have been the media and publishing sector. In Figure 4, the media and publishing sector is represented by the red line to distinguish this "leading" sector from other corporate sectors during that time period. This suggests that, if contagion flows from one sector to another, the credit stresses should have originated first in the media and publishing sector before defaults there then triggered credit stresses elsewhere. This is not to say that the media and publishing sector is necessarily more important than others per se-any contagion appears to spread from other sectors in other periods-but it is the sector where credit defaults are evident earliest in the August 2007 episode. As noted later, the identification of the media and publishing sector is not critical for the contagion analysis presented herein; rather, it is meant to serve as an illustration of how contagion testing can be conducted. The simple test of this contagion in this instance is to examine whether the correlation coefficients between the default rate in the media and publishing sector and default rates in other sectors significantly increased during the "stress" period relative to whole period of data we are considering. In order to do this, we need to define the high and low volatility periods in the sample that we are going to test across for any significant change in correlations. In this instance, the high volatility period, hereafter referred to as the "stress" period, was assumed to run from January 2008 until January 2011. This was based on when default rates started to increase across many sectors. The low volatility (or "calm") period was assumed to run from February 2003 until December 2007. The sample period as a whole was therefore defined as the months between February 2003 and January 2011. It is this entire time period from February 2003 to January 2011 that the "benchmark" correlations were calculated over and with which the "stress" correlations were then compared. Author's calculations and Moody's. Note that the exhibit presents sectoral default rates for 33 nonfinancial corporate sectors. The vertical axis was deliberately limited to 50% to illustrate volatility within and across sectors relative to lower default rates in some time periods. The "media and publications" sector is highlighted in red, as it was the first sector that saw its default rate jump sharply during the course of 2007. It is important to note that, for some sectors, default rates in one of the (sub)sample periods were zero throughout; as such, it is impossible to test for changes in correlation coefficients, because the underlying variance of those particular data series was zero in the (sub)sample which does not allow the calculation of (adjusted) correlation statistics. Of the 32 sectors that could potentially exhibit evidence of contagion from the "media and publishing" sector, it was only possible to conduct the statistical correlation tests for contagion for 29 of them. But, using the sample periods defined above and the sectors for which it was possible to test for changes in correlation coefficients, we can formally test for evidence of contagion in corporate bond markets. The results, based on tests of simple correlation coefficients, are striking. As shown in the second column of Table 1, fully 16 of the 29 sectors appeared to show higher correlations in the stress period at the five percent statistical significance level (using a one-sided test, as we are looking for an increase in correlation rather than a decrease). At face value, this appears to be strongly indicative of contagion. However, when we adjusted the "high stress" correlation coefficients for the underlying bias in the calculation as noted by Forbes and Rigobon (2002), this number dropped to just one sector out of the 29 exhibiting signs of contagion, as shown in the fourth column of Table 1. Furthermore, that sector is where a negative correlation had become less negative (as opposed to an increase in a positive correlation). From this analysis, it appears that correcting for the bias in correlation coefficients due to the heterogeneous volatility can have a significant impact on the statistical test results of changes in correlation coefficients. Even without the issue of the negative correlation, given the underlying size of the significance test (as opposed to its power) and the number of sectors we are testing, this is not indicative of broad contagion across sectors in corporate bond markets. Consistent with the original work by Forbes and Rigobon (2002), this suggests that, while there may be interdependence in credit conditions across different corporate sectors, consistent with them all being affected by a common macro shock, contagion was not formally present in corporate bond markets in the late 2000s based on the results presented above. Instead, the co-movements in default rates are likely to have reflected the common macroeconomic shock, a form of interdependence. We can conduct the same formal tests for the other two stress periods in the data sample as well; for illustrative purposes, results are presented again based on identifying which sector saw early peaks in default rates. In the early 2000s, the default rate in the "environmental services" sector picked up before default rates in other sectors ( Figure 5), and in the late 1980s' downturn in credit quality, the "hotels and gaming" sector saw default rates jump before the broader deterioration across a range of sectors ( Figure 6). The fact that these "leading" sectors were not consistent over time could be consistent with each downturn having different underlying drivers or with sectoral performance differing over time or, indeed, with contagion spreading from different sectors in each instance. As with the previous contagion tests, we again split the subsamples into two periods, each for longer episodes: a "calm" period and a "stress" period. For the early 2000s period illustrated in Figure 5, the total sample period ran from February 1992 until January 2003; the "stress" episode was assumed to start in July 1999 (with the "calm" period running prior to the "stress" period). For the late 1980s downturn shown in Figure 6, the total sample period ran from January 1983 until January 1992, with the "stress" period starting in December 1988. For these earlier two periods in the data, any signs of contagion were less pronounced than in the late 2000s period, even using unadjusted correlation coefficients; in particular, there was very limited evidence of contagion to start with in the late 1980s, although it was more prevalent in the late 1990s downturn. However, it is striking that once the bias in the correlation coefficient was accounted for, the results were now consistent across all three periods in that there was scant genuine evidence of contagion across different corporate bond sectors. Figure 5. Sectoral default rates around the early 2000s downturn in credit conditions. Source: Author's calculations and Moody's. Note that the exhibit presents sectoral default rates for 33 non-financial corporate sectors. The vertical axis was deliberately limited to 50% to illustrate volatility within and across sectors relative to lower default rates in some time periods. The "environmental services" sector is highlighted in red, as it is the first sector that saw its default rate jump sharply and persist higher during the course of 1999/2000. Figure 6. Sectoral default rates around the late 1980s downturn in credit conditions. Source: Author's calculations and Moody's. Note that the exhibit presents sectoral default rates for 33 non-financial corporate sectors. The vertical axis was deliberately limited to 50% to illustrate volatility within and across sectors relative to lower default rates in some time periods. The "hotels and gaming" sector is highlighted in red, as it is the first sector that saw its default rate jump sharply and persist higher during the course of 1998/1999. Table 2 summarizes this analysis and illustrates the impact of this bias; it presents the percentage of sectors that exhibit signs of contagion when this is tested for on the basis of both biased and adjusted (unbiased) correlations coefficients. Across the three episodes, co-movements in default rates between the "lead" sectors-highlighted previously for each of the three stress periods in the sample-and other corporate sectors did show evidence of significant changes on the basis of simple correlation statistics. This would suggest some degree of corporate contagion from one sector to another. However, when the bias identified by Forbes and Rigobon (1999) was corrected for, any evidence of contagion was sparse at best. Table 2. Percentage of sectors exhibiting contagion compared with "leading" sector. Source: Author's calculations. Note that the sectors with negative "stress" correlation coefficients were excluded from the percentages presented. The contagion tests were calculated as specified in Table 1 In order to further test the robustness of this result, tests were also run to examine for contagion starting from other sectors: that is, not assuming that contagion would start from those sectors where default rates were observed to jump first (e.g., picking a different starting sector from media and publications in 2008). This is important because, although specific sectors were identified above for illustrative purposes, in principle it is possible that contagion could have flowed from other sectors that had financial fragilities that were not yet fully evident in default rates. However, these broader tests of contagion among different corporate sectors across the same time periods noted above yielded very similar results to those presented in Table 2-once the bias in correlation coefficients was controlled for, there was very little evidence of contagion across sectors. To further test the robustness and sensitivity of these results, correlations were also calculated after changing the "calm" and "stress" periods across each of the three episodes in our broader sample. It is possible that the specific choice of these dates could lead to so-called "boundary effects", where the statistical inference changes when these dates were varied. Happily, this did not appear to be the case, as the broad results presented herein were unchanged; once again, there was little evidence of contagion once the underlying bias in correlation coefficients was controlled for. As with the detailed analytical results presented earlier, unadjusted correlation coefficients gave a misleading picture of potential contagion among sectors and that contagion was not then evident when the bias in the correlation coefficients was adjusted to account for this. Conclusions The presence or otherwise of contagion during periods of financial turmoil remains an important question for investors, market participants and policymakers alike. Looking at past patterns in corporate bond default rates across sectors, it is tempting to assess that contagion has been present in past episodes of stress. However, the analysis presented herein demonstrated that simply observing that defaults increased first in a particular sector does not necessarily imply contagion in corporate bond markets. Instead, we should investigate changes in correlations across sectors for evidence of contagion and adjust for bias that will exist in simple correlation statistics. In addition, when the correlation tests are corrected for the bias that arises from the changing volatility between stressed and non-stressed periods, there is little consistent evidence of contagion from bond defaults in one sector to others over the different credit cycles. In line with previous research on equity market contagion, this demonstrates that simple correlation analysis can result in misleading results. Rather than contagion, the shifts in correlation coefficients represent the interdependence among the different corporate sectors. The analysis presented herein was deliberately simple. This was partly due to the desire to clearly demonstrate how misspecified hypothesis tests can lead to misleading analytical results and interpretation; but, importantly, it also reflects the fact that there is simply no single technique, complex or otherwise, that can assess contagion while addressing the issues around endogeneity, omitted variables and time-varying biases. As such, while it is important to note that the correlation analysis presented herein was not as sophisticated as some contagion estimation techniques-which is arguably a limitation of this research-the same issues will also pervade those more complex statistical approaches, unless the bias effects illustrated herein are sufficiently controlled for. Overall, these results suggest that, while default rates often rise across a range of sectors in the face of an economic downturn, reflecting a common underlying shock, corporate bond markets do not present a significant risk in terms of contagion spreading from one sector to another. As such, while investors should be vigilant for macroeconomic shocks that can drive credit distress across a range of sectors, there is little sign that idiosyncratic credit stresses in one sector will drive widespread credit stresses in other sectors. Appendix A. Numerical Listing of Corporate Sectors The main body of this paper uses default rates from 33 different non-financial corporate sectors to test for evidence of contagion. Given the difficulties in presenting 33 different sectoral titles in chart format, for simplicity, each sector is assigned a number in the analysis. Table A1 below presents the sectoral key, listing the individual sectors against the number they are assigned (for instance in Figure 3). Containers, Packaging and Glass S10 Energy: Electricity S11 Energy: Oil and Gas S12 Environmental Industries S13 FIRE: Finance S14 FIRE: Insurance S15 FIRE: Real Estate S16 Forest Products and Paper S17 Healthcare and Pharmaceuticals S18 High Tech Industries S19 Hotel, Gaming and Leisure S20 Media: Advertising, Printing and Publishing Conflicts of Interest: The author declares no conflict of interest.
v3-fos-license
2023-11-25T06:17:22.074Z
2023-11-23T00:00:00.000
265404099
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/pbi.14243", "pdf_hash": "c47a4353873a1fe3e4278cff0ff38d2ed90cca7c", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46782", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "sha1": "624006187c1dc2827c60fa5c12d22252bc6599da", "year": 2023 }
pes2o/s2orc
Elevating plant immunity by translational regulation of a rice WRKY transcription factor Summary Plants have intricate mechanisms that tailor their defence responses to pathogens. WRKY transcription factors play a pivotal role in plant immunity by regulating various defence signalling pathways. Many WRKY genes are transcriptionally activated upon pathogen attack, but how their functions are regulated after transcription remains elusive. Here, we show that OsWRKY7 functions as a crucial positive regulator of rice basal immunity against Xanthomonas oryzae pv. oryzae (Xoo). The activity of OsWRKY7 was regulated at both translational and post‐translational levels. Two translational products of OsWRKY7 were generated by alternative initiation. The full‐length OsWRKY7 protein is normally degraded by the ubiquitin–proteasome system but was accumulated following elicitor or pathogen treatment, whereas the alternate product initiated from the downstream in‐frame start codon was stable. Both the full and alternate OsWRKY7 proteins have transcriptional activities in yeast and rice cells, and overexpression of each form enhanced resistance to Xoo infection. Furthermore, disruption of the main AUG in rice increased the endogenous translation of the alternate stabilized form of OsWRKY7 and enhanced bacterial blight resistance. This study provides insights into the coordination of alternative translation and protein stability in the regulation of plant growth and basal defence mediated by the OsWRKY7 transcription factor, and also suggests a promising strategy to breed disease‐resistant rice by translation initiation control. Introduction Protein homeostasis is essential for cell viability.Various types of intricate mechanisms are coordinated to maintain the required amount and diversity of proteins in rapid response to environmental changes.During translation initiation, the selection of the AUG initiation codon is controlled by the scanning model conserved in eukaryotes, which is usually subject to the 'first-AUG rule' (Hinnebusch, 2014;Kozak, 2002).However, alternative translation can sometimes be initiated at downstream AUG codons by context-dependent leaky scanning and re-initiation (Kozak, 2002).The flexibility of translation initiation control is versatile in determining both the efficiency and composition of protein translation (Meijer and Thomas, 2002). In parallel to translational regulation, protein homeostasis is also sustained by the degradation pathways (Beese et al., 2020;Ciechanover, 2006;Hershko and Ciechanover, 1998;Varshavsky, 2019), which are crucial for timely disposal of unwanted proteins.In eukaryotic cells, the selective degradation of many short-lived proteins is carried out through the ubiquitin proteasome system (UPS) (Hershko and Ciechanover, 1998;Vierstra, 2009).UPSmediated proteolysis regulates almost all of the intracellular processes of plant biology (Vierstra, 2009), and the importance of this pathway in plant-pathogen interactions has been increasingly highlighted (Dielen et al., 2010). Plants maintain a dynamic balance between growth and defence in the face of continual challenges from a range of pathogens.Defence proteins are therefore under tight control to minimize the unnecessary fitness penalties associated with continuous activation of the defence response.The defence induction involves the recognition of microbe/damage-associated molecular patterns (M/DAMPs) by host pattern-recognizing receptors (PRRs) leading to pattern-triggered immunity (PTI) or basal immunity in plants (Boller and Felix, 2009).The second type of defence is mounted by the detection of pathogenderived effectors by intracellular nucleotide-binding and leucinerich repeat (NLR) immune receptors resulting in effectortriggered immunity (ETI) (Ara ujo et al., 2019;Maekawa et al., 2011).Both PRR and NLR immune receptors are regulated by the plant UPS.In Arabidopsis, the FLAGELLIN receptor FLS2 (FLAGELLIN-SENSING 2) is polyubiquitinated by PUB12/13 (Plant U-Box 12/13) for FLAGELLIN-induced turnover, thus attenuating immune signalling (Lu et al., 2011).Overaccumulation of SNC1, a Toll-interleukin 1 receptor (TIR)-type NLR, leads to constitutive defence responses and consequent dwarfism (Zhang et al., 2003).The stability of SNC1 protein is controlled by the F-box protein CPR1 for ubiquitination and degradation (Cheng et al., 2011).The Arabidopsis NPR1 (nonexpresser of PR genes 1) protein is a master immune regulator of systemic acquired resistance (SAR) (Fu and Dong, 2013).AtNPR1 is constantly degraded in the nucleus by the 26S proteasome which has dual roles in both preventing and stimulating gene transcription during SAR induction (Spoel et al., 2009).Overexpression of AtNPR1 in rice enhanced disease resistance to multiple pathogens but had detrimental effects on plant growth (Fitzgerald et al., 2004;Quilis et al., 2008).Similarly, OsNPR1, one of the AtNPR1 orthologues in rice, is also regulated by ubiquitin-mediated degradation through interaction with the Cullin 3 E3 ligase component (OsCUL3a), and accumulation of OsNPR1 in the oscul3a mutant causes cell death (Liu et al., 2017).Arabidopsis TBF1, a major molecular switch for growth-to-defence transition, is tightly regulated at both the transcriptional and translation levels.Translation of TBF1 is normally suppressed by two uORFs within the 5 0 leader sequence but promoted upon immune induction (Pajerowska-Mukhtar et al., 2012).This unique regulatory mechanism, uORF-mediated translation inhibition, was successfully used to engineer diseaseresistant plants without fitness costs (Xu et al., 2017). The WRKY gene family is a large group of transcription factors that play important roles in regulation of defence responses in plants (Pandey and Somssich, 2009).WRKY proteins participate in transcriptional reprogramming by binding to W-box elements in target promoters during a variety of immune responses including PTI, ETI, and SAR (Eulgem, 2005;Eulgem and Somssich, 2007;Maleck et al., 2000).Many recent studies have explored in detail the roles and signalling of WRKYs in regulation of stress responses (Chen et al., 2019;Phukan et al., 2016;Wani et al., 2021), but there is very limited knowledge of the mechanisms by which plants dynamically regulate WRKY protein homeostasis to adapt to their environment (Phukan et al., 2016).It has been shown that OsWRKY45, a pivotal regulator in SA/BTHinduced disease resistance to both fungal blast (Shimono et al., 2007) and bacterial leaf blight (Shimono et al., 2012) in rice, is degraded in the nucleus through the ubiquitin-proteasome system to prevent spurious defence activation in the absence of pathogen attack (Matsushita et al., 2013).AtWRKY53, which positively regulates leaf senescence, is targeted by the HECT domain E3 ubiquitin ligase UPL5 for its polyubiquitination and degradation, to ensure correct timing of senescence induction (Miao and Zentgraf, 2010). Prior studies have established that OsWRKY7 expression can be triggered by pathogen stress (Ryu et al., 2006) and that its overexpression confers resistance to blast fungus (Tun et al., 2023).In this study, we conducted a comprehensive functional characterization of OsWRKY7, demonstrating its positive role in mediating basal immunity against the bacterial pathogen Xoo.In addition to transcriptional regulation, OsWRKY7 was also tightly regulated at the protein level.Alternative translation from both the main open reading frame (mORF) and downstream in-frame ORF (diORF) of OsWRKY7 generated two isoforms with different protein stabilities.The full-length OsWRKY7 was polyubiquitinated and constitutively degraded through the 26S proteasome pathway.Treatment with the bacterial elicitor Flg22 and Xoo increased the protein level of OsWRKY7.The domain essential for degradation was located at the N-terminus and was different from the domains responsible for transcriptional activation and subcellular localization.The alternative translated protein lacked the degradation region and was therefore stable and functional.Overexpression of the full-length or the short stable isoform enhanced bacterial blight resistance in rice.Similar to the upstream open reading frame (uORF), the mORF of OsWRKY7 represses the translation of the diORF.Translation disrupting of the mORF by genome editing results in increased protein expression of the diORF and enhanced disease resistance to Xoo through increased PR transcript accumulation and ROS production.In addition, we were interested to find that proteasomal degradation and alternative translation were also features of several WRKY genes in the same subclade as OsWRKY7.Our results suggest that the production of appropriate amounts of OsWRKY7 protein is essential for normal growth and effective basal defence.Translational regulation could be explored as a route to optimize the production of defence proteins for breeding of disease-resistant crops with less fitness cost. OsWRKY7 is a positive regulator of rice basal defence against bacterial blight In an earlier study of the expression of the WRKY gene superfamily in rice, OsWRKY7 increased rapidly during an incompatible interaction between rice and the bacterial blight pathogen Xanthomonas oryzae pv.oryzae (Xoo) (Ryu et al., 2006).To confirm its possible role in the rice defence response to bacterial blight, we investigated the expression of OsWRKY7 in compatible rice varieties infected with Xoo strain PXO341 by qRT-PCR.Compared to its basal expression in H 2 O-treated japonica cultivar Nipponbare (Nip), OsWRKY7 was increased following pathogen inoculation at the time point 12, 36, and 60 h (Figure 1a).However, OsWRKY7 was not induced by PXO341 in IR24 (Figure 1b), which is a susceptible nearisogenic parent of the IRBB lines which have one or more bacterial blight resistance (Xa) genes (Huang et al., 1997).These results suggest that OsWRKY7 may be involved in the basal immune response in rice to Xoo infection.The role of OsWRKY7 in plant immunity 1035 To characterize the function of OsWRKY7 in regulation of bacterial blight resistance, we generated loss-of-function mutants of OsWRKY7 using CRISPR/Cas9 technique in the Nip background (Ma et al., 2015).Two single-guide RNAs (sgRNAa and sgRNAb) were designed within the first exon (Figure 1c; Figure S2a).Based on PCR and sequencing analysis (Figures S1a and S2b), three different mutations were identified in plants targeted by sgRNAa and sgRNAb, respectively (Figure 1d; Figure S2b), causing frameshift expression of the OsWRKY7 protein (Figures S1b and S2c).Three homozygous mutant lines at the sgRNAa target (oswrky7-Cas9-a) were selected and inoculated with Xoo at the booting stage.Compared to WT plants, the mutants had more severe disease symptoms with longer lesions at 14 dpi (Figure 1e, f and Figure S1c).Consistently, the transcript levels of the pathogenesis-related (PR) genes PR1a, PR1b, PR5, and PR10a were decreased in the mutant plants and their responses to Xoo infection were greatly impaired (Figure 1f).In addition, the oswrky7-Cas9-b-mutant lines were also more susceptible than the controls to Xoo (Figure S2d).These results indicate that OsWRKY7 plays a positive role in basal resistance against bacterial blight. OsWRKY7 protein undergoes alternative initiation from a downstream in-frame ORF (diORF) To further investigate the role of OsWRKY7 in basal defence, we constructed the 35S::OsWRKY7-FLAG vector and transfected rice protoplasts for protein expression analysis.Surprisingly, two close protein bands were detected in immunoblots, and the upper band was significantly increased when incubated with MG132, a 26S proteasome inhibitor (Figure 2a).Since many WRKY proteins have been reported to be phosphorylated by specific protein kinases for the regulation of plant immunity and stress adaptation (Chen et al., 2019), the potential phosphorylation of OsWRKY7 was examined via k-phosphatase (k-PPase) treatment.Notably, the two bands were insensitive to k-PPase treatment (Figure 2b), indicating that the upper band did not correspond to phosphorylated OsWRKY7-FLAG.By comparing with the single protein expressed from the Ubi::OsWRKY7-FLAG vector, the upper band was supposed to be the full-length OsWRKY7 (Figure 2a). Since the mRNA of OsWRKY7 gene has no alternative splicing according to the analysis of the RNA-seq data in NCBI (Figure S3), the two proteins are unlikely to be regulated by different splicing events.We then noticed the full-length OsWRKY7 CDS contains a second in-frame start codon 84 bp downstream.The resulting 28 amino acid is about 2.7 kD, which closely matches the difference between the two bands.This suggests that the additional lower band might be an N-terminal-truncated protein, possibly produced by alternative translation of the diORF.To test this notion, two mutated constructs were generated to disable the main AUG by removing A (OsW7(ÀA)) and convert the second in-frame AUG to AGG (OsW7m) respectively (Figure 2c).Then, these constructs were transiently expressed in rice protoplasts and treated with MG132.Immunoblots showed that inactivation of the main initiation site led to the production of the stable short protein only (Figure 2d), which size corresponded to the protein translated from the diORF (Figure 2e), whereas mutation of the second in-frame AUG site simply resulted in the translation of the unstable full-length protein (Figure 2d).To further elucidate whether the access to the second in-frame AUG of OsWRKY7 depends on the 35S promoter, a 3-kb native promoter upstream from the main AUG was used to express the full-length OsWRKY7 gene in protoplasts.Apparently, two isoforms of OsWRKY7 were also produced with different stabilities (Figure 2f), whereas only one isoform remained when the main or second AUG was disrupted (Figure 2f).Besides, in protoplasts, two isoforms could be detected in transgenic plants regulated by the native promoter (Figures 2h and 6c) suggesting that the alternative translation of OsWRKY7 occurs in planta. Finally, we performed LC-MS/MS analysis to confirm whether the two bands were alternatively translated from OsWRKY7.In order to identify the peptide at the N-terminal of the full-length OsWRKY7, three Ser (S) residues at site 16, 28, and 43 were changed to Arg (R) for enzyme digestion.The mutated OsWRKY7 (OsWRKY7-SR) gene under control of the 35S promoter also expressed two protein bands which were obvious after MG132 treatment (Figure S4a).The upper and lower bands in MG132 or DMSO treatment were sent for LC-MS/MS analysis (Figure S4b).Peptides that targeted to the OsWRKY7-SR protein were identified from all the bands (Table S3), but only the upper band contained the N-terminal peptide after the first Met (Figure S4c), while the lower bands from both DMSO and MG132 treatment contain the peptides after the second Met (Figure S4c).Therefore, The protoplasts were treated with DMSO (À) or 20 lM MG132 (+).(e) The OsW7(ÀA) coding sequence and the second diORF were fused with 3xFLAG and transiently expressed in rice protoplasts under control of the 35S promoter.The protoplasts were treated with DMSO (À) or 20 lM MG132 (+).(f) The OsW7, OsW7m, and OsW7(ÀA) genomic sequences were transiently expressed in rice protoplasts under control of the native OsWRKY7 promoter.Protoplasts were treated with DMSO (À) or 20 lM MG132 (+).(g) Pathogen mimic treatment of protoplasts transfected with 35S::OsWRKY7-3xFLAG. 0 lM, 0.5 lM, 1 lM, 2 lM, and 5 lM Flg22 were added to the protoplasts and treated for 2 h.Total protein was extracted and detected with anti-FLAG antibody.The level of actin protein was used as an internal loading control.(h) PXO341 treatment of OsWRKY7 promoter regulated OsWRKY7-FLAG transgenic plants.Total proteins were extracted from leaves after treatment with H 2 O + 0.05% Silwet L-77 (Mock), PXO341 + 0.05% Silwet L-77 (Xoo), or without treatment (À).Signals were detected with anti-FLAG antibody.Coomassie blue-stained Rubisco large protein (RubL) was used as loading control.The black and white arrowheads indicate the full length and alternative translated OsWRKY7 proteins respectively.Relative protein abundance in (g) and (h) were calculated to the control by ImageJ. our results confirm the presence of two translational products of OsWRKY7 by dual initiation from two in-frame start codons. The full-length OsWRKY7 protein is degraded by the ubiquitin-proteasome pathway To further examine the stability of the full-length OsWRKY7 protein, we performed cell-free degradation analysis.Immunoblot analysis indicated that the recombinant GST-OsWRKY7 protein was significantly decreased after 3 h incubation (Figure 3a).The addition of MG132 partially inhibited the degradation (Figure 3a).Then, we transiently expressed Ubi:: OsWRKY7-FLAG in rice protoplasts, in which the full-length OsWRKY7 was dominantly translated (Figure 2a).The low amount of OsWRKY7 in the mock was gradually increased after prolonged MG132 treatment.In the co-treatment with the protein synthesis inhibitor CHX, the protein amount gradually decreased to the lower level and further decreased in the treatment with CHX alone (Figure 3b), whereas the level of endogenous actin protein was constant (Figure 3b).In contrast, leupeptin and E-64, the two different cysteine protease inhibitors of lysosomal degradation, did not prevent degradation of OsWRKY7 (Figure S5), suggesting that the 26S proteasome pathway is involved in the degradation of OsWRKY7.To support this idea, protein extracts from the rice protoplasts expressing Ubi::OsWRKY7-FLAG with or without 35S::Myc-Ubi were immunoprecipitated and then detected by anti-Myc and anti-ubiquitin antibodies.As shown in Figure 3c The role of OsWRKY7 in plant immunity 1037 demonstrate that OsWRKY7 is a fast-turnover protein which is degraded via the ubiquitin/26S proteasome pathway. Over-accumulation of the full-length OsWRKY7 enhances disease resistance but affects plant growth Since the full-length OsWRKY7 protein is unstable under the normal condition, we analysed the protein level after elicitor/pathogen treatment.By transient expression of both the full-length and short OsWRKY7 proteins under the 35S promoter, we found that the full-length protein was increased more than the short form after Flg22 treatment (Figure 2g), suggesting pathogen mimic treatment could stimulate the full-length protein accumulation.We also observed the same trends in transgenic plants under its native promoter after Xoo treatment (Figure 2h).These data indicated that alteration in protein levels plays an important role upon pathogen infection. Then, we transformed plants with Ubi::OsWRKY7-FLAG to overexpress the full-length protein only.Finally, we obtained two lines with overexpressed protein levels (Figure 3g).And they showed resistance to Xoo infection as the lesion lengths were shorter than the control plant with no protein expression (Figure 3e,f).On the other hand, we observed impaired growth phenotype, especially line #11 (Figure 3d).So we concluded that the full-length OsWRKY7 positively regulates rice basal defence against Xoo but represses plant growth, implying a trade-off effect between growth and defence response. Translation from the diORF stabilizes OsWRKY7 protein without eliminating the transcriptional activity or changing its subcellular localization In the MG132 and CHX time course treatment, we found that the short OsWRKY7 protein translated from the diORF was stable in all the treatments (Figure S7), suggesting that the stability of OsWRKY7 was differentially regulated by alternative translation, and the N-terminal region before the second Met (28 amino acid) is essential for degradation. To explore whether the domain required for degradation is also indispensable to drive the full transcriptional activity as reported in some cases (Matsushita et al., 2013;Muratani and Tansey, 2003), a number of deletion mutants of the OsWRKY7 protein were fused to the GAL4 DNA binding domain (BD) and tested the reporter activation in yeast.As shown in Figure 4a, the shortest Nterminal part (NT1), which lacked the entire WRKY domain, could still activate the MEL1 reporter, whereas the remaining C-terminal part (CT1) had no transactivation activity.By additional N-terminal deletions (CT2-CT4), we found that deletion of the 28 or 50 amino acids at the N-terminus (CT4 and CT3) did not affect the transcriptional activity, but a further deletion of 75 amino acids (CT2) thoroughly impaired it.These results suggest that the activation domain of OsWRKY7 is located in the 51-75 amino acid region and is different to the N-terminal region required for degradation. Subsequently, we tested the activity of the deletion proteins on the transcriptional regulation of pathogenesis-related genes in plants.The effector constructs containing the full-length OsWRKY7 or the deletion fragments (NT1, CT1, and CT4) driven by the ubiquitin promoter were co-expressed with the OsPR10apro::LUC reporter construct in rice protoplasts (Figure 4b).The results showed that both the full-length OsWRKY7 (OsW7) and the N-terminal truncated protein translated from the second AUG (CT4) could activate the reporter gene expression (Figure 4c).By contrast, the proteins consisting only the activation domain (NT1) or the WRKY DNA-binding domain (CT1) were unable to activate the OsPR10a promoter (Figure 4c).Together, these results indicate that both the activation and WRKY domains, but not the degradation domain, are involved in the OsWRKY7-mediated transcriptional regulation. We then investigated the functional relevance of the subcellular localization to the degradation of OsWRKY7.GFP fluorescences of both the full-length OsWRKY7 and the CT4 protein without the N-terminal degradation domain were predominantly detected in the nucleus but with a weak cytoplasmic signal (Figure 4d), whereas the CT1 protein containing the WRKY domain was confined to the nucleus and the NT1 without the WRKY domain was evenly distributed in both the cytoplasm and nucleus (Figure 4d).These results demonstrate that the degradation domain does not overlap with the signal peptides for nuclear and cytoplasmic localization of OsWRKY7. Overexpression of the stabilized OsWRKY7 encoded from the diORF confers enhanced resistance to bacterial blight Since the alternative translation of OsWRKY7 initiated from the diORF occurred under its native promoter (Figures 2h and 6c), and this N-terminal-truncated protein retained the full active transcriptional activity and normal cellular distribution in protoplasts (Figure 4c,d), we further investigated its potential function in the regulation of rice resistance to bacterial blight.Transgenic plants with overexpression of the diORF (OsWRKY7-diORF-OE) were generated by transformation of the 35S::OsWRKY7(ÀA)-FLAG construct.Nine independent transgenic lines were subjected to Xoo infection, and significant decreases in length of lesions were found in the three T 1 lines with high protein levels (Figure 5a,b).Consistent with the enhanced Xoo resistance, the transcript levels of the PR genes PR1a, PR1b, PR5, and PR10a were up-regulated in these overexpressing plants and the levels remained high or were greatly increased after Xoo infection (Figure 5c).These results suggest that the N-terminal-truncated OsWRKY7 protein from alternative translation positively regulates rice innate immunity to Xoo. We also noticed that hypersensitive response (HR)-specific brown lesions appeared on leaves of the OsWRKY7-diORF-OE plants when infected by Xoo at the seedling stage (Figure 5d, upper panels).We next determined the H 2 O 2 levels after Xoo infection by 3,3-diaminobenzidine (DAB) staining.In WT leaves, DAB staining was weak showing that H 2 O 2 accumulation was low after pathogen infection (Figure 5d, lower panels), whereas in the leaves of OsWRKY7-diORF-OE plants, dark DAB staining colocalized with the necrotic lesions (Figure 5d, lower panels), implying the accumulation of a large amount of H 2 O 2 .Interestingly, the H 2 O 2 level was also increased in the mock inoculated leaves (Figure S8a), but the induction pattern was different to that of Xoo infection, probably induced by wounding as the intact leaves had no staining (Figure S8b).Accordingly, ROS-producing genes like respiratory burst oxidase homologue gene OsRbohB and OsRbohE were found up-regulated in the OE plants (Figure 5c).These results suggest that overexpression of the short stable OsWRKY7 protein activates the production of ROS and ROSmediated cell death. Since high constitutive expression of the full-length OsWRKY7 is destructive to plant growth, we assessed the growth of the OsWRKY7-diORF-OE plants.The overall growth of diORF-OE plants was similar to that of the WT in terms of flowering time and panicle number (Figure S9a).However, there were decreases in most of the agronomic traits measured and the grain number per panicle was more significantly lower (Figure S9b). Translation disruption of the mORF of OsWRKY7 enhances the diORF translation and the resistance to bacterial blight As we known that uORF often suppresses the translation of the mORF, it is not clear whether the diORF translation is also overwhelmed by the mORF.To answer this question, we replaced the diORF of OsWRKY7 with Luc gene to construct the N84-Luc vector, and disrupted the first ATG of N84-Luc by removing A (N84(ÀA)-Luc) (Figure 6a).Both the vectors were expressed under the native promoter of OsWRKY7 in protoplasts.As shown in Figure 6b, the relative Luc activity of N84(ÀA)-Luc was higher The role of OsWRKY7 in plant immunity 1039 than N84-Luc, suggesting the Luc protein level was increased after preventing the translation from the main AUG.Then, we generated transgenic plants expressing the Flag tag-fused OsWRKY7 and the first AUG mutant under the native promoter, and compared the protein levels between lines with the most strongest signals in each transgenic population (Figures S10a,b).Similarly, the short protein levels in the pOsWRKY7::OsW7(ÀA)-FLAG lines were much higher than both proteins in the pOsWRKY7::OsW7-FLAG lines (Figure 6c).These data indicated that stopping translation of the full-length OsWRKY7 protein resulted in higher protein level of the short form caused by increased translation from the diORF.As a result, the disease resistance in pOsWRKY7::OsW7(ÀA)-FLAG lines were increased (Figure S10c,d). Based on the result that the short OsWRKY7 protein could be translated efficiently from the diORF in the absence of the main AUG codon under the native promoter (Figure 6b,c), it appeared probable that the functional short stable isoform could be overexpressed by eliminating the main ATG site via CRISPR/Cas9.After analysing the genomic sequence of OsWRKY7, a PAM site (CGG) was found in an optimal position 4 nt downstream from the main ATG (Figure 6d), that would The role of OsWRKY7 in plant immunity 1041 be expected to lead to Cas9 cleavage between A and T.Then, the sequence upstream of the PAM site was selected as sgRNAc for transformation (Figure 6d).After sequencing analysis (Figure S11a), two different nucleotide mutation types were identified, and both had an incomplete ATG with A deletion (Figure 6d).To test for bacterial blight resistance, two homozygous lines (oswrky7-Cas9-c) were inoculated with Xoo.At 14 dpi, both lines exhibited enhanced resistance with much shorter leaf lesion lengths than Nip WT plants (Figure 6e,f).The remaining lines were all tested for resistance to Xoo strain PXO341.Significantly shorter lesion lengths were measured on leaves of the mutant lines, but not on those of the Nip or lines without mutation (Figure S11b,c).The transcripts of the PR genes and OsRboh genes were elevated in the mutant plants and were induced in response to Xoo infection (Figure 6g).In addition, oswrky7-Cas9c mutant plants were strongly resistant to the highly virulent Xoo strain PXO99 (Figure S12a), while the oswrky7-Cas9-a mutant plants were susceptible (Figure S12b), suggesting that OsWRKY7 may mediate a broad-spectrum resistance to rice bacterial blight.In accordance with the resistant phenotype, HR-specific brown lesions appeared on leaves of oswrky7-Cas9-c-mutant plants upon Xoo infection at either the seedling or the booting stage (Figure 6i, upper panels; Figure S11d), and a high level of H 2 O 2 was detected in the infected area with necrotic lesions by DAB staining (Figure 6i, lower panels), whereas mock infection with H 2 O did not induce lesion formation or H 2 O 2 accumulation (Figure S13).Taken together, these results suggest that multiple signalling pathways involving defence gene expression and ROS production are activated in the mutant plants when the main ATG of OsWRKY7 was impaired by CRISPR/Cas9.Interestingly, mild growth trade-offs were observed in these plants grown in normal condition (Figure S14).For practical application, we eliminated the main ATG of OsWRKY7 allele in an elite rice japonica cultivar ZJ70 by the sgRNAc target.Likewise, the mutant lines were much more resistant to Xoo infection than the ZJ70 wild type in the patch field (Figure 6i). The significance of alternative translation for other OsWRKY genes phylogenetically related to OsWRKY7 Since alternative translation has rarely been reported in plants, we questioned whether the case of OsWRKY7 is unique among the rice WRKY gene family.After a literature search, we found that overexpression of OsWRKY67-Myc in transgenic plants driven by the 35S promoter produced two fusion bands, which were retained after lambda phosphatase treatment (Vo et al., 2018).Phylogenetic analysis of the rice WRKY gene family revealed that OsWRKY67 clusters closely with OsWRKY7 in the same group (II) (Xie et al., 2005), and here we show that OsWRKY67 and two other closely related homologues OsWRKY10 and OsWRKY26 also have dual initiation from both the main and diAUG when driven by the 35S promoter (Figure S15a).Disruption of the main AUG by removing 'A' led to the expression of the second diORF of OsWRKY10 and OsWRKY67 or the third diORF of OsWRKY26 (Figure S15c).Interestingly, like OsWRKY7, the full-length proteins of OsWRKY10 and OsWRKY26 were both unstable and accumulated after MG132 treatment (Figure S15a), while the proteins translated from their diORFs were consistently abundant whether treated with MG132 or not (Figure S15a,c), suggesting the existence of a degradation domain at the N-terminal region, although their N-terminal amino acid sequences are not well conserved (Figure S16).On the other hand, WRKY genes like OsWRKY3, OsWRKY5, and OsWRKY14 which are phylogenetically distant from OsWRKY7 have normal translation controlled by the same 35S promoter, even though they all contain diORFs (Figure S15b).These results demonstrate the significance of coding sequence context for alternative translation. Discussion OsWRKY7 is a positive regulator in rice basal defence against bacterial blight but a negative regulator on growth An important step towards the understanding the regulation of the plant defence system is to identify transcriptional regulators responsive to pathogen attack (Liu et al., 2005).Through a comprehensive expression analysis of the WRKY gene superfamily in rice infected by pathogens, 12 genes were found differentially regulated by an incompatible bacterial blight pathogen (Ryu et al., 2006).Among these genes, OsWRKY11, OsWRKY30, OsWRKY67, and OsWRKY10 have been reported to play positive roles in basal or Xa-gene-mediated resistance in rice (Choi et al., 2020;Han et al., 2013;Lee et al., 2018;Vo et al., 2018).Here, we show that OsWRKY7 is another important regulator in establishing the basal resistance to bacterial blight through both transcriptional and post-translational regulation.In the compatible japonica cultivar Nipponbare, the OsWRKY7 transcript levels increased after inoculation with Xoo (Figure 1a), but there was no induction in a highly susceptible cultivar IR24 (Figure 1b), knockout of OsWRKY7 in the Nip background increased susceptibility to both PXO341 and PXO99 (Figure 1e,f; Figure S13b) and impaired the activation of PR genes (Figure 1g), suggesting the existence of a basal defence response in Nipponbare mediated by OsWRKY7.Interestingly, two The role of OsWRKY7 in plant immunity 1043 OsWRKY7 proteins were produced by alternative translation with different stabilities (Figure 2).Plants overexpressing the unstable full-length OsWRKY7 protein enhanced disease resistance but inhibited plant growth (Figure 3d-f).In fact, it is not easy to obtain transgenic plants overexpressing the full-length OsWRKY7 under the Ubi promoter, which suggested that the high level of full-length protein may have a detrimental effect on growth and developmental processes.Increasing the level of the alternative protein also elevated the resistance to Xoo; however, obvious growth inhibition was observed when controlled by 35S promoter (Figure S9).These results revealed the importance of the OsWRKY7 protein homeostasis in balancing growth and defence. Differential usage of two in-frame translational start codons regulates OsWRKY7 protein stability Generally, translation initiation in eukaryotes is based on the 'first-AUG rule' (Kozak, 1987(Kozak, , 2002)), but sometimes this rule is abrogated, and different proteins can be produced from a single transcript, for example, by dual initiation at both the first and downstream AUG codons (Slusher et al., 1991;Song et al., 2009).Mechanisms accounting for this escape have been elucidated extensively at the molecular level (Gray and Wickens, 1998;Kozak, 1994), but the biological functions behind these mechanisms of regulation are still largely unknown.In Arabidopsis, it has been reported that targeting of THI1 protein to both mitochondria and chloroplasts is regulated by the alternative use of two in-frame AUG codons (Chabregas et al., 2003).Here, we show that two protein isoforms of OsWRKY7 translated from two in-frame AUG codons are similar in subcellular localization but differ in protein stability (Figures 3 and 4; Figure S7).In the study of a human opioid receptor OPRM1, a short-lived isoform was generated by initiation at an alternative in-frame upstream AUG site (uAUG) in the 5 0 -untranslated region, which was subsequently degraded by the ubiquitin-proteasome pathway through the lysine residues within the extended N-terminus (Song et al., 2009).Although we showed that OsWRKY7 was ubiquitinated (Figure 3c; Figure S6), there are no lysine residues in the N-terminal degradation domain, and we therefore hypothesize that OsWRKY7 may undergo lysine-independent ubiquitination (McClellan et al., 2019).It will be interesting to determine the mechanism for selective degradation of OsWRKY7 proteins from alternative translation. Degradation of OsWRKY7 protein is dependent on the proteasome-mediated pathway Proteasome-mediated degradation of defence proteins is essential for optimal plant growth and development by preventing the untimely activation of defence responses under normal conditions.Many immune regulators are targets of the ubiquitinproteasome system, but there is limited information about proteasome-mediated degradation of the WRKY transcription factors despite the large size of this gene family.It has been reported that OsWRKY45, one of the central regulators of the SA/BTH-induced defence signalling pathway in rice, is regulated by UPS-dependent degradation (Matsushita et al., 2013).Other WRKY proteins like OsWRKY6 and OsWRKY11 are also possibly degraded through the ubiquitin/26S proteasome pathway (Choi et al., 2015;Lee et al., 2018).In this study, we demonstrate that OsWRKY7 protein stability is also controlled by the ubiquitinationmediated proteasome pathway (Figure 3c; Figure S6).However, the mode of degradation may differ between these proteins.For example, the degradation domain of OsWRKY7 is located at the N-terminal region, and is separate from both the nearby activation domain and the C-terminal WRKY DNA-binding domain, whereas, in OsWRKY45, the domains required for degradation and transcriptional activity closely overlap in the C-terminal region, and as a result, deletion of the degradation domain also compromises its strong blast resistance (Matsushita et al., 2013).In contrast, the truncated OsWRKY7 protein without the degradation domain had increased disease resistance to bacterial blight (Figure 5a,b).In addition, the nuclear localization of OsWRKY45 was necessary for its degradation (Matsushita et al., 2013), but the degradation of OsWRKY7 can occur outside of the nuclei as demonstrated by the cytoplasmic localization of the unstable CT1 fragment (Figure 4d). The possible mechanism for alternative translation of OsWRKY7 and closely related genes In eukaryotes, two independently initiated proteins from one mRNA are generally produced by context-dependent leaky scanning (Kozak, 1991(Kozak, , 1994;;Lin et al., 1993).The optimal sequence for translation initiation in vertebrates is GCCRCCAUGG (R at À3, is A or G; the AUG initiation codon is underlined) and is known as the Kozak motif (Kozak, 1986).Positions -3R (most often A) and +4G are the most conserved and crucial nucleotides (Kozak, 1981(Kozak, , 1984(Kozak, , 1986)).Although the Kozak motif varies among different eukaryotes, the -3R and +4G are conserved in species of green plants (Gupta et al., 2016;Hernandez et al., 2019;Rangan et al., 2008) and confer the best translational efficiency tested experimentally in many plant species including Oryza sativa (Sugio et al., 2010).We therefore analysed the native sequence context flanking the AUG initiation codon in OsWRKY7 and the related WRKY genes tested in this study (Table S4).These sequences were categorized based on the presence of the two crucial nucleotides at À3 and +4 in their Kozak motifs (Meijer and Thomas, 2002).It is interesting to find that most of the WRKY genes have strong (both of the key nucleotides are present) or adequate (only one of the key nucleotides is present) Kozak motifs at their native initiation site and only OsWRKY26 has a weak Kozak motif (lacking both key nucleotides).So the alternative translation observed in this study cannot be fully explained by context-dependent leaky scanning.Other sequence features such as the 5 0 -untranslated leading sequence and downstream secondary structure may influence the processes of translation initiation (Kozak, 1990(Kozak, , 1991(Kozak, , 1994)).Indeed, we found that the full-length OsWRKY7 sequence under control of the Ubi promoter did not produce two isoforms (Figures 2a and 3b).The Ubi promoter contains 899 bp of the promoter sequence, 83 bp of 5 0 untranslated exon, and 1010 bp of first intron sequence from the maize ubiquitin (Ubi-1) gene.(Christensen and Quail, 1996).In most cases, 5 0untranslated region (5 0 -UTR) that enable efficient translation are short, have a low GC content, are relatively unstructured and do not contain uAUG codons (Kochetov et al., 1998).5 0 -UTR in the Ubi promoter may well consistent with these features for stringent translation.Many studies also showed that intron present in the 5 0 -UTR strongly enhanced transgene expression (Chung et al., 2006;McElroy et al., 1990) by multiple mechanisms including translational control (Laxa, 2017;Rose, 2019).Besides the effect of the promoter, we found that not all WRKY genes tested would produce two protein isoforms under the same 35S promoter (Figure S15b), suggesting that the features on the coding sequence may also affect the translation initiation.Further studies are necessary to determine the underlying mechanism for the dual translation initiation of OsWRKY7 and other closely related genes. A proposed working model for OsWRKY7 in the activation of basal defence against bacterial blight When plants are subjected to pathogen attack, it would be an efficient coping strategy to regulate defence responses both at the transcriptional level and at the protein level.We find that OsWRKY7 is such a kind of disease resistance gene with multiple layers of regulation.In addition to its transcriptional induction by the bacterial pathogen Xoo (Figure 1a), the translated OsWRKY7 protein is degraded by UPS-mediated pathway (Figure 3; Figure S6) but induced by Flg22 and Xoo pathogen treatment (Figure 3g,h).In addition, the stability of OsWRKY7 is also modulated by alternative translation, which produced a functional N-terminal truncated isoform that resists UPS-mediated degradation (Figures 2 and S7).Here, we propose a model of OsWRKY7 in the regulation of basal defence responses (Figure 7).In uninfected rice plants, the full-length OsWRKY7 proteins are degraded by the UPS system to minimize negative effect on plant growth.Meanwhile, short stable OsWRKY7 isoforms are generated by alternative translation with less efficiency to provide a constant basal level of defence.In the genome-edited plants where the main AUG is disrupted, removal of the suppression on the diORF translation enhances the endogenous expression of the short stable OsWRKY7 protein, thus the plant basal defence against bacterial blight is promoted by increased defence-related gene expression and ROS accumulation, along with a partial inhibition on growth.This study provides insights into the mechanisms of alternative translation and protein turnover in the regulation of OsWRKY7-mediated basal defence in plants, and also provides a practical strategy to breed disease-resistant rice by translational regulation of the OsWRKY7 alleles via genome editing at the main ATG. Experimental procedures Plant materials and growth conditions Rice (Oryza sativa) plants used to determine the expression of OsWRKY7 were the japonica variety Nipponbare (Nip) and indica variety IR24.Transgenic overexpression and genome-edited plants of OsWRKY7 were generated in the Nipponbare or ZJ70.Rice seeds were germinated in petri dishes with water at 37 °C and hydroponically cultured in the rice nutrient solution (Yoshida et al., 1976) in a growth chamber with a 14-h light (30 °C)/10-h dark (28 °C) photoperiod.Indica variety 9311 was used for transient gene expression in protoplasts and cultured on 1/2 MS medium for 7-10 days in the same growth chamber. Vector construction and plant transformation The detailed vector construction information was listed in supporting information of method.Agrobacterium-mediated transformation was conducted as previously described (Hiei and Komari, 2008). Bacterial blight inoculation Two Xanthomonas oryzae pv.oryzae (Xoo) races from the Philippines (PXO341 and PXO99) were used.Strains were cultured in a in modified Wakimoto's medium at 28 °C to the optical density of OD 600 = 0.6-0.8.Fully expanded rice leaves were clipped about 1-2 cm from the tip by scissors dipped with bacterial suspension or sterile deionized water (Kauffman et al., 1973).To analyse OsWRKY7 gene expression in response to Xoo, leaves of 3-week-old Nipponbare and IR24 were inoculated with PXO341 or H 2 O for 12, 36, and 60 h, and 2 cm leaf tissues below the cutting edge were collected at the indicated time points.To analyse PR gene expression, 3-week-old seedlings were inoculated with PXO341 or H 2 O for 48 h.For evaluation of bacterial blight resistance, plants at the booting stage (70 days after sowing) were inoculated with PXO341 or PXO99, the lesion length was measured 2 weeks after inoculation.To analyse protein levels upon Xoo treatment, Plantlets were sprayed with PXO341 suspension containing 0.05% [v/v] Silwet L-77 for 6 h.H 2 O + Silwet L-77 used as mock control. Recombinant protein purification and cell-free degradation assay GST-OsWRKY7 recombinant protein was induced in E. coli BL21 (DE3) and purified by glutathione affinity resin column (Pierce, 16100).The cell-free degradation assays were performed as previously described (Wang et al., 2009).Briefly, total proteins from 100 mg WT leaves were extracted in 1 mL degradation buffer.Then, the supernatants were collected by centrifugation at 13 000 rpm for 10 min at 4 °C.A total quantity of 100 ng purified GST and GST-OsWRKY7 were incubated in 100 lL protein extracts without (À) or with (+) 100 lM MG132 at 28 °C for 0, 0.5, 1, 2, and 3 h.The protein abundance was detected by The role of OsWRKY7 in plant immunity 1045 an anti-GST antibody (1:10 000, Abmart, M20007), followed by a secondary antibody (1:5000, Abbkine, A21010).Coomassie blue-stained Rubisco large protein (RubL) was used as a loading control. Transient gene expression in rice protoplasts Rice protoplasts preparation and transformation were conducted according to the method of Zhang et al., 2011.The sheaths and stems of 30-40 seedlings were cut into 0.5 mm strips and incubated immediately in 10 mL enzyme solution for 4-5 h in the dark at 25 °C with gentle shaking (60 rpm).The protoplasts were purified and resuspended in 1-2 mL MMG solution at a concentration of 5 9 10 6 cells mL À1 .Plasmids (5-10 lg) prepared by an EndoFree Plasmid Midi Kit (CWBIO, Beijing, China) were used for transfection.For the protein degradation assay, DMSO (mock) or 20 lM MG132 (Millipore), H 2 O (0 lM), 1, 5, and 25 lM E-64 (Sigma-Aldrich) or Leupeptin (Sigma-Aldrich) were added to the protoplasts 12 or 4 h after transfection and treated for 4 or 12 h.For the time course treatment, 20 lM MG132 and/or 50 lM cycloheximide (CHX, Sigma-Aldrich) were added to the protoplasts 12 h after transfection and treated for 2 h, 4 h, and 6 h.For pathogen mimic treatment, H 2 O (0 lM), 0.5, 1, 2, and 5 lM Flg22 (Sangon Biotech) were added to the protoplasts 16 h after transfection and treated for 2 h.Transfection experiments were repeated at least three times. Luciferase transactivation assays Seven micrograms of OsPR10a::LUC reporter plasmid generated previously (Ersong et al., 2021) was transfected with 3 lg effector plasmid.Five micrograms of 0800-Luc, N84-Luc, and N84(ÀA)-Luc vectors were transfected alone.Firefly luciferase (LUC) and renilla luciferase (REN) activities were measured using the Dual-Luciferase Reporter Assay System (Promega).The luminescence signals were detected using the microplate reader SpectraMaxi3.Relative LUC activity was calculated by normalizing the value of LUC to REN in each sample.Triple transfections were carried out for each reporter/effector combination in one experiment, and two experiments were performed giving comparable results. DAB staining H 2 O 2 accumulation was detected using the 3,3 0 -diaminobenzidine (DAB; BBI, Shanghai, China) uptake method as described previously with modification (Thordal- Christensen et al., 1997).The second leaves from top of 25-day-old seedlings were cut 5 days after inoculation with PXO341 or H 2 O 2 , and immediately submerged in DAB solution (1 mg mL À1 DAB in 0.1 M Tris-HCl, and pH = 3.8) at 25 °C for 10 h in the light.Leaves were then de-stained in bleaching solution (acetic acid:ethanol = 1:1) at 37 °C for 12 h until all of the chlorophyll had been removed.The decolorized leaves were photographed under a Nikon SMZ1000 stereomicroscope equipped with a Nikon digital camera DS-Fi1. Statistical analysis Statistical analysis was performed by two-tailed Student's t-test and one-way ANOVA with Tukey's or Newman-Keuls multiple comparison test.S1 Sequences of primers used for vector construction.Table S2.Sequences of primers used for RT-PCR.Table S3 LC-MS/MS identified peptides in the proteins expressed from the mutated OsWRKY7-SR gene controlled by 35S promoter.Table S4.Comparison of the AUG initiation codon context (À6 to +4) in the OsWRKY genes tested in this study.Data S1 Supporting Information. Figure 1 Figure 1 OsWRKY7 loss of function mutant plants have increased susceptibility to Xoo infection.(a) Expression profile of OsWRKY7 in Nipponbare (Nip) inoculated with Xoo strain PXO341 or H 2 O for the time point indicated.(b) Expression profile of OsWRKY7 in IR24 inoculated with Xoo strain PXO341 or H 2 O. (c) Gene structure of OsWRKY7.The WRKY domain resides in the second and third exons are shown in yellow.The sgRNAa target sequence (blue letters) and PAM site (red letters) are shown at the end of the first exon.(d) Representative types of mutant alleles identified at the sgRNAa target.The wild-type sequence is shown at the top and mutant lines (Cas9-a-2, 5, 6) with three types of mutation are shown below.The numbers of deletions or insertions are shown in brackets.The red letters indicate the PAM site.(e) Representative leaves with typical lesions at 14 dpi were shown from the WT and three homozygous Cas9-a lines.Scale bar: 1 cm.(f) Lesion lengths on the leaves of the WT and Cas9-a lines inoculated in (e) at 14 dpi.Bars represent mean lesion lengthsAESD (n ≥ 3).Significant differences between WT and mutation lines are indicated as **P < 0.01, ***P < 0.001 by Student's t-test.(g) qRT-PCR analysis of OsPR1a, OsPR1b, OsPR5, and OsPR10a expression in WT and Cas9-a mutation lines challenged with PXO341 or H 2 O for 48 h.Data are shown as means AE SD (n = 3) of the fold change relative to the level of WT with H 2 O. Statistically significant differences to the WT control are indicated as *P < 0.05, **P < 0.01, and ***P < 0.001 by Student's t-test. ª 2023 The Authors.Plant Biotechnology Journal published by Society for Experimental Biology and The Association of Applied Biologists and John Wiley & Sons Ltd., 22, 1033-1048 Figure 2 Figure 2 Generation of two OsWRKY7 protein isoforms by alternative translation initiation at two in-frame AUG codons.(a) Comparison of the 3 9 FLAG fused OsWRKY7 proteins expressed by the 35S or Ubiquitin promoter.Protoplasts were treated with DMSO (À) or 20 lM MG132 (+) for 4 h.Total protein was detected with anti-FLAG antibody.The level of actin protein was used as an internal loading control.(b) 35S::OsWRKY7-3 9 FLAG was transiently expressed in rice protoplasts and treated with k-PPase for different times as indicated.A sample in k-PPase buffer without phosphatase was used as mock control.(c) Schematic diagrams showing the coding sequence of OsWRKY7 (OsW7) and derived sequences with modifications.The main and the second in-frame AUG codons are indicated.The green arrowheads indicate translation of the corresponding proteins of 221 and 193 amino acids respectively.The OsW7m represents the coding sequence with a point mutation of the second AUG to AGG, and the OsW7(ÀA) represents a deletion of the first AUG to UG.The red crosses indicate the disabled translation of the corresponding proteins of 221 or 193 amino acid respectively.(d) The three types of coding sequences indicated in (b) were fused with 3 9 FLAG and transiently expressed in rice protoplasts under control of the 35S promoter.The protoplasts were treated with DMSO (À) or 20 lM MG132 (+).(e) The OsW7(ÀA) coding sequence and the second diORF were fused with 3xFLAG and transiently expressed in rice protoplasts under control of the 35S promoter.The protoplasts were treated with DMSO (À) or 20 lM MG132 (+).(f) The OsW7, OsW7m, and OsW7(ÀA) genomic sequences were transiently expressed in rice protoplasts under control of the native OsWRKY7 promoter.Protoplasts were treated with DMSO (À) or 20 lM MG132 (+).(g) Pathogen mimic treatment of protoplasts transfected with 35S::OsWRKY7-3xFLAG. 0 lM, 0.5 lM, 1 lM, 2 lM, and 5 lM Flg22 were added to the protoplasts and treated for 2 h.Total protein was extracted and detected with anti-FLAG antibody.The level of actin protein was used as an internal loading control.(h) PXO341 treatment of OsWRKY7 promoter regulated OsWRKY7-FLAG transgenic plants.Total proteins were extracted from leaves after treatment with H 2 O + 0.05% Silwet L-77 (Mock), PXO341 + 0.05% Silwet L-77 (Xoo), or without treatment (À).Signals were detected with anti-FLAG antibody.Coomassie blue-stained Rubisco large protein (RubL) was used as loading control.The black and white arrowheads indicate the full length and alternative translated OsWRKY7 proteins respectively.Relative protein abundance in (g) and (h) were calculated to the control by ImageJ. and Figure S6, the level of polyubiquitinated OsWRKY7-FLAG, indicated as a smearing ladder of bands, was greatly increased in protoplasts treated with MG132 compared to the mock treatment.These results ª 2023 The Authors.Plant Biotechnology Journal published by Society for Experimental Biology and The Association of Applied Biologists and John Wiley & Sons Ltd., 22, 1033-1048 Figure 3 Figure3The full-length OsWRKY7 protein is a positive regulator against Xoo and is degraded by the proteasome-mediated pathway both in vivo and in vitro.(a) Cell-free degradation assay of GST fused OsWRKY7 in wild-type plant extracts treated without (À) or with (+) MG132 (100 lM) for indicated time.GST protein was used as non-degraded control.RubL was used as loading control.The protein levels were calculated by ImageJ and the relative abundance at 0 h was set to 1.00.(b) Ubi::OsWRKY7-3 9 FLAG was transiently expressed in rice protoplasts and treated with 20 lM MG132 and/or 50 lM CHX for 2 h, 4 h, and 6 h.Total protein was extracted and detected with anti-FLAG and anti-actin antibodies.The protein ratio of OsWRKY7-FLAG to actin was calculated by ImageJ and the relative abundance in mock treatment was set to 1.00.(c) Ubiquitination of the full-length OsWRKY7 in vivo.Ubi::OsWRKY7-FLAG was transiently co-expressed with or without 35S::Myc-Ubi in rice protoplasts treated with or without MG132 (50 lM).Polyubiquitinated OsWRKY7-FLAG (indicated as nUbi) was detected using anti-Myc antibody following immunoprecipitation with anti-FLAG magnetic beads.The levels of immunoprecipitated (IP) OsWRKY7 proteins were detected with anti-FLAG antibody.(d) Growth phenotype of Ubi::OsWRKY7-FLAG transgenic lines in filling stage without Xoo infection.(e) Lesions on leaves of transgenic lines after PXO341 infection (17 dpi).Scale bar: 1 cm.(f) Lesion lengths on the leaves of transgenic lines at 17 dpi.Bars represent mean lesion lengthsAESD (n ≥ 3).Significant differences between line #17 and other two lines are indicated as ***P < 0.001 by Student's t-test.(g) OsWRKY7-FLAG expression in transgenic lines.Total proteins were extracted from leaves and detected with anti-FLAG antibody.RubL was used as loading control. ª 2023 The Authors.Plant Biotechnology Journal published by Society for Experimental Biology and The Association of Applied Biologists and John Wiley & Sons Ltd., 22, 1033-1048 Figure 4 Figure 4 The region responsible for OsWRKY7 degradation is different to the transactivation domain and irrelevant to its localization.(a) The transcriptional activity of different N-terminal or C-terminal truncated proteins in yeast cells.The number of deleted amino acids is given in brackets for each truncated fragment.Single lines represent the removed fragments (b) Schematic diagram of the effector and reporter vectors used for dual luciferase reporter assay.The reporter vector contained a 2523 bp OsPR10a promoter before the firefly luciferase (LUC).The effector vector contained the full-length OsWRKY7 (OsW7) protein and NT1, CT3, and CT4 truncations expressed under the Ubi promoter.(c) Transactivation assay of the OsPR10a-LUC reporter by the effectors indicated in (b).Relative LUC activities were expressed by normalizing the LUC signals to the value of REN.Data are means AE SD (n = 3).The empty effector vector (EV) serves as the control.Significant differences are indicated as *P < 0.05 by Student's t-test.(d) Subcellular localization of the full-length OsWRKY7 (OsW7) protein and its truncations (NT1, CT3, and CT4) which were fused with GFP in rice protoplasts.A H 2 B-mCherry vector was co-transformed together to indicate the nucleus.Scale bars: 4 lM. Figure 5 Figure 5 Overexpression of the stabilized OsWRKY7 from the diORF confers enhanced resistance to Xoo infection.(a) Lesion lengths on leaves of OsWRKY7-diORF-OE transgenic lines and wild type (WT) after Xoo (PXO341) infection for 14 days.Segregated T 1 plants with transgene (grey bar) or without transgene (black bar) were identified by PCR analysis before inoculation.Bars represent mean lesion lengthsAESD (n ≥ 3).Statistical analysis was performed by Student's t-test (*P < 0.05, **P < 0.01).OsWRKY7-diORF protein was detected with anti-FLAG antibody.RubL was used as loading control.(b) Representative leaves with lesions from three T 1 diORF-OE lines and WT after PXO341 infection for 14 days.Segregated T 1 plants with (+) or without (À) transgene were determined by PCR amplification.Scale bar: 1 cm.(c) qRT-PCR analysis of OsPR1a, OsPR1b, OsPR5, OsPR10a, OsRbohB, and OsRbohE expression in WT and diORF-OE lines challenged with Xoo or H 2 O for 48 h.Data are shown as means AE SD (n = 3) of the fold change relative to the levels in WT with H 2 O after normalization to the OsActin gene.Significant differences to the WT controls are indicated as *P < 0.05, **P < 0.01, and ***P < 0.001 by Student's t-test.(d) Estimation of H 2 O 2 levels in leaves of WT and diORF-OE lines.Leaves inoculated with Xoo at 5 dpi (upper panels) were stained with 3,3 0 -diaminobenzidine (DAB) and photographed after decolouring (lower panels).Scale bars: 1 mm.Two leaves of each line are shown. ª 2023 The Authors.Plant Biotechnology Journal published by Society for Experimental Biology and The Association of Applied Biologists and John Wiley & Sons Ltd., 22, 1033-1048 Figure 6 Figure 6 Translation disrupting of the mORF results in increased protein expression of the diORF and enhanced disease resistance to Xoo.(a) Schematic diagram of the reporter vectors used for dual luciferase reporter assay in (b).OsWRKY7 promoter and the N terminal 84 bp were infused with the luciferase reporter gene (Luc).The N84-(ÀA)-Luc vector was similar to N84-Luc except the A of main ATG was deleted.(b) Relative LUC activity of vectors in (a) after transient expressed in rice protoplasts.(c) Compare the protein levels in plants transformed with OsWRKY7(ÀA)-FLAG and OsWRKY7-FLAG under the native promoter.Two lines with highest protein expression in each transgenic population were used.Total proteins were extracted from leaves and detected with anti-FLAG antibody.RubL was used as loading control.(d) The sgRNAc target sequence (blue letters) and the PAM site (red letters) shown include the main ATG (underlined).Representative types of mutant alleles were identified at the sgRNAc target.The WT sequence is shown at the top and mutant lines (Cas9-c-19, 29) with different mutations shown below.The numbers of deletions are shown in brackets for each type.(e) Leaves with typical lesions from the Nip and two homozygous Cas9-c mutant lines after PXO341 infection (14 dpi).Scale bar: 2 cm.(f) Lesion lengths on leaves of Nip and mutant lines inoculated in (e) at 14 dpi.Bars represent mean lesion lengthsAESD (n ≥ 3).Statistical analyses were performed by Student's t-test between mutant lines and WT (***P < 0.001).(g) qRT-PCR analysis of OsPR1a, OsPR1b, OsPR5, OsPR10a, OsRbohB, and expression in Nip and Cas9c mutation lines challenged with Xoo or H 2 O for 48 h.Data are shown as means AE SD (n = 3) as the fold changes relative to the level in the Nip with H 2 O after normalization to the OsActin gene.Significant differences to the Nip controls are indicated as *P < 0.05, **P < 0.01, and ***P < 0.001 by Student's t-test.(h) H 2 O 2 levels in leaves of Cas9-c mutation lines and Nip.Leaves inoculated with Xoo at 5 dpi (upper panels).The same Xoo-infected leaves were stained with 3,3 0 -diaminobenzidine (DAB) and photographed after decolouring (lower panels).Scale bars: 1 mm.(i) Xoo-resistant phenotype of Cas9-cmutant lines in ZJ70 background.Picture was taken in patch field 17 days after PXO341 infection. Figure 7 Figure7Working model of OsWRKY7 alternative translation during plant basal defence activation in rice.In wild-type plants, the OsWRKY7 protein translation can be initiated at either the first or the second inframe AUG.Under normal condition, the full-length OsWRKY7 proteins are degraded by the ubiquitin-proteasome system to minimize its inhibition of plant growth and development.On the other hand, the alternate stable OsWRKY7 protein is translated with less efficiency to maintain a low level of basal defence together with the undegraded fulllength protein.In the genome-edited plants where the first AUG of OsWRKY7 is disrupted, the alternate OsWRKY7 proteins are accumulated due to the enhanced translation at the second AUG; thus, the basal defence is elevated to a high level through the enhanced defence-related gene expression and ROS production, while the plant growth and development is partially inhibited. Figure S1 Figure S1 Characterization of the OsWRKY7 loss of function mutant rice plants generated by CRISPR/Cas9-mediated mutagenesis.Figure S2 Generation of the OsWRKY7 loss of function rice plants by CRISPR/Cas9-mediated mutagenesis at the sgRNAb site.Figure S3 Alternative splicing analysis of OsWRKY7 gene transcription from RNA-seq data of Nipponbare. Figure S4 LC-MS/MS analysis of the proteins translated from the OsWRKY7-SR gene under control of the 35S promoter.Figure S5 OsWRKY7 protein was not degraded through the lysosomal pathway.Figure S6 In vivo ubiquitination assay of OsWRKY7 protein.Figure S7 MG132 and CHX time course treatment of the fulllength and short OsWRKY7 proteins.Figure S8 H 2 O 2 levels in leaves of WT and OsWRKY7-diORF-OE transgenic plants without Xoo infection.Figure S19 The agronomic phenotypes of the OsWRKY7-diORF-OE transgenic plants.Figure S10 Characterization of plants transformed with the fulllength and A deletion OsWRKY7 constructs controlled by the native promoter.Figure S11 CRISPR/Cas9 plants with the first ATG of OsWRKY7 mutated had enhanced resistance to Xoo and hypersensitive response (HR)-related cell death.Figure S12 OsWRKY7 regulated defence response against the highly virulent Xoo strain PXO99.Figure S13 H 2 O 2 levels in leaves of WT and oswrky7-Cas9-c transgenic plants without Xoo infection.Figure S14 The agronomic phenotypes of the oswrky7-Cas9-c transgenic plants.Figure S15 Alternative translation initiation of OsWRKY group II members clustering in the clade with OsWRKY7. Figure S16 Conservation of OsWRKY7 protein and its closely related homologues.TableS1Sequences of primers used for vector construction.TableS2.Sequences of primers used for RT-PCR.TableS3LC-MS/MS identified peptides in the proteins expressed from the mutated OsWRKY7-SR gene controlled by 35S promoter.TableS4.Comparison of the AUG initiation codon context (À6 to +4) in the OsWRKY genes tested in this study.Data S1 Supporting Information. Figure S3 Figure S1 Characterization of the OsWRKY7 loss of function mutant rice plants generated by CRISPR/Cas9-mediated mutagenesis.Figure S2 Generation of the OsWRKY7 loss of function rice plants by CRISPR/Cas9-mediated mutagenesis at the sgRNAb site.Figure S3 Alternative splicing analysis of OsWRKY7 gene transcription from RNA-seq data of Nipponbare. Figure S4 LC-MS/MS analysis of the proteins translated from the OsWRKY7-SR gene under control of the 35S promoter.Figure S5 OsWRKY7 protein was not degraded through the lysosomal pathway.Figure S6 In vivo ubiquitination assay of OsWRKY7 protein.Figure S7 MG132 and CHX time course treatment of the fulllength and short OsWRKY7 proteins.Figure S8 H 2 O 2 levels in leaves of WT and OsWRKY7-diORF-OE transgenic plants without Xoo infection.Figure S19 The agronomic phenotypes of the OsWRKY7-diORF-OE transgenic plants.Figure S10 Characterization of plants transformed with the fulllength and A deletion OsWRKY7 constructs controlled by the native promoter.Figure S11 CRISPR/Cas9 plants with the first ATG of OsWRKY7 mutated had enhanced resistance to Xoo and hypersensitive response (HR)-related cell death.Figure S12 OsWRKY7 regulated defence response against the highly virulent Xoo strain PXO99.Figure S13 H 2 O 2 levels in leaves of WT and oswrky7-Cas9-c transgenic plants without Xoo infection.Figure S14 The agronomic phenotypes of the oswrky7-Cas9-c transgenic plants.Figure S15 Alternative translation initiation of OsWRKY group II members clustering in the clade with OsWRKY7. Figure S16 Conservation of OsWRKY7 protein and its closely related homologues.TableS1Sequences of primers used for vector construction.TableS2.Sequences of primers used for RT-PCR.TableS3LC-MS/MS identified peptides in the proteins expressed from the mutated OsWRKY7-SR gene controlled by 35S promoter.TableS4.Comparison of the AUG initiation codon context (À6 to +4) in the OsWRKY genes tested in this study.Data S1 Supporting Information. Figure S1 Characterization of the OsWRKY7 loss of function mutant rice plants generated by CRISPR/Cas9-mediated mutagenesis.Figure S2 Generation of the OsWRKY7 loss of function rice plants by CRISPR/Cas9-mediated mutagenesis at the sgRNAb site.Figure S3 Alternative splicing analysis of OsWRKY7 gene transcription from RNA-seq data of Nipponbare. Figure S4 LC-MS/MS analysis of the proteins translated from the OsWRKY7-SR gene under control of the 35S promoter.Figure S5 OsWRKY7 protein was not degraded through the lysosomal pathway.Figure S6 In vivo ubiquitination assay of OsWRKY7 protein.Figure S7 MG132 and CHX time course treatment of the fulllength and short OsWRKY7 proteins.Figure S8 H 2 O 2 levels in leaves of WT and OsWRKY7-diORF-OE transgenic plants without Xoo infection.Figure S19 The agronomic phenotypes of the OsWRKY7-diORF-OE transgenic plants.Figure S10 Characterization of plants transformed with the fulllength and A deletion OsWRKY7 constructs controlled by the native promoter.Figure S11 CRISPR/Cas9 plants with the first ATG of OsWRKY7 mutated had enhanced resistance to Xoo and hypersensitive response (HR)-related cell death.Figure S12 OsWRKY7 regulated defence response against the highly virulent Xoo strain PXO99.Figure S13 H 2 O 2 levels in leaves of WT and oswrky7-Cas9-c transgenic plants without Xoo infection.Figure S14 The agronomic phenotypes of the oswrky7-Cas9-c transgenic plants.Figure S15 Alternative translation initiation of OsWRKY group II members clustering in the clade with OsWRKY7. Figure S16 Conservation of OsWRKY7 protein and its closely related homologues.TableS1Sequences of primers used for vector construction.TableS2.Sequences of primers used for RT-PCR.TableS3LC-MS/MS identified peptides in the proteins expressed from the mutated OsWRKY7-SR gene controlled by 35S promoter.TableS4.Comparison of the AUG initiation codon context (À6 to +4) in the OsWRKY genes tested in this study.Data S1 Supporting Information. Figure S1 Characterization of the OsWRKY7 loss of function mutant rice plants generated by CRISPR/Cas9-mediated mutagenesis.Figure S2 Generation of the OsWRKY7 loss of function rice plants by CRISPR/Cas9-mediated mutagenesis at the sgRNAb site.Figure S3 Alternative splicing analysis of OsWRKY7 gene transcription from RNA-seq data of Nipponbare. Figure S4 LC-MS/MS analysis of the proteins translated from the OsWRKY7-SR gene under control of the 35S promoter.Figure S5 OsWRKY7 protein was not degraded through the lysosomal pathway.Figure S6 In vivo ubiquitination assay of OsWRKY7 protein.Figure S7 MG132 and CHX time course treatment of the fulllength and short OsWRKY7 proteins.Figure S8 H 2 O 2 levels in leaves of WT and OsWRKY7-diORF-OE transgenic plants without Xoo infection.Figure S19 The agronomic phenotypes of the OsWRKY7-diORF-OE transgenic plants.Figure S10 Characterization of plants transformed with the fulllength and A deletion OsWRKY7 constructs controlled by the native promoter.Figure S11 CRISPR/Cas9 plants with the first ATG of OsWRKY7 mutated had enhanced resistance to Xoo and hypersensitive response (HR)-related cell death.Figure S12 OsWRKY7 regulated defence response against the highly virulent Xoo strain PXO99.Figure S13 H 2 O 2 levels in leaves of WT and oswrky7-Cas9-c transgenic plants without Xoo infection.Figure S14 The agronomic phenotypes of the oswrky7-Cas9-c transgenic plants.Figure S15 Alternative translation initiation of OsWRKY group II members clustering in the clade with OsWRKY7. Figure S16 Conservation of OsWRKY7 protein and its closely related homologues.TableS1Sequences of primers used for vector construction.TableS2.Sequences of primers used for RT-PCR.TableS3LC-MS/MS identified peptides in the proteins expressed from the mutated OsWRKY7-SR gene controlled by 35S promoter.TableS4.Comparison of the AUG initiation codon context (À6 to +4) in the OsWRKY genes tested in this study.Data S1 Supporting Information. Figure S1 Characterization of the OsWRKY7 loss of function mutant rice plants generated by CRISPR/Cas9-mediated mutagenesis.Figure S2 Generation of the OsWRKY7 loss of function rice plants by CRISPR/Cas9-mediated mutagenesis at the sgRNAb site.Figure S3 Alternative splicing analysis of OsWRKY7 gene transcription from RNA-seq data of Nipponbare. Figure S4 LC-MS/MS analysis of the proteins translated from the OsWRKY7-SR gene under control of the 35S promoter.Figure S5 OsWRKY7 protein was not degraded through the lysosomal pathway.Figure S6 In vivo ubiquitination assay of OsWRKY7 protein.Figure S7 MG132 and CHX time course treatment of the fulllength and short OsWRKY7 proteins.Figure S8 H 2 O 2 levels in leaves of WT and OsWRKY7-diORF-OE transgenic plants without Xoo infection.Figure S19 The agronomic phenotypes of the OsWRKY7-diORF-OE transgenic plants.Figure S10 Characterization of plants transformed with the fulllength and A deletion OsWRKY7 constructs controlled by the native promoter.Figure S11 CRISPR/Cas9 plants with the first ATG of OsWRKY7 mutated had enhanced resistance to Xoo and hypersensitive response (HR)-related cell death.Figure S12 OsWRKY7 regulated defence response against the highly virulent Xoo strain PXO99.Figure S13 H 2 O 2 levels in leaves of WT and oswrky7-Cas9-c transgenic plants without Xoo infection.Figure S14 The agronomic phenotypes of the oswrky7-Cas9-c transgenic plants.Figure S15 Alternative translation initiation of OsWRKY group II members clustering in the clade with OsWRKY7. Figure S16 Conservation of OsWRKY7 protein and its closely related homologues.TableS1Sequences of primers used for vector construction.TableS2.Sequences of primers used for RT-PCR.TableS3LC-MS/MS identified peptides in the proteins expressed from the mutated OsWRKY7-SR gene controlled by 35S promoter.TableS4.Comparison of the AUG initiation codon context (À6 to +4) in the OsWRKY genes tested in this study.Data S1 Supporting Information. Figure S1 Characterization of the OsWRKY7 loss of function mutant rice plants generated by CRISPR/Cas9-mediated mutagenesis.Figure S2 Generation of the OsWRKY7 loss of function rice plants by CRISPR/Cas9-mediated mutagenesis at the sgRNAb site.Figure S3 Alternative splicing analysis of OsWRKY7 gene transcription from RNA-seq data of Nipponbare. Figure S4 LC-MS/MS analysis of the proteins translated from the OsWRKY7-SR gene under control of the 35S promoter.Figure S5 OsWRKY7 protein was not degraded through the lysosomal pathway.Figure S6 In vivo ubiquitination assay of OsWRKY7 protein.Figure S7 MG132 and CHX time course treatment of the fulllength and short OsWRKY7 proteins.Figure S8 H 2 O 2 levels in leaves of WT and OsWRKY7-diORF-OE transgenic plants without Xoo infection.Figure S19 The agronomic phenotypes of the OsWRKY7-diORF-OE transgenic plants.Figure S10 Characterization of plants transformed with the fulllength and A deletion OsWRKY7 constructs controlled by the native promoter.Figure S11 CRISPR/Cas9 plants with the first ATG of OsWRKY7 mutated had enhanced resistance to Xoo and hypersensitive response (HR)-related cell death.Figure S12 OsWRKY7 regulated defence response against the highly virulent Xoo strain PXO99.Figure S13 H 2 O 2 levels in leaves of WT and oswrky7-Cas9-c transgenic plants without Xoo infection.Figure S14 The agronomic phenotypes of the oswrky7-Cas9-c transgenic plants.Figure S15 Alternative translation initiation of OsWRKY group II members clustering in the clade with OsWRKY7. Figure S16 Conservation of OsWRKY7 protein and its closely related homologues.TableS1Sequences of primers used for vector construction.TableS2.Sequences of primers used for RT-PCR.TableS3LC-MS/MS identified peptides in the proteins expressed from the mutated OsWRKY7-SR gene controlled by 35S promoter.TableS4.Comparison of the AUG initiation codon context (À6 to +4) in the OsWRKY genes tested in this study.Data S1 Supporting Information. Figure S10 Figure S1 Characterization of the OsWRKY7 loss of function mutant rice plants generated by CRISPR/Cas9-mediated mutagenesis.Figure S2 Generation of the OsWRKY7 loss of function rice plants by CRISPR/Cas9-mediated mutagenesis at the sgRNAb site.Figure S3 Alternative splicing analysis of OsWRKY7 gene transcription from RNA-seq data of Nipponbare. Figure S4 LC-MS/MS analysis of the proteins translated from the OsWRKY7-SR gene under control of the 35S promoter.Figure S5 OsWRKY7 protein was not degraded through the lysosomal pathway.Figure S6 In vivo ubiquitination assay of OsWRKY7 protein.Figure S7 MG132 and CHX time course treatment of the fulllength and short OsWRKY7 proteins.Figure S8 H 2 O 2 levels in leaves of WT and OsWRKY7-diORF-OE transgenic plants without Xoo infection.Figure S19 The agronomic phenotypes of the OsWRKY7-diORF-OE transgenic plants.Figure S10 Characterization of plants transformed with the fulllength and A deletion OsWRKY7 constructs controlled by the native promoter.Figure S11 CRISPR/Cas9 plants with the first ATG of OsWRKY7 mutated had enhanced resistance to Xoo and hypersensitive response (HR)-related cell death.Figure S12 OsWRKY7 regulated defence response against the highly virulent Xoo strain PXO99.Figure S13 H 2 O 2 levels in leaves of WT and oswrky7-Cas9-c transgenic plants without Xoo infection.Figure S14 The agronomic phenotypes of the oswrky7-Cas9-c transgenic plants.Figure S15 Alternative translation initiation of OsWRKY group II members clustering in the clade with OsWRKY7. Figure S16 Conservation of OsWRKY7 protein and its closely related homologues.TableS1Sequences of primers used for vector construction.TableS2.Sequences of primers used for RT-PCR.TableS3LC-MS/MS identified peptides in the proteins expressed from the mutated OsWRKY7-SR gene controlled by 35S promoter.TableS4.Comparison of the AUG initiation codon context (À6 to +4) in the OsWRKY genes tested in this study.Data S1 Supporting Information. Figure S1 Characterization of the OsWRKY7 loss of function mutant rice plants generated by CRISPR/Cas9-mediated mutagenesis.Figure S2 Generation of the OsWRKY7 loss of function rice plants by CRISPR/Cas9-mediated mutagenesis at the sgRNAb site.Figure S3 Alternative splicing analysis of OsWRKY7 gene transcription from RNA-seq data of Nipponbare. Figure S4 LC-MS/MS analysis of the proteins translated from the OsWRKY7-SR gene under control of the 35S promoter.Figure S5 OsWRKY7 protein was not degraded through the lysosomal pathway.Figure S6 In vivo ubiquitination assay of OsWRKY7 protein.Figure S7 MG132 and CHX time course treatment of the fulllength and short OsWRKY7 proteins.Figure S8 H 2 O 2 levels in leaves of WT and OsWRKY7-diORF-OE transgenic plants without Xoo infection.Figure S19 The agronomic phenotypes of the OsWRKY7-diORF-OE transgenic plants.Figure S10 Characterization of plants transformed with the fulllength and A deletion OsWRKY7 constructs controlled by the native promoter.Figure S11 CRISPR/Cas9 plants with the first ATG of OsWRKY7 mutated had enhanced resistance to Xoo and hypersensitive response (HR)-related cell death.Figure S12 OsWRKY7 regulated defence response against the highly virulent Xoo strain PXO99.Figure S13 H 2 O 2 levels in leaves of WT and oswrky7-Cas9-c transgenic plants without Xoo infection.Figure S14 The agronomic phenotypes of the oswrky7-Cas9-c transgenic plants.Figure S15 Alternative translation initiation of OsWRKY group II members clustering in the clade with OsWRKY7. Figure S16 Conservation of OsWRKY7 protein and its closely related homologues.TableS1Sequences of primers used for vector construction.TableS2.Sequences of primers used for RT-PCR.TableS3LC-MS/MS identified peptides in the proteins expressed from the mutated OsWRKY7-SR gene controlled by 35S promoter.TableS4.Comparison of the AUG initiation codon context (À6 to +4) in the OsWRKY genes tested in this study.Data S1 Supporting Information. Figure S12 Figure S1 Characterization of the OsWRKY7 loss of function mutant rice plants generated by CRISPR/Cas9-mediated mutagenesis.Figure S2 Generation of the OsWRKY7 loss of function rice plants by CRISPR/Cas9-mediated mutagenesis at the sgRNAb site.Figure S3 Alternative splicing analysis of OsWRKY7 gene transcription from RNA-seq data of Nipponbare. Figure S4 LC-MS/MS analysis of the proteins translated from the OsWRKY7-SR gene under control of the 35S promoter.Figure S5 OsWRKY7 protein was not degraded through the lysosomal pathway.Figure S6 In vivo ubiquitination assay of OsWRKY7 protein.Figure S7 MG132 and CHX time course treatment of the fulllength and short OsWRKY7 proteins.Figure S8 H 2 O 2 levels in leaves of WT and OsWRKY7-diORF-OE transgenic plants without Xoo infection.Figure S19 The agronomic phenotypes of the OsWRKY7-diORF-OE transgenic plants.Figure S10 Characterization of plants transformed with the fulllength and A deletion OsWRKY7 constructs controlled by the native promoter.Figure S11 CRISPR/Cas9 plants with the first ATG of OsWRKY7 mutated had enhanced resistance to Xoo and hypersensitive response (HR)-related cell death.Figure S12 OsWRKY7 regulated defence response against the highly virulent Xoo strain PXO99.Figure S13 H 2 O 2 levels in leaves of WT and oswrky7-Cas9-c transgenic plants without Xoo infection.Figure S14 The agronomic phenotypes of the oswrky7-Cas9-c transgenic plants.Figure S15 Alternative translation initiation of OsWRKY group II members clustering in the clade with OsWRKY7. Figure S16 Conservation of OsWRKY7 protein and its closely related homologues.TableS1Sequences of primers used for vector construction.TableS2.Sequences of primers used for RT-PCR.TableS3LC-MS/MS identified peptides in the proteins expressed from the mutated OsWRKY7-SR gene controlled by 35S promoter.TableS4.Comparison of the AUG initiation codon context (À6 to +4) in the OsWRKY genes tested in this study.Data S1 Supporting Information. Figure S1 Characterization of the OsWRKY7 loss of function mutant rice plants generated by CRISPR/Cas9-mediated mutagenesis.Figure S2 Generation of the OsWRKY7 loss of function rice plants by CRISPR/Cas9-mediated mutagenesis at the sgRNAb site.Figure S3 Alternative splicing analysis of OsWRKY7 gene transcription from RNA-seq data of Nipponbare. Figure S4 LC-MS/MS analysis of the proteins translated from the OsWRKY7-SR gene under control of the 35S promoter.Figure S5 OsWRKY7 protein was not degraded through the lysosomal pathway.Figure S6 In vivo ubiquitination assay of OsWRKY7 protein.Figure S7 MG132 and CHX time course treatment of the fulllength and short OsWRKY7 proteins.Figure S8 H 2 O 2 levels in leaves of WT and OsWRKY7-diORF-OE transgenic plants without Xoo infection.Figure S19 The agronomic phenotypes of the OsWRKY7-diORF-OE transgenic plants.Figure S10 Characterization of plants transformed with the fulllength and A deletion OsWRKY7 constructs controlled by the native promoter.Figure S11 CRISPR/Cas9 plants with the first ATG of OsWRKY7 mutated had enhanced resistance to Xoo and hypersensitive response (HR)-related cell death.Figure S12 OsWRKY7 regulated defence response against the highly virulent Xoo strain PXO99.Figure S13 H 2 O 2 levels in leaves of WT and oswrky7-Cas9-c transgenic plants without Xoo infection.Figure S14 The agronomic phenotypes of the oswrky7-Cas9-c transgenic plants.Figure S15 Alternative translation initiation of OsWRKY group II members clustering in the clade with OsWRKY7. Figure S16 Conservation of OsWRKY7 protein and its closely related homologues.TableS1Sequences of primers used for vector construction.TableS2.Sequences of primers used for RT-PCR.TableS3LC-MS/MS identified peptides in the proteins expressed from the mutated OsWRKY7-SR gene controlled by 35S promoter.TableS4.Comparison of the AUG initiation codon context (À6 to +4) in the OsWRKY genes tested in this study.Data S1 Supporting Information. Figure S1 Characterization of the OsWRKY7 loss of function mutant rice plants generated by CRISPR/Cas9-mediated mutagenesis.Figure S2 Generation of the OsWRKY7 loss of function rice plants by CRISPR/Cas9-mediated mutagenesis at the sgRNAb site.Figure S3 Alternative splicing analysis of OsWRKY7 gene transcription from RNA-seq data of Nipponbare. Figure S4 LC-MS/MS analysis of the proteins translated from the OsWRKY7-SR gene under control of the 35S promoter.Figure S5 OsWRKY7 protein was not degraded through the lysosomal pathway.Figure S6 In vivo ubiquitination assay of OsWRKY7 protein.Figure S7 MG132 and CHX time course treatment of the fulllength and short OsWRKY7 proteins.Figure S8 H 2 O 2 levels in leaves of WT and OsWRKY7-diORF-OE transgenic plants without Xoo infection.Figure S19 The agronomic phenotypes of the OsWRKY7-diORF-OE transgenic plants.Figure S10 Characterization of plants transformed with the fulllength and A deletion OsWRKY7 constructs controlled by the native promoter.Figure S11 CRISPR/Cas9 plants with the first ATG of OsWRKY7 mutated had enhanced resistance to Xoo and hypersensitive response (HR)-related cell death.Figure S12 OsWRKY7 regulated defence response against the highly virulent Xoo strain PXO99.Figure S13 H 2 O 2 levels in leaves of WT and oswrky7-Cas9-c transgenic plants without Xoo infection.Figure S14 The agronomic phenotypes of the oswrky7-Cas9-c transgenic plants.Figure S15 Alternative translation initiation of OsWRKY group II members clustering in the clade with OsWRKY7. Figure S16 Conservation of OsWRKY7 protein and its closely related homologues.TableS1Sequences of primers used for vector construction.TableS2.Sequences of primers used for RT-PCR.TableS3LC-MS/MS identified peptides in the proteins expressed from the mutated OsWRKY7-SR gene controlled by 35S promoter.TableS4.Comparison of the AUG initiation codon context (À6 to +4) in the OsWRKY genes tested in this study.Data S1 Supporting Information. ª 2023 The Authors.Plant Biotechnology Journal published by Society for Experimental Biology and The Association of Applied Biologists and John Wiley & Sons Ltd., 22, 1033-1048
v3-fos-license
2021-11-07T16:09:27.002Z
2021-11-05T00:00:00.000
243824168
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2076-3417/11/21/10422/pdf", "pdf_hash": "0b7ed6faf1b338c1e8a851cb063585f5e52bbf65", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46783", "s2fieldsofstudy": [ "Engineering" ], "sha1": "6aee95501c9922e9ecb0f3c116c2e5ccb4b9733b", "year": 2021 }
pes2o/s2orc
Study on Ducted Vertical Take-Off and Landing Fixed-Wing UAV Dynamics Modeling and Transition Corridor Featured Application: Authors are encouraged to provide a concise description of the specific application or a potential application of the work. This section is not mandatory. Abstract: An accurate description of the transition corridor is of great significance for the flight process of the vertical take-off and landing (VTOL) fixed-wing unmanned aerial vehicle (UAV). To study the transition flight process of vertical take-off and landing fixed-wing UAVs, the dynamic model and transition corridor model of this type of UAV are established in the current article. The method for establishing the model is based on a reasonable match of the power and aerodynamic force of this type of UAV. From the perspective of flight dynamics, the ducted lift-increasing system’s deflection angle–speed envelope is studied with the maximum lift coefficient of the wing and the system’s available power. The influence of the overall parameters and energy parameters of the UAV on the deflection angle–speed envelope of the ducted lift-increasing system is analyzed, and a method is proposed to expand the vertical take-off and landing fixed-wing UAV’s transition corridor. Taking the UAV as the object, using the established model, the transition flight corridor of the UAV is obtained, the influence of the control parameters on the transition flight is studied, and the appropriate transition flight control strategy is determined. At the same time, the influence of the overall parameters and energy parameters on the transition corridor is calculated. According to the calculation results, the effect of expanding the flight corridor of the UAV is more obvious when increasing the available power than when increasing the aerodynamic parameters by the same proportion. Introduction The vertical take-off and landing fixed-wing UAV can achieve zero-speed take-off and landing without the need for a runway as in the case of the conventional aircraft. In the vertical take-off and landing mode, it is capable of hovering. In fixed-wing mode, high-speed and long-term flight can be achieved. It is one of the best choices to perform reconnaissance and monitoring tasks in special environments [1][2][3][4][5]. In this work, a ducted vertical take-off and landing fixed-wing UAV scheme is proposed. For this type of vertical take-off and landing fixed-wing UAV, the transition flight corridor (envelope) is studied. There are many studies on the tilting corridor of the tilt-rotor vertical take-off and landing aircraft. Rajan Gill [6] studied a vertical take-off and landing unmanned aerial vehicle, presented with a quadrotor design for propulsion and attitude stabilization and an annular wing that provides lift in forward flight. Luz M. Sanchez-Rivera [7] focused on the Dual Tilt-Wing UAV, a vehicle capable of performing both flight modes (VTOL and CTOL). The complete dynamic model of a UAV was obtained using the Newton-Euler formulation, which includes aerodynamic effects as the drag and lift forces of the boundary of the corridor is determined by the UAV's available power. Finally, the UAV is calculated for the transition corridor. Then, the flight envelope of the vertical take-off and landing fixed-wing UAV is analyzed based on overall parameters, and a method is provided for the design of the transitional corridor of the vertical take-off and landing fixed-wing UAV. UAV Dynamic Model Aerodynamic modeling of the UAV adopts the airframe axis coordinate system and the earth axis system (see in Figure 2). The origin of the axis system is at the center of gravity of the UAV. The x-axis points forward along the plane of symmetry of the fuselage. The y-axis is perpendicular to the x-axis. The z-axis is determined by the right-hand rule. The x-axis and z-axis constitute the symmetry of the fuselage. Modeling of the lift fan and duct is based on the self-body axis system, which is an independent system of the lift fan and duct. The process of UAV take-off and transition to level flight is shown in Figure 3. During vertical take-off and landing, the duct deflects to provide upward thrust, and the lift fan in front also generates upward thrust. In the transition process, the forward-flying thrust is obtained by deflecting the duct inclination, and the transition from vertical take-off to horizontal flight is completed under certain control strategies. In level flight, the duct is deflected to the horizontal position, and the flight is controlled by the deflection of the control surface. UAV Dynamic Model Aerodynamic modeling of the UAV adopts the airframe axis coordinate system and the earth axis system (see in Figure 2). The origin of the axis system is at the center of gravity of the UAV. The x-axis points forward along the plane of symmetry of the fuselage. The y-axis is perpendicular to the x-axis. The z-axis is determined by the right-hand rule. The x-axis and z-axis constitute the symmetry of the fuselage. Modeling of the lift fan and duct is based on the self-body axis system, which is an independent system of the lift fan and duct. Appl. Sci. 2021, 11, x FOR PEER REVIEW 3 of 15 boundary of the corridor is determined by the UAV's available power. Finally, the UAV is calculated for the transition corridor. Then, the flight envelope of the vertical take-off and landing fixed-wing UAV is analyzed based on overall parameters, and a method is provided for the design of the transitional corridor of the vertical take-off and landing fixed-wing UAV. Figure 1. Conceptual graphs of vertical take-off and landing fixed-wing UAV, which includes a lift fan and ducted lift-increasing system. UAV Dynamic Model Aerodynamic modeling of the UAV adopts the airframe axis coordinate system and the earth axis system (see in Figure 2). The origin of the axis system is at the center of gravity of the UAV. The x-axis points forward along the plane of symmetry of the fuselage. The y-axis is perpendicular to the x-axis. The z-axis is determined by the right-hand rule. The x-axis and z-axis constitute the symmetry of the fuselage. Modeling of the lift fan and duct is based on the self-body axis system, which is an independent system of the lift fan and duct. The process of UAV take-off and transition to level flight is shown in Figure 3. During vertical take-off and landing, the duct deflects to provide upward thrust, and the lift fan in front also generates upward thrust. In the transition process, the forward-flying thrust is obtained by deflecting the duct inclination, and the transition from vertical take-off to horizontal flight is completed under certain control strategies. In level flight, the duct is deflected to the horizontal position, and the flight is controlled by the deflection of the control surface. The process of UAV take-off and transition to level flight is shown in Figure 3. During vertical take-off and landing, the duct deflects to provide upward thrust, and the lift fan in front also generates upward thrust. In the transition process, the forward-flying thrust is obtained by deflecting the duct inclination, and the transition from vertical take-off to horizontal flight is completed under certain control strategies. In level flight, the duct is deflected to the horizontal position, and the flight is controlled by the deflection of the control surface. In the process of modeling the vertical take-off and landing UAV, the coordinate system and angle need to be defined. Fang Zhenping [17] studied the coordinate system and the conversion relationship, and the Euler transformation relationship between the ground coordinate system and the airframe coordinate system (see in Figure 4) is used in In the process of modeling the vertical take-off and landing UAV, the coordinate system and angle need to be defined. Fang Zhenping [17] studied the coordinate system and the conversion relationship, and the Euler transformation relationship between the ground coordinate system and the airframe coordinate system (see in Figure 4) is used in this article. Yaw angle ψ: the angle between the projection of the body axis Ox b on the horizontal plane Ox g y g and the Ox g axis. The angle is positive when the aircraft yaws right. Pitch angle θ: the angle between the body axis Ox b and the horizontal plane Ox g y g . When the head of the aircraft is raised, the angle is positive. Roll angle φ: the angle between the plane of symmetry of the aircraft and the vertical plane containing the Ox b axis. The angle is positive when the aircraft rolls right. In the process of modeling the vertical take-off and landing UAV, the coordinate system and angle need to be defined. Fang Zhenping [17] studied the coordinate system and the conversion relationship, and the Euler transformation relationship between the ground coordinate system and the airframe coordinate system (see in Figure 4) is used in this article. Yaw angle : the angle between the projection of the body axis on the horizontal plane and the axis. The angle is positive when the aircraft yaws right. Pitch angle : the angle between the body axis and the horizontal plane . When the head of the aircraft is raised, the angle is positive. Roll angle : the angle between the plane of symmetry of the aircraft and the vertical plane containing the axis. The angle is positive when the aircraft rolls right. According to the general law of coordinate transformation, the transformation matrix from the earth coordinate system to the body coordinate system can be obtained as We project the aerodynamic forces and aerodynamic moments of the lift fan and ducted lift-increasing system, and combine the control equations to obtain the nonlinear flight dynamics mathematical equations of the vertical take-off and landing fixed-wing UAV. The mathematical model of nonlinear flight dynamics can generally be written in the form of a first-order differential equation: where y represents all of the state quantities of the vertical take-off and landing fixed-wing UAV flight dynamics equation, and u represents the control quantities of the vertical takeoff and landing fixed-wing UAV, and they are defined as According to the general law of coordinate transformation, the transformation matrix R bg from the earth coordinate system to the body coordinate system can be obtained as We project the aerodynamic forces and aerodynamic moments of the lift fan and ducted lift-increasing system, and combine the control equations to obtain the nonlinear flight dynamics mathematical equations of the vertical take-off and landing fixed-wing UAV. The mathematical model of nonlinear flight dynamics can generally be written in the form of a first-order differential equation: . where y represents all of the state quantities of the vertical take-off and landing fixed-wing UAV flight dynamics equation, and u represents the control quantities of the vertical take-off and landing fixed-wing UAV, and they are defined as The force and moment of the body expressed by the state quantity of the dynamic equation, that is, the reference motion parameter y used in the equation, are as follows: It can be seen in the body axis formation process that ψ is formed by the angular velocity dψ/dt along the z g axis; θ is formed by the angular velocity dθ/dt along the y axis, which is projected along the body axis as 0, ; φ is formed by the angular velocity dφ/dt along the x b axis. It can be concluded that the projection of the rotational angular velocity on the shaft system of the body is The solution result is a dynamic equation rotating around the center of mass, expressed as The above Formulas (4), (5) and (7) are the dynamic model of the vertical take-off and landing fixed-wing UAV. UAV Transition Corridor In the vertical take-off and landing fixed-wing UAV, its power configuration is quite different from that of a helicopter. It is mainly composed of a front lift fan and a rear ducted lift-increasing system. In fixed-wing flight mode, the front lift fan turns off, and the ducted lift-increasing system deflects to level. The duct is used as the propulsion power, and the wing is the lift surface. During the transitional flight, the ducted lift-increasing system keeps deflecting, and the power of the lift fan keeps changing. When the UAV is flying at low speed, a faster deflection of the ducted lift-increasing system may cause the wing to stall and fail to complete the transition. When the flying speed of the UAV is too high, the deflection of the ducted lift-increasing system may cause problems such as insufficient available power. The ducted lift-increasing system's deflection angle-speed envelopes studied in this paper are carried out at low and high speeds. The low-speed section is the left boundary of the flight transition corridor, while the high-speed section is the right boundary. Transition Window The flight transition window refers to the collection of parameters of the external conditions and the aircraft's own conditions required to complete a certain flight task [18]. The transition flight window in this article can be divided into the transition beginning window and the transition ending window. The resultant force parameter matching between the beginning window and the ending window can form the transition flight corridor. The transition beginning window is the flight state hovering at a safe altitude after vertical take-off; the transition ending window is the flight state meeting the safe level flight speed. The transition beginning window usually meets a certain safety height. In this paper, based on the experience of helicopters, this height is set as 20 m. The transition ending window usually has three important flight parameters, namely, flight speed, power thrust, and attitude of the UAV, which can be calculated via Formula (8)- (10): where G is the weight of the UAV; L is the aerodynamic lift; ρ is the air density; V s is the stall speed of the UAV; S is the wing area; C Lmax is the maximum lift coefficient of the UAV; V sa f e is the safe flight speed of the terminal window; and M is the torque. For the equilibrium equation, see Formula (11): where T f is the thrust of lift fan; T di is the thrust of the ducted lift-increasing system; θ is the pitch angle of the aircraft; α di is the deflection angle of the ducted force; l f is the lift fan lever; l di is the increased lift ducted lever. Ducted Lift-Increasing System's Deflection Angle-Speed Envelope Based on Stall Angle of Attack The transition corridor in the low-speed section starts from the hovering state until the ducted lift-increasing system deflects to the fixed-wing flight mode. At this time, the angle of attack is the stall angle of attack. When hovering, the aerodynamic force of the ducted lift-increasing system and the aerodynamic force of the front lift fan balance the gravity of the UAV. Under normal circumstances, the attitude angle of the UAV is zero. At this time, the lift fan pull and ducted lift-increasing system's aerodynamic force are vertically upward. Due to the existence of the small wing, the deflection angle of the rising duct is generally less than 90 degrees. The aerodynamic deflection angle of the ducted lift-increasing system is related to the deflection angle of the duct and the deflection angle of the small wing, which can be expressed as where i di is the resultant deflection angle of the ducted lift-increasing system, i d is the deflection angle of the duct relative to the airframe, and i iw is the deflection angle of the ducted lift-increasing system relative to the small wing. Figure 5 shows the force diagram of the UAV during the transition flight. In this diagram, α is the fuselage angle of attack, and L and D are the wing aerodynamic lift and drag respectively, which include the aerodynamic force of the main wing's free flow and the main wing-induced aerodynamic force. According to the force of the UAV, the balance relationship can be analyzed as follows: where lift L A and drag D A can be expressed as where subscript "A" represents all aerodynamic forces, including free-flow aerodynamic forces and induced aerodynamic forces on the main wing caused by duct inlet airflow. L mw and D mw in the formula are the induced lift and induced drag of the duct on the main wing, and the subscript "mw" represents the main wing-induced force. During the transition of a vertical take-off and landing fixed-wing UAV, the gravity of the UAV is mainly balanced by the lift fan and the ducted lift-increasing system to transition to aerodynamic balance. Early transition flight speed is low, and the wing provides a lift limitation to the critical stall angle of attack. When calculating the envelope of the transition low-speed section, the maximum angle of attack is taken as the critical angle of attack, and the maximum angle of attack in the low-speed section of the transition flight envelope satisfies the following relationship: where α lj is the critical angle of attack of the wing and i w is the wing installation angle. duct is generally less than 90 degrees. The aerodynamic deflection angle of the ducted lift increasing system is related to the deflection angle of the duct and the deflection angle o the small wing, which can be expressed as where is the resultant deflection angle of the ducted lift-increasing system, is the deflection angle of the duct relative to the airframe, and is the deflection angle of the ducted lift-increasing system relative to the small wing. Figure 5 shows the force diagram of the UAV during the transition flight. In this di agram, is the fuselage angle of attack, and L and D are the wing aerodynamic lift and drag respectively, which include the aerodynamic force of the main wing's free flow and the main wing-induced aerodynamic force. According to the force of the UAV, the balance relationship can be analyzed as follows: Deflection Angle-Speed Envelope Based on Power Limit in Ducted Lift-Increasing System The envelope of the equilibrium state gives the deflection angle-speed envelope of the ducted lift-increasing system on the premise that the wing does not stall. However, in actual flight, as long as the power is sufficient, the UAV can complete transition well even if the wing is stalled. In this section, the deflection angle-velocity envelope of the ducted lift-increasing system is determined according to the available power of the ducted lift-increasing system as a constraint [19]. The ducted lift-increasing system must satisfy the balance of the lift fan and ducted lift-increasing system aerodynamic force and the airframe aerodynamic force with the gravity of the UAV during the deflection process, and the available power of the lift fan and ducted lift-increasing system are required during the transitional flight. The required power P r of the lift fan and the duct is the same, which is composed of the induced power P i , type resistance power P pr , waste resistance power P p , and climbing power P c . where η p is the transmission loss coefficient from the engine to the blades. According to the conservation of energy and momentum theorem, the expressions of the waste resistance power P p , induced power P i and climb power P c can be expressed as follows: where U c is the vertical velocity of the blade plane and v i is the induced velocity of the rotor blade. Considering the existence of multi-blade blades in the lift fan and duct, the induced speed is non-uniform. In this paper, the induced speed is modified by K ind , and the above formula becomes According to the blade element theory, the rotor-type resistance power can be expressed as P pr = P pr0 (1 + 4.7µ) where P pr0 = σπR 2 ρV 3 t c d /8, σ is the solidity of the rotor, c d is the drag coefficient of the blade, V t is the tip speed of the rotor, and µ is the forward ratio. For lift fans and duct power components, their power can be obtained by the above formula [12]. Thus, in the case of the power limitation, the VTOL fixed-wing UAV's angle-speed envelope boundary must satisfy the force balance, and, at the same time, the total power of the lift fan and duct must be less than the available power of the system P n , i.e., P r < P n . Transition Corridor Calculation The transition corridor is the transition flight envelope. Equation (13) is the balance equation that is used to calculate the vertical take-off and landing transition corridor of the fixed-wing UAV. Equations (8) and (13) can be combined to solve three unknowns, namely, the thrust of the lift fan T f , the thrust of the ducted lift-increasing system T di , and the thrust deflection angle of the ducted lift-increasing system i di . When calculating the deflection angle of the ducted lift-increasing system based on the stall speed, the thrust of the lift fan, the thrust of the ducted lift-increasing system, and the thrust deflection angle of the ducted lift-increasing system are used as the solution. When calculating the deflection angle of the ducted lift-increasing system based on the power, the power of the lift fan and the duct are limited, and the attitude of the UAV is used as the solution. Firstly, the power parameters of the beginning window of the transition flight envelope are determined by the hovering state Formula (11). The UAV aerodynamic coefficients C L and C D at different flight speeds are calculated according to Formula (8). The above calculation results are substituted into Formula (14) to calculate the aerodynamic force of the UAV, and the final aerodynamic result is substituted into the balance Formula (13). Using the lift fan force T f , ducted lift-increasing system force T di , and force deflection angle i di as solution quantities for the trim calculation, the deflection angle-velocity envelope of the ducted lift-increasing system based on stall speed can be obtained. When calculating the deflection angle-speed envelope of the ducted lift-increasing system based on power, given the deflection angle value of the ducted lift-increasing system, the required power of the component is calculated according to the power formula, and, finally, the power-based transition flight envelope of the vertical take-off and landing fixed-wing UAV is obtained according to the output power limitation of the power components. Results and Discussion Taking a certain type of vertical take-off and landing fixed-wing UAV as a case prototype, the ducted lift-increasing system's deflection angle-speed envelope during the transition flight is calculated [20]. Wang C et al. [20] studied the dynamic modeling and transitional flight strategy of this type of aircraft but did not study its transitional flight corridor. The calculation of the transitional flight corridor is studied in detail below. Figure 6 shows the relationship between the lift coefficient and drag coefficient of the prototype during transitional flight with the angle of attack. In the legend, "12-A0-5" means that the ducted lift-increasing system has a single ducted power of 12N, the angle of attack is 0 • , and the free flow is 5 m/s. The aerodynamic data at specific state points in the transition process are calculated based on the aerodynamic coefficient curve. The rated power and calculation parameters of the lift fan and ducted lift-increasing system are shown in Table 1. Calculation Results In Figure 7a, declination angle-speed envelope of the UAV ducted lift-increasing system is calculated based on the case of the stall angle of attack. It can be seen that in the hover phase, the force deflection angle of the ducted lift-increasing system is 75 degrees, and the minimum flight speed when deflected to the fixed-wing flight mode is 43 m/s. Figure 7b shows the curve of lift fan thrust, the thrust of the ducted lift-increasing system, and body lift and drag in the transitional flight envelope. It can be seen that when hovering, the gravity of the UAV is balanced by the pull force of the lift fan and the ducted liftincreasing system. With the deflection of the ducted lift-increasing system, the lift of the UAV is provided by the power components and gradually transitions to the wings. As speed increases, the lift and drag of the UAV also increase. Calculation Results In Figure 7a, declination angle-speed envelope of the UAV ducted lift-increasing system is calculated based on the case of the stall angle of attack. It can be seen that in the hover phase, the force deflection angle of the ducted lift-increasing system is 75 degrees, and the minimum flight speed when deflected to the fixed-wing flight mode is 43 m/s. Figure 7b shows the curve of lift fan thrust, the thrust of the ducted lift-increasing system, and body lift and drag in the transitional flight envelope. It can be seen that when hovering, the gravity of the UAV is balanced by the pull force of the lift fan and the ducted liftincreasing system. With the deflection of the ducted lift-increasing system, the lift of the UAV is provided by the power components and gradually transitions to the wings. As speed increases, the lift and drag of the UAV also increase. Calculation Results In Figure 7a, declination angle-speed envelope of the UAV ducted lift-increasing system is calculated based on the case of the stall angle of attack. It can be seen that in the hover phase, the force deflection angle of the ducted lift-increasing system is 75 degrees, and the minimum flight speed when deflected to the fixed-wing flight mode is 43 m/s. Figure 7b shows the curve of lift fan thrust, the thrust of the ducted lift-increasing system, and body lift and drag in the transitional flight envelope. It can be seen that when hovering, the gravity of the UAV is balanced by the pull force of the lift fan and the ducted liftincreasing system. With the deflection of the ducted lift-increasing system, the lift of the UAV is provided by the power components and gradually transitions to the wings. As speed increases, the lift and drag of the UAV also increase. Figure 7a in that in the high-speed range, the perpendicular angle of the ducted force may be maintained until the trigger power limit. Figure 8b shows the curve of the power of the UAV's power components with the flight speed during the transition process under different attitude angles. Under the condition of power limitation, if the required power and the maximum available power are equal, a power-based deflection angle-speed envelope of the vertical take-off and landing fixed-wing UAV's ducted lift-increasing system can be obtained. It can be seen that with the increase in speed at the same attitude angle, the required power first decreases and then increases, which is closely related to the inflow characteristics of the duct. At the same time, it can be seen from the three curves with different attitude angles that the transition power at a large attitude angle is greater than that at a low angle of attack. The large attitude angle transition will trigger the power limit at low speed, because the higher the attitude angle is, the greater the resistance during the transition and the greater the required power. the ducted lift-increasing system. The results in this figure differ from those shown in Figure 7a in that in the high-speed range, the perpendicular angle of the ducted force may be maintained until the trigger power limit. Figure 8b shows the curve of the power of the UAV's power components with the flight speed during the transition process under different attitude angles. Under the condition of power limitation, if the required power and the maximum available power are equal, a power-based deflection angle-speed envelope of the vertical take-off and landing fixed-wing UAV's ducted lift-increasing system can be obtained. It can be seen that with the increase in speed at the same attitude angle, the required power first decreases and then increases, which is closely related to the inflow characteristics of the duct. At the same time, it can be seen from the three curves with different attitude angles that the transition power at a large attitude angle is greater than that at a low angle of attack. The large attitude angle transition will trigger the power limit at low speed, because the higher the attitude angle is, the greater the resistance during the transition and the greater the required power. Combining the flight envelope calculated based on the stall angle of attack and the flight envelope calculated based on the power limit, the transition flight corridor of the vertical take-off and landing fixed-wing UAV can be obtained (see in Figure 9). It can be seen that the left boundary of the transition corridor in the low-speed section is the deflection angle-speed envelope of the ducted lift-increasing system based on the stall angle of attack, and the right boundary of the high-speed section is the power-based deflection angle-speed envelope of the ducted lift-increasing system. The UAV can complete the transition flight in the transition corridor between the two envelopes. Combining the flight envelope calculated based on the stall angle of attack and the flight envelope calculated based on the power limit, the transition flight corridor of the vertical take-off and landing fixed-wing UAV can be obtained (see in Figure 9). It can be seen that the left boundary of the transition corridor in the low-speed section is the deflection angle-speed envelope of the ducted lift-increasing system based on the stall angle of attack, and the right boundary of the high-speed section is the power-based deflection angle-speed envelope of the ducted lift-increasing system. The UAV can complete the transition flight in the transition corridor between the two envelopes. Influence Analysis of the Manipulation Parameter The transitional flight process of the vertical take-off and landing fixed-wing UAV is the intermediate process connecting the vertical take-off and landing fixed-wing flight process, and it is an important flight mode of the vertical take-off and landing fixed-wing Influence Analysis of the Manipulation Parameter The transitional flight process of the vertical take-off and landing fixed-wing UAV is the intermediate process connecting the vertical take-off and landing fixed-wing flight process, and it is an important flight mode of the vertical take-off and landing fixed-wing UAV. During the transition flight, the ducted lift-increasing system deflects continuously, the aerodynamic configuration changes accordingly, and the aerodynamic force also changes accordingly. Transition flight is also a dangerous flight mode. The larger the declination angle-speed envelope of the vertical take-off and landing fixed-wing UAV's ducted liftincreasing system, the easier it is to achieve the transitional flight process. Within the flight envelope, the flight attitude of the UAV and the speed of power deflection have a certain impact on the transition process. In the transition process, on the premise of ensuring longitudinal balance, the transition at different pitch angles will have different transition state characteristics (see in Figures 10 and 11). From the calculation of flight data, it can be seen that in the ducted deflection data, when transitioning at a large attitude angle, the duct deflection can be completed at a lower flight speed, while for transition flight at a small attitude angle, a higher flight speed is required, as it allows for the flight transition to be completed. When deflection rates of the duct are different, fixed-wing vertical take-off and landing UAVs have different trim-flying capabilities. Within the allowable range of the power system, the deflection rate of the duct can vary from 1 • /s to 29 • /s (see in Figure 12). When the deflection rate of the duct exceeds 29 • /s, transition flight will exceed the lift fan trim capability of the aircraft, and, therefore, it cannot be completed. The simulation result is calculated under the stable equilibrium state of the UAV. In this state, it is found that when the transition time is the target, the smaller the pitch attitude angle, the faster the completion time of the transition flight. Taking the dynamic characteristics of the power system as the target, it is found that the transition flight can be completed stably when the pitch angle is 3 • , the duct deflection rate is 5 • /s, and the deflection rate of the small wing surface is 1.5 • /s. Influence Analysis of the Overall Parameter In aircraft design, the way in which the overall parameters of the vertical take-off and landing fixed-wing UAV affect the flight envelope is straightforward. In this work, the flight envelope of the vertical take-off and landing fixed-wing UAV is analyzed from the perspective of overall parameters, and a method for the design of the transitional corridor of the vertical take-off and landing fixed-wing UAV is proposed. The deflection angle-speed envelope of the ducted lift-increasing system based on the stall angle of attack is calculated with the maximum lift coefficient, and the stall angle of attack of the wing is mainly affected by the wing area, the wing lift coefficient, and the wing installation. Therefore, the deflection angle-speed envelope of the ducted lift-increasing system based on the stall angle of attack is affected by these parameters. Figure 13a shows the ducted lift-increasing system's deflection angle-velocity envelope based on the stall angle of attack after the wing area increased by 10%, 20%, and 30%, and Figure 13b shows the deflection angle-speed envelope of the ducted lift-increasing system's variation diagram based on the stall angle of attack after the maximum lift coefficient increased by 10%, 20%, and 30%. It can be seen that changing the wing area and changing the maximum stall lift coefficient can change the position of the deflection angle-speed envelope of the ducted lift-increasing system based on the stall angle of attack. The flight envelope moves to the left when increasing the wing area and increasing the maximum stall lift coefficient, and the flight envelopes expand by 4.6%, 6.9%, and 11.6%. In the transition process, on the premise of ensuring longitudinal balance tion at different pitch angles will have different transition state characteristics ures 10 and 11). From the calculation of flight data, it can be seen that in the flection data, when transitioning at a large attitude angle, the duct deflection c pleted at a lower flight speed, while for transition flight at a small attitude ang flight speed is required, as it allows for the flight transition to be completed. When deflection rates of the duct are different, fixed-wing vertical take-off and landing UAVs have different trim-flying capabilities. Within the allowable range of the power system, the deflection rate of the duct can vary from 1°/s to 29°/s (see in Figure 12). When the deflection rate of the duct exceeds 29°/s, transition flight will exceed the lift fan trim capability of the aircraft, and, therefore, it cannot be completed. The simulation result is calculated under the stable equilibrium state of the UAV. In this state, it is found that when the transition time is the target, the smaller the pitch atti- The simulation result is calculated under the stable equilibrium state of the UAV. In this state, it is found that when the transition time is the target, the smaller the pitch attitude angle, the faster the completion time of the transition flight. Taking the dynamic characteristics of the power system as the target, it is found that the transition flight can be completed stably when the pitch angle is 3°, the duct deflection rate is 5°/s, and the deflection rate of the small wing surface is 1.5°/s. Influence Analysis of the Overall Parameter In aircraft design, the way in which the overall parameters of the vertical take-off and landing fixed-wing UAV affect the flight envelope is straightforward. In this work, the flight envelope of the vertical take-off and landing fixed-wing UAV is analyzed from the perspective of overall parameters, and a method for the design of the transitional corridor of the vertical take-off and landing fixed-wing UAV is proposed. The deflection anglespeed envelope of the ducted lift-increasing system based on the stall angle of attack is calculated with the maximum lift coefficient, and the stall angle of attack of the wing is mainly affected by the wing area, the wing lift coefficient, and the wing installation. Therefore, the deflection angle-speed envelope of the ducted lift-increasing system based on the stall angle of attack is affected by these parameters. Figure 13a shows the ducted liftincreasing system's deflection angle-velocity envelope based on the stall angle of attack after the wing area increased by 10%, 20%, and 30%, and Figure 13b shows the deflection angle-speed envelope of the ducted lift-increasing system's variation diagram based on the stall angle of attack after the maximum lift coefficient increased by 10%, 20%, and 30%. It can be seen that changing the wing area and changing the maximum stall lift coefficient can change the position of the deflection angle-speed envelope of the ducted lift-increasing system based on the stall angle of attack. The flight envelope moves to the left when increasing the wing area and increasing the maximum stall lift coefficient, and the flight envelopes expand by 4.6%, 6.9%, and 11.6%. The right boundary of the flight corridor of the vertical take-off and landing fixedwing UAV is determined by the deflection angle-speed envelope of the power-based ducted lift-increasing system, thus reducing the required power of the flight state or increasing the available power of the flight system. It is possible to change the deflection angle-speed envelope of the ducted lift-increasing system. Figure 14 shows the deflection angle-speed envelope of the ducted lift-increasing system based on power after increasing the available power by 10% and 20%. It can be seen that as the available power increases, the power-based ducted lift-increasing system's deflection angle-speed envelope moves to the right, thus expanding the flight corridor boundary. It is calculated that the flight envelope expands by 21% and 40%. As shown in Figures 13 and 14, the effect of using the improvement in the overall parameters to expand the flight corridor is worse than using The right boundary of the flight corridor of the vertical take-off and landing fixedwing UAV is determined by the deflection angle-speed envelope of the power-based ducted lift-increasing system, thus reducing the required power of the flight state or increasing the available power of the flight system. It is possible to change the deflection angle-speed envelope of the ducted lift-increasing system. Figure 14 shows the deflection angle-speed envelope of the ducted lift-increasing system based on power after increasing the available power by 10% and 20%. It can be seen that as the available power increases, the power-based ducted lift-increasing system's deflection angle-speed envelope moves to the right, thus expanding the flight corridor boundary. It is calculated that the flight envelope expands by 21% and 40%. As shown in Figures 13 and 14, the effect of using the improvement in the overall parameters to expand the flight corridor is worse than using available power to expand the flight corridor under the same percentage. That is, the effect of using energy to expand the flight corridor is more significant than using the overall parameters. As the effect of the power parameters increased by 10% and that of the aerodynamic parameters also increased by 10%, it can be observed that increasing the power can enlarge the transition corridor by 21%, while increasing the aerodynamic parameters can only expand the transition corridor by 4.6%. Appl. Sci. 2021, 11, x FOR PEER REVIEW 14 of 15 available power to expand the flight corridor under the same percentage. That is, the effect of using energy to expand the flight corridor is more significant than using the overall parameters. As the effect of the power parameters increased by 10% and that of the aerodynamic parameters also increased by 10%, it can be observed that increasing the power can enlarge the transition corridor by 21%, while increasing the aerodynamic parameters can only expand the transition corridor by 4.6%. Figure 14. The influence of available power on flight corridors. Conclusions (1) The model proposed in this article can describe the transitional flight corridor of a vertical take-off and landing fixed-wing UAV; (2) The left boundary of the transition flight corridor of this type of vertical take-off and landing fixed-wing UAV is determined by the maximum lift coefficient during the transition flight, and the right boundary is determined by the maximum available power; (3) In the process of transition flight, the transition completion time with a small attitude angle is the shortest, while the transition power deflection at a large attitude angle is slower, and the completion process is stable. (4) Enlarging the transition corridor of this type of UAV through the design of overall parameters, the results show that the improvement in the transition corridor by increasing the available power is more obvious than that when increasing the aerodynamic parameter in the same proportion. Conclusions (1) The model proposed in this article can describe the transitional flight corridor of a vertical take-off and landing fixed-wing UAV; (2) The left boundary of the transition flight corridor of this type of vertical take-off and landing fixed-wing UAV is determined by the maximum lift coefficient during the transition flight, and the right boundary is determined by the maximum available power; (3) In the process of transition flight, the transition completion time with a small attitude angle is the shortest, while the transition power deflection at a large attitude angle is slower, and the completion process is stable. (4) Enlarging the transition corridor of this type of UAV through the design of overall parameters, the results show that the improvement in the transition corridor by increasing the available power is more obvious than that when increasing the aerodynamic parameter in the same proportion.
v3-fos-license
2019-07-07T13:05:12.008Z
2019-07-01T00:00:00.000
195812597
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1097/md.0000000000016298", "pdf_hash": "330768260d004736909c2d54fc28468038b8b17c", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46785", "s2fieldsofstudy": [ "Medicine" ], "sha1": "330768260d004736909c2d54fc28468038b8b17c", "year": 2019 }
pes2o/s2orc
Treatment rationale and design of the induction chemotherapy and adjuvant thoracic radiation in resectable N2-3A/3B non-small cell lung cancer (ICAT) study Abstract Background: The optimal treatment strategy for stage N2–3A/3B non-small cell lung cancer (NSCLC) remains controversial owing to its heterogeneity. Although multimodal therapy is considered the standard therapeutic approach for stage N2–3A/3B resectable NSCLC patients, the optimal combination strategy still needs to be clarified. Patients and methods: In total, 25 male and female patients aged between 20 and 75 years with stage N2–3A/3B resectable NSCLC will be included. Eligible patients will undergo tri-modality therapy comprising induction chemotherapy (3 cycles of combination therapy with carboplatin and nab-paclitaxel), followed by surgery and postoperative radiotherapy. Recruitment was commenced in April 2017, with a planned last follow-up in March 2024. As of May 2019, 1 subject has been enrolled. The primary endpoint is the treatment completion rate. The secondary endpoints are objective response rate (ORR) of induction chemotherapy, treatment-related adverse event, recurrence-free survival (RFS) time, and overall survival (OS) time. RFS and OS time will be calculated as the time from this study registration to first recurrence and all-cause death, respectively. Ethics and dissemination: The protocol was approved by the institutional review boards of Kyoto Prefectural University of Medicine and all the participating hospitals. Written informed consent was obtained from all patients before registration, in accordance with the Declaration of Helsinki. The study results will be disseminated via publication in peer-reviewed journals. Trial registration: Trial registration number UMIN000025010 and jRCT1051180028 Introduction The optimal treatment strategy for stage N2-3A/3B non-small cell lung cancer (NSCLC) remains controversial due to disease heterogeneity. The PACIFIC study showed that concurrent chemoradiotherapy and adjuvant durvalumab therapy significantly increased the progression-free survival in patients with stage 3 unresectable NSCLC. [1] Meanwhile, multimodality treatment with a combination of chemotherapy, surgery, and radiotherapy is considered the standard treatment option for stage N2-3A/3B resectable NSCLC. Patients who undergo surgery as part of trimodality therapy have better overall survival than those who undergo chemoradiotherapy alone, [2] showing that surgery should be considered as part of a multimodality treatment for patients with resectable lung cancer. Induction chemotherapy combined with surgery could be beneficial and feasible for patients with resectable NSCLC. [3][4][5] Furthermore, induction chemotherapy is non-inferior to induction chemoradiotherapy with respect to survival benefit. [6,7] However, postoperative radiotherapy (PORT) improves survival and relapse. [8][9][10] Stage N2-3A/3B NSCLC patients have high rates of distant metastasis and local recurrence, and aggressive consolidation therapy after induction chemotherapy and surgery improves survival benefit. [11] However, some patients who undergo upfront surgery are ineligible for adjuvant chemotherapy because of a poor status after invasive surgery. More aggressive induction chemotherapy has higher feasibility and survival benefit and can ensure compliance to the planned chemotherapy. An effective combination approach will improve the feasibility and maximize the benefit of chemotherapy, surgery, and radiotherapy and improve survival. This study aims to evaluate the feasibility of trimodal therapy comprising chemotherapy, surgery, and radiotherapy for stage N2-3A/3B NSCLC. We will use a combination approach comprising 3 courses of induction chemotherapy, surgery, and PORT. We selected carboplatin (CBDCA) with nab-paclitaxel (PTX) as induction chemotherapy regimen. CBDCA with paclitaxel is one of the standard induction chemotherapy or chemoradiotherapy regimens for advanced NSCLC, and a phase III international trial reported that nab-paclitaxel (nab-PTX) regimen had a favorable risk-benefit profile compared with that of PTX because it improves the objective response rate (ORR) and decreases the risk of adverse events such as severe neuropathy and neutropenia. [12] 2. Materials and methods Study design The study is an investigator-initiated, multi-institutional, singlearm, open-label prospective intervention phase II trial. Figure 1 depicts the study flowchart. Study setting Seven hospitals agreed to participate in this study. The protocol was approved by the institutional review board of each hospital. Written informed consent will be obtained from all patients before registration, in accordance with the Declaration of Helsinki. Patients will be registered in this study after independent review by the Center for Quality Assurance in Research and Development, Kyoto Prefectural University of Medicine. At least annual independent monitoring is planned, in accordance with the Japanese clinical trial guideline. Participants The tumors are staged according to the 8th edition of the Union for International Cancer Control TNM Classification of Malignant Tumors. [13] Patients who were histologically diagnosed with N2-3A/3B NSCLC without chest wall invasion will be recruited. The inclusion criteria are as follows: (1) Complete resection after induction chemotherapy is anticipated. (2) Confirmed normal respiratory function, which is defined as vital capacity (VC) >80% and forced expiratory volume in 1 second as percent of forced VC (FEV1%) >70%, within 56 days before obtaining informed consent. (3) History of chemotherapy or radiation to the chest. (4) Age between 20 and 75 years at the time of enrolment. (5) Eastern Cooperative Oncology Group Performance Status of 0-1. (6) Confirmed normal bone marrow, hepatic, and renal functions within 28 days before obtaining informed consent. The exclusion criteria are as follows: (1) Unresectable mediastinal lymph node, such as extranodal infiltration, on computed tomography (CT) (2) Having conditions contraindicated for CBDCA or nab-PTX administration (3) Severe hypersensitivity to CBDCA or nab-PTX (4) Severe bone marrow suppression (5) Severe renal disorders (6) Severe hepatic disorders (7) Past history of severe drug allergy (8) Pulmonary disorders (9) Past history of cardiac infarction within 180 days before informed consent (10) Chest CT showing possible interstitial pneumonia or idiopathic pulmonary fibrosis within 56 days before informed consent (11) Prescription with steroids equivalent to >10 mg/day prednisolone within 90 days before informed consent (12) Cardiac disorder of clinical concern (13) Mental disorders of clinical concern (14) Uncontrollable diabetes mellitus (15) Infections of clinical concern (16) Complications of clinical concern (17) Active double cancer (18) Patients who are pregnant, lactating, or potentially pregnant (19) Any other patients regarded by the investigators as unsuitable for this study. Dose and treatment regimens of induction chemotherapy Induction chemotherapy will consist of 3 cycles of combination therapy with CBDCA (area under the blood concentration curve 5 mg/mL/min per Calvert formula) over 1 hour on day 1 and 100 mg/m 2 nab-PTX over 30 minutes on days 1, 8, and 15 every 3 weeks. Dose modifications are allowed when grade 4 hematological toxicity or grade 2 to 3 nonhematological toxicity occur; toxicities are assessed according to the Common Terminology Criteria for Adverse Events (CTCAE) version 4.0. Surgery Clinical re-staging of the tumor after induction chemotherapy and before surgery will be performed via CT and positron Patients who achieve non-progressive disease status (PD, i.e., >20% increase in size or appearance of new lesions) will undergo surgery at 14 to 56 days after the end of induction chemotherapy. Surgical treatment will include lobectomy, bilobectomy, or pneumonectomy with systemic lymph node dissection. Pneumonectomy will be performed only when absolutely necessary. Open, minimally-invasive, or hybrid resection techniques are allowed in the trial. Eligibility criteria for surgery: (1) Complete resection after induction chemotherapy is anticipated (2) Predicted postoperative (Ppo)% VC >40% (3) Eastern Cooperative Oncology Group Performance Status of 0-1 (4) Confirmed normal bone marrow, hepatic, and renal function within 7 days before surgery according to the following parameters: (1) Leukocyte count ≥2,500/mm 3 (2) Hemoglobin ≥8.0 g/dL (3) Platelet count ≥70,000/mm 3 (4) Total bilirubin 1.5 mg/dL (5) Serum albumin ≥2.5 g/dL (6) Aspartate aminotransferase, alanine aminotransferase 100 IU/L (7) Creatine 2.0 mg/dL (8) Peripheral arterial oxygen saturation on room air ≥90%. Postoperative radiotherapy PORT will be performed within 56 days after surgery. Positive lymph node involvement on clinical imaging is defined as an enlargement of 1 cm or more in the short axis on CT scan and showing hypermetabolic uptake on fluorodeoxyglucose (FDG)-PET scan. The mediastinal target volume will be defined according to clinical guidelines and irradiated up to a total dose of 50 Gy in daily fractions of 2 Gy (Monday to Friday) within 56 days of PORT. In addition, subsequent boost irradiation of 10 Gy in daily fractions of 2 Gy will be applied in cases of positive margin (R1) or extracapsular tumor spread (total dose 60 Gy/30 Fr within 70 days of PORT). Three-dimensional CT images will be obtained for radiotherapy planning purposes. Opposing portal and multifield techniques are allowed in the trial according to irradiation methods. For pulmonary resection, we targeted an outcome of total lung V10 <40%, V15 <30%, and V20 <20%. Rationale for setting the target population size A total of 25 patients will be accrued in this study. Our analysis of previous trials showed that the complete resection rate after induction therapy was 71% to 76%. [6,14] We expect a feasibility rate of >75% if this study used bimodality therapy with combination of induction chemotherapy and surgery without radiotherapy. However, because this will be a trimodality therapy study, the completion rate is estimated to be lower than that of bimodality therapy. The expected rate and threshold rate are determined to be 75% and 50%, respectively. Under these conditions, when 1-side alpha error of 0.1, beta error of 0.2, and statistical power of 80% are assumed, 21 subjects are required. Considering allowance for dropouts, the sample size was set to 25 patients. 2.8. Statistical methods 2.8.1. Population to be analyzed. All subjects enrolled in this study (full analysis set, FAS), excluding the patients with serious violations (such as serious protocol deviation, violation for inclusion/exclusion criteria, and violation for prohibited concomitant medication/therapy) from FAS (per protocol set). Primary endpoint. The primary endpoint is the feasibility of trimodality therapy for resectable 3A/3B-N2 NSCLC, which will be evaluated according to the completion rate. Feasibility of trimodality therapy is defined as more than 50% of the enrolled patients completing the treatment. Assuming a null hypothesis of completion rate more than 50%, we will conclude that this trimodality therapy is useful. A 1-sided P value <.1 is considered statistically significant. 2.8.3. Secondary endpoints. The secondary endpoints are ORR of induction chemotherapy, safety of the trimodality therapy, recurrence-free survival (RFS), and overall survival (OS). Both RFS and OS will be calculated as the time from this study registration to first recurrence and all-cause death, respectively 2.8.3.1. ORR. Local response will be evaluated using images obtained via CT and PET-CT after induction chemotherapy in accordance with the RECIST guidelines. 2.8.3.2. Safety: Treatment-related adverse event. The safety of chemotherapy and radiotherapy will be evaluated according to the CTCAE version 4.0, while the safety of surgery will be evaluated according to the Clavien-Dindo classification. RFS curve. Kaplan-Meier method will be used to estimate the RFS curve and calculate the annual and 2-year RFS and their 95% confidence interval. OS curve. Kaplan-Meier method will be used to estimate the OS curve, and the annual and 2-year OS rates and their 95% confidence interval will be calculated. Ethics The trial was approved by the Ethics Committee of Kyoto Prefectural University of Medicine, Kyoto, Japan (Approval number: ERB-C-765-2, the last edition ver 2 14/Feb/2018). The trial is subject to the supervision and management of the Ethics Committee. Discussion The treatment of stage N2-3A/3B NSCLC remains controversial owing to its heterogeneity. Particularly, although multimodality therapy is considered the standard treatment option for stage N2-3A/3B resectable NSCLC patients, the optimal combination approach needs to be clarified. Chemotherapy is absolutely essential for the treatment of stage N2-3A/3B resectable NSCLC patients, [3,5] while upfront surgery is associated with lower completion rate of adjuvant chemotherapy. [15,16] Moreover, although there is no significant difference in survival benefit between neoadjuvant chemotherapy and adjuvant chemotherapy, but neoadjuvant chemotherapy is better tolerated as evidenced by the higher completion rate of full-dose chemotherapy and fewer high-grade toxicities. [17] However, neoadjuvant chemotherapy increases postoperative complications and surgical complexity and thus requires careful surgical technique and postoperative management. Radiotherapy does not add any survival benefit to induction chemotherapy [6] and increases the rate of severe postoperative complications, including bronchopleural fistula. [6,18] A high completion rate will allow for maximized effect of multimodality therapy, particularly with respect to survival benefit, in stage N2-3A/3B resectable NSCLC patients. The ICAT study will provide data on the most appropriate combination approach of multimodality therapy for stage N2-3A/3B resectable NSCLC patients.
v3-fos-license
2021-12-12T16:22:09.732Z
2021-12-10T00:00:00.000
245040889
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2071-1050/13/24/13654/pdf", "pdf_hash": "0a6ea01234bbd34b1ed3c56233b28d82018073e5", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46787", "s2fieldsofstudy": [ "Business", "Engineering" ], "sha1": "d8b5375fe871f2903089e5a23766344c213dfc2b", "year": 2021 }
pes2o/s2orc
From Contractual Flexibility to Contractor’s Cooperative Behavior in Construction Projects: The Multiple Mediation Effects of Ongoing Trust and Justice Perception : The flexible contract is an important mechanism for owners to govern contractors in construction projects. Given the limited explanatory power of the justice mechanism and the important role of relational factors, this study explored the role of ongoing trust and justice perception in the relationship between contractual flexibility and the contractor’s cooperative behavior and aimed to further reveal the potential influence paths through an empirical analysis. We found the following: (1) apart from justice perception, ongoing trust is another crucial mediation factor in the relationship. (2) Together with the former, ongoing trust plays significantly multiple mediation effects and constitutes the main indirect influence paths, among which the parallel one is strongest and the chain one comes third. (3) Moreover, both factors are more likely to be impacted by contract executing flexibility, compared with contract content flexibility. These findings enrich relational mechanism research and provide some guidance for the owner to build contractual flexibility to govern contractors’ behavior. Introduction Cooperation between the owner and the contractor forms the foundation of effective construction project outcomes [1]. However, due to the one-off, high complexity, diverse interests and information asymmetry between project parties, as well as asset specificity, cooperative relationships are hard to build and maintain automatically [2]. Under this condition, project contracts become the premise of project parties' cooperation [3]. Considering the inability of rigid contracts to adapt to the changing circumstances [4] and the weak legal institutions and high wasteful litigation [5], the owner inclines to sign a flexible contract to set up afterwards adjustable rules in advance, which can motivate the contractor [6], reduce opportunistic behavior [7], and then achieve greater efficiency [1]. Currently, most literature has explored the positive influence of contractual flexibility and qualitatively revealed its potential justice mechanism. Contractual flexibility not only realizes the reasonable transfer and fair sharing of risk between parties and enhances the fair cooperation [8,9] but also provides a compensation mechanism for project uncertainties, which creates a stable environment for cooperation [10,11]. Furthermore, the previous research of this study's authors has demonstrated that justice perception only plays a partial mediation effect between contractual flexibility and cooperative behavior [12], which means there are other important factors that still need to be studied. On the other hand, according to the literature, trust has been recognized as one of the most central issues when considering the relationship between organizations [13]. The building of trust can effectively reduce the parties' fear of uncertainty, improve the effectiveness of problem solving [14], further reduce opportunistic behaviors [15] and safeguard cooperative behavior [16]. Meanwhile, trust building is influenced by structural and functional factors [17], i.e., roles and responsibilities which are defined in contract. Especially, well-prepared, specified contracts allow for risk reduction and make it easier for relations to survive [18], which can enhance trust [19]. Furthermore, there is also a close relationship between fairness and trust, the former to a certain extent constitutes the foundation of the latter. The trust of one partner to the other usually comes from the perception of fairness in the transaction process. If the contractor believes that the owner's behavior is fair and his own interests can be effectively protected, he will have trust in the owner and think that the owner will not do anything harmful to his own interests. In this case, given the crucial role of trust and its close relationship with contracts and behavior, this study further explored its role. Meanwhile, as the parties in a transaction need to continuously prove their credibility through a series of specific interaction activities to promote the development and stability of long-term relationship, trust, as a subjective evaluation, will also change with the interaction between the two. Therefore, trust is not a result-oriented concept but a process-oriented concept. Thus, this study considered it as a dynamic process, namely, ongoing trust, and tried to reveal its role in the relationship. Therefore, this study takes ongoing trust as a point of departure and incorporate it into the mediation model of justice perception. The question arises as to what the role of ongoing trust between contractual flexibility and contractor's cooperative behavior is and how it works with justice perception. To fill in this gap, this study aims to explore the multiple mediation effects in the construction project and reveals the potential influence paths. Contractual Flexibility In line with Song, Zhu, Klakegg and Wang [12], we defined contractual flexibility as the contractor's positive ability for dynamic adaptation and flexible adjustment to respond to project uncertainty economically and quickly, in accordance with the provisions of the contract or within the reserved space during the signing and executing of the contract. The contract is not only the visual and explicit contract agreements but also the whole performing process. Especially in the construction project field, the signed contract does not mean the completion of the contract relationship, but the starting point of contracting [20]. Correspondingly, contractual flexibility not only embodies in the various kinds of terms of parties' game also in the relationship and process, which includes content flexibility (CF) and executing flexibility (EF) [12]. Contract content flexibility refers to the adaptability of formal contract. In this way, the contract is equivalent to the transaction contract or agreement, and contractual flexibility is understood as contract (terms) flexibility. The construction of flexible mechanisms or elements of contract terms is concerned. It firstly reflects a contract's completeness, which aims to cover the future possibilities ahead of time through forecasting all kinds of situations and uncertainties [3], such as "terms floating range" and "terms completeness", which go as far as possible to identify uncertainty factors ahead of time to improve the dynamic adaptability of contracts. Secondly, it reflects the active and passive selection space reserved in the adaptive clauses, such as "ex post clause adjustability", "ex post renegotiation clause" and "engineering change authority". The reservation of such space also includes two aspects: active selection and passive selection. The active one means to temporarily shelve identified uncertainty in order to reduce the signing cost [12]. In this way, when specific conditions or circumstances occur, it allows adjustment, renegotiation and other procedures in the project implementation that are specified in the clause. In contrast, the passive one is to set a framework for cooperation, such as some guiding or open clauses, when the uncertainty cannot be predicted due to insufficient information and other reasons, so as to achieve flexible response to future changes when obtaining more information [21]. At this point, some guiding or vague clauses should be set for further acquisition of more information and data, and a new or supplementary contract can be signed through renegotiation to achieve a flexible response to future changes. Contract executing flexibility refers to the ability to use the informal contract to replace the formal one in order to address unforeseeable uncertainties effectively, rapidly and economically [12]. This kind of informal contract may include different informal contractual relationships such as commitment and communication. Contract executing flexibility is based on the attributes of the relationship and the informal contract in the contracting and is often introduced into project contracts with relational methods and capabilities [3]. Actually, it mainly derives from relational contracts, an informal agreement formed by the value of future contractual relationships. Generally, it mainly turns to two forms, namely, the usage of informal contracts and flexible authorization [12,21]. The usage of informal contract handles unexpected events in a more flexible way, instead of sticking to inapplicable clauses or complex and time-consuming renegotiation process. Through informal authorizing, executing flexibility allows the contractor to address uncertainties rather than applying to or discussing with the owner about everything or details. On the one hand, when the terms of the formal contract are not applicable to the new situation or unexpected circumstances occur, it may cost more manpower, material resources and time to renegotiate and sign the formal contract, and both parties can resort to the relationship to deal with the new situation quickly and economically. On the other hand, due to the constraints of signing costs, it is difficult to specify all details in a formal project contract. At this time, owners are more inclined to solve problems through cooperation and mutual trust and give contractors more freedom to deal with problems by themselves rather than strictly reviewing or approving every business and making up for the incompleteness of formal contracts with lower costs. Ongoing Trust Trust is not a result-oriented concept but a process-oriented concept [22]. Especially in the construction project, the trust relationship between parties is not an isolated event but a process gradually formed in the project's implementation and mutual interaction, namely, ongoing trust (OT). It reflects the expectation about the reliability of the other party, which generates from specific actions [23]. Compared with the initial trust, ongoing trust between parties has a more sustained and significant impact on the cooperation intention, behavior and attitude of both parties [24]. Therefore, this study focuses on the contractor's ongoing trust in the process of project implementation and treats it as a positive intention that believes the owner will not harm his interests, even if he needs to take a certain amount of risk. Based on three classical models of trust applied to the project situation proposed by Hartman [25], Rousseau et al. [26] and Lewicki and Bunker [27], Yang and Shuai [28] discussed and found three sources of trust in Chinese construction projects, competence, relation and intuition trust. In this study, as the basis of transaction, the contract provides institutional support and guarantee for the activities of both parties. Compared with the initial intuition trust, the institution provided by the formal or informal contract will play greater influence on the contractor's mind and behavior, correspondingly becoming the trust's source. Therefore, this study combined the models from Yang and Shuai [28] and Rousseau, Sitkin, Burt and Camerer [26] and argued that the formation sources of ongoing trust includes the owner's competence, both parties' relationship and the institution. Justice Perception Justice reflects an important moral quality, which occurs when one party believes that a decision, outcome or process is balanced and correct [29]. It is not so much an objective outcome as a subjective and relative evaluation of an action or outcome [30], namely, justice perception (JP). Inter-organizational equity plays an important role in responding to internal and external environmental uncertainties and creates a basis for inter-organizational cooperation. If one party feels the existence of fairness in the trading activity, the convenience will hold a positive state for the future result, invest more resources into the trading and reduce opportunistic behaviors. On the contrary, the lack of fairness will lead to conflict and even the termination of the transaction relationship. In the context of construction projects, owners and contractors usually have conflicting goals, interests and concepts. The perception of fairness is of great value to the management of these conflicts, which can effectively reduce the risk of conflicts and promote the generation of cooperative behaviors. Combined with the construction context, we define justice perception as a project contractor's subjective perception or evaluation of the appropriateness of the owner's decision, behavior and project results in the project process. Furthermore, the formation of justice perception at the level of individuals and organizations mainly comes from the reasonability and equality of the distribution, procedures and interactions [31], which is recognized by scholars and widely used at individual and organizational levels [32]. Among them, distributive justice perception represents an individual's sense of fairness in the distribution of results, such as resources and salary, and focuses on the distribution of resources or results, as well as subjective judgment on input and return. In inter-organizational transactions, when one party believes that the outcome of the transaction is fair compared to its contribution, this convenience generates distributive justice perception. Procedural justice perception refers to individuals' perception of formal decision-making procedures and related policies, reflecting fairness in the process. It depends on whether the decision-making process is open, transparent and impartial. Fairness is perceived when one party believes it can control or intervene in the process. It can be seen that procedural fairness involves fairness in the process of outcome distribution and emphasizes control power in the process of distribution. The perception of interactive justice is one party's subjective perception of the quality of the relationship, emphasizing the important role of respect, honesty and politeness in interaction. This fairness also includes interpersonal fairness and information fairness. The former refers to whether the two parties treat each other politely and respectfully, while the latter reflects the degree and state of information exchange between the two parties. When one party feels the respect, courtesy and full exchange of information from the other party during the interaction, the interaction will have a sense of fairness. Contractors' Cooperative Behavior Cooperative behavior (CB) is the effort that individuals are willing to make efforts for others in order to complete tasks. It is always accompanied by a certain degree of sacrifice to achieve mutual help [33]. In line with Zhang et al. [34], this study defines the contractor's cooperative behavior as a series of adaptive and mutually beneficial efforts and coordination taken by the contractor, who is willing to take certain risks in order to achieve the common goals or interests with the owner in the project. According to literature, there are mainly two perspectives about the dimensions of cooperative behavior. One focuses on the characters of cooperative behavior, such as in-role and extra-role behavior [35]. The other one identifies dimensions according to the content structure, such as information exchange, shared problem solving and highlighted flexibility from Pearce [36]. Due to the complication and diversity of contractor's behavior, it is hard to accurately classify a kind of behavior into one attribute. Thus, this study more agrees with Pearce [36] and mainly focuses on three kinds of cooperative behavior. Contractual Flexibility and Contractors' Cooperative Behavior The contract with high completeness clearly prescribes different aspects of complex transactions in advance, such as working scope, conflict resolution procedures, contingencies and critical milestones [37], which clearly specify what is and what is not allowed [35]. This enhances the transparency of the trading and forms an effective supervision over opportunistic behavior to provide effective guidance for the future [38]. When disputes arise, the contractor can cooperate with the owner according to the rules and fulfill responsibilities [39] instead of being concerned about opportunistic behavior [40]. Meanwhile, the detailed contract also clarifies both parties' expectations, which can improve the consistency of objectives [41] and promote the continuous and cooperative activities [42]. The active and passive selection space of flexible contract creates a buffer zone and awards the contractor the right to seek fair return, which can decrease contractors' concerns about the unknown risk and restrain opportunistic behavior [4]. Meanwhile, this flexibility reflects a good future expectation and expresses the inherent willingness to continue the transaction, which can encourage the contractor to assume responsibility and reduce the possibility of severe losses. In addition, the selection space also provides an adjustment mechanism, and permits the contractor to cooperate according to specific situations, instead of sticking to inapplicable clauses [21]. On the other hand, contract executing flexibility takes the favorable relationship as flexible elements and combines them into the contracting through relational competence [20]. Instead of sticking to the initial clauses [3], it creates a buffer space and allows the contractor to fully discuss the project process and tasks with the owner and better understand uncertainties or problems. Such a clear post-definition is a key factor to promote the contractor to solve problems through cooperation [43] and response to the continuously trading environment effectively [3]. Secondly, instead of referring to the owner for everything or the complex and time-consuming formal negotiation process, contract executing flexibility supports the contractor to independently deal with some risks in a mutually beneficial way. This in turn strengthens the inter-organizational ties and positive emotions and improves the transaction satisfaction, forming the basis for the contractor's cooperative attitude [44]. Thereby, we hypothesize: Hypothesis 1 (H1). Contract content flexibility (H1a) and executing flexibility (H1b) affect the contractor's cooperative behavior positively. Mediation Effect of Ongoing Trust The pursuit of completeness is helpful to define the contractor's role [3]. When signing this flexible contract, the contractor will be committed to the transaction [45], and this will be further reflected as the acceptance and performance of responsibilities, which promotes institution-oriented trust. Meanwhile, formal, clear and detailed clauses provide a formal platform to prove that the owner has enough competences to fulfill his responsibilities, such as capability of payment, which is helpful for the emergence and persistence of trust [46]. Secondly, the selection space provides an opportunity for the contractor to communicate and negotiate according to changes and enhances favorable mutual interaction that is beneficial for trust [47]. On the other hand, in the contracting process, executing flexibility coordinates the parties' interests according to new changes or uncertainties in a flexible way. This is helpful to develop a good relationship and enhance or maintain a trust relationship [47]. Furthermore, the usage of an informal contract usually means that the contractor believes the flexible execution can effectively safeguard his interests and bring more income than keeping the original contract. This will maintain his good will about the project's returns [48] and create an environment to reduce opportunistic behaviors [47]. Additionally, the flexible authorization mechanism is beneficial to reduce the power imbalance between the parties [49], which further promotes the contractor's satisfaction and contributes to the increase in ongoing trust. Meanwhile, trust helps to ease the hostile atmosphere between parties, avoid excessive self-protection and even weaken the willingness to adopt opportunistic behavior [50]. Meanwhile, it constitutes the intrinsic motivation to pay more attention to the common interests with the owner [44] and cultivates the partnership between parties [51]. It also creates moral constraints and brings the belief that the owner will cooperate in a reliable, authentic and long-term manner, instead of short-term and self-interested behaviors. Thereby, we hypothesize: Hypothesis 2 (H2). Ongoing trust plays a mediation role between contract content (H2a)/executing (H2b) flexibility and cooperative behavior. Mediation Effect of Justice Perception By designing a relatively complete contract, contract content flexibility clarifies the potential solutions of various contingencies in advance as far as possible [21]. Under this, the contractor's interests are more likely to be integrated with the owner's and be realized through negotiated strategies, which will promote the rationality of profit distribution [52]. Secondly, when some unexpected risks occur, the framework provided by the adaptive clauses supports to resolve matters smoothly and efficiently at the institutional level and avoid disputes between the parties [44]. Moreover, the active and passive selections space gives the contractor the right to participate in the risk decision and realizes the share of risk responsibility reasonably [53]. In short, contract content flexibility forms a mechanism of adaptability to prompt reasonable risk sharing and realize joint decision and good interaction, which further which constitutes the source of justice perception. On the other hand, in the flexible executing process, the contractor is more likely to focus on the effective dialogue and negotiation with the owner to solve problems. This forms a supplement to the incompleteness of the formal contract [54] and offers the contractor more renegotiation opportunities to jointly negotiate the division of responsibilities and solutions with the owner [20]. In turn, the contractor will make positive judgment about acceptability and reasonability of project outcomes. Secondly, the proper authorization improves the contractor's positive recognition about the relationship and make him believe the executing and interaction with the owner are fair [55]. All of these realizes the reasonable division and response of uncertain results and processes. Correspondingly, the generation of justice perception will encourage the contractor to form a good will and make efforts towards the common goals, which can effectively curb moral hazard [56] and promote more cooperative behaviors [40]. Specifically, the distribution justice perception means the contractor's efforts are equally rewarded, which can enable him to invest more resources to maintain the partnership and contractual relationship [31,40]. Secondly, when the contractor can fairly participate in the project process, he will believe that his knowledge and ability are affirmed [57] and make efforts to improve the relationship quality [58]. Thirdly, justice from interaction lets the contractor feel him are being treated fairly and stimulates him to act for mutual benefit, even showing a positive behavior beyond the contract [59]. Thereby, we hypothesize: Hypothesis 3 (H3). Justice perception plays a mediation role between contract content (H3a)/ executing (H3b) flexibility and cooperative behavior. As discussed above, contractual flexibility builds a mechanism to deal with project uncertainty fairly and reasonably in a less costly and more efficient way. This brings the contractor a high sense of justice, which constitutes the premise of transaction reciprocity and promotes him to behave cooperatively. On the other hand, a flexible contract creates a cooperative atmosphere and reflects the positive expectation for the benign transaction, which helps to create the contractor's ongoing trust. Based on this, the contractor is more likely to focus on the coordination of overall interests and form self-moral constraints, which positively relate to cooperative behavior. Thereby, we hypothesize: Chain Mediation Effect of Ongoing Trust and Justice Perception Justice perception has a positive effect on the formation of continuous trust. Actually, justice is a necessary condition for trust [60]. The formation and maintenance of ongoing trust derives from the understanding that his investment is rewarded and treated [24]. When the contractor is to obtain a fair, reasonable income, a reliable system program as well as the owner's friendliness, he will believe that the owner can give reasonable returns, which promotes the contractor to continue to trust the owner. That is to say, the contractor with a justice perception believes the owner treats him fairly and his interests and rights could be protected and guaranteed effectively in the future, which enable him to be satisfied with the project's process and results and thus be willing to trust, i.e., to be vulnerable to the owner [61]. Furthermore, based on the above analysis, contract content and execution flexibility have a positive impact on the contractor's justice perception. Specifically, project contract flexibility can be regarded as a negotiated concession in the transaction relationship, which can form a positive attitude in the form of reciprocity between the parties to the transaction. This cooperative trading environment reduces the threat of opportunism and is conducive to the success of the social exchange process and the creation of value; that is, contract flexibility can promote sustainable and fair behavior. Accordingly, contract content and execution flexibility may promote contractors' cooperative behavior through the internal relationship path "fairness perception-continuous trust". Thereby, we hypothesize: Hypothesis 6 (H6). Contractor's justice perception and ongoing trust play a chain mediation effect between contract content(H6a)/executing(H6b) flexibility and cooperative behavior. Sampling and Data Collection This study adopted a questionnaire survey to collect data from project contractors in Chinese construction projects. The respondents needed to have more than 1 year of project work experience and understand the signing and execution of project contracts. Actually, the respondents are mainly from the following positions: project manager, technical engineer and contract manager, who have a deep understanding about project contracting. At the same time, respondents were required to answer the questionnaire based on the experience of specific construction projects they have participated in or were currently participating in. Based on the specific project, relevant data are collected by focusing on the content and process of a single project contract, so as to improve the reliability and validity of data. We found the survey samples in the field of construction projects and distributed questionnaires through Dalian University of Technology and Sun Yat-sen University University Alumni Association, MBA students' training courses and enterprise research programs. Researchers were able to get close to interviewees to ensure that they come from the construction project industry and have some experience in project management or participation. Moreover, an electronic questionnaire was issued to collect data through an online survey platform. Actually, the electronic questionnaire was distributed with the help of the above respondents. The questionnaire was further expanded to the respondents' related enterprises and peers, forming a "snowball" chain questionnaire distribution and acquisition, expanding the distribution range. The data collection lasted for 6 months from July 2018 to December 2018. A total of 438 questionnaires were issued, and 387 questionnaires were collected, which lasted for 6 months, and then 317 valid questionnaires were obtained. As shown in Table 1, the differences between men and women are relatively obvious, and the respondents are mostly with bachelor's or master's degrees, which is consistent with the Chinese construction industry that there are more male practitioners or managers who generally have a higher education degree. At the same time, the respondents mainly include project managers and contract managers who have extensive work experience, and the quantity distributions of age, working years and the number of projects are relatively uniform, and the variance of each variable is small, which indicates that the sample data are distributed evenly and representatively. Measurement We selected classical scales with reliability and validity and translated all the items between Chinese and English. Through the comparison and analysis of the Chinese and English scales, we designed the questionnaire and discussed the initial measurement scales with professors, lecturers, doctoral students, etc., in the form of emails or interviews before the formal distribution. Based on the feedback, the questionnaire was modified and revised. The questionnaire was tested by means of small-scale questionnaire distribution and data collection. A total of 150 questionnaires were issued, and 113 valid questionnaires were obtained as pre-survey data to test the reliability and validity of the questionnaire. After testing and modifying, the final questionnaire includes 37 items of 5 variables, as shown in Appendix A. The respondents were asked to indicate the extent of their agreement with statements, using a Likert scale ranging from 1 'strongly disagree' to 5 'strongly agree'. Considering the sample number is about 8.5 times of the items, the requirement of 5-10 times was met [62]. Contractual Flexibility The items to measure project contractual flexibility were adopted from the scale in Song, Zhu, Klakegg and Wang [12], which has been proven to have high reliability and validity and be applicable to the Chinese construction industry. As indicated in Appendix A, five items were used to capture contract content flexibility, and the other five ones were used to measure contract executing flexibility. Contractors' Cooperative Behavior The measurement of contractor cooperation behavior mainly refers to the research results of Pearce [36] and Zhang, Zhang, Gao and Ding [34] and used 9 items to measure the behavior in 3 aspects, namely, information exchange, shared problem solving and highlighted flexibility. All the items were revised to be applicable in the construction project context (Appendix A). Justice Perception and Ongoing Trust Contractors' justice perception is an intermediary variable in this study. Its measurement mainly refers to the research results of Poppo and Zhou [40], Yaling et al. [63] and Colquitt et al. [64] and includes 9 items, which were modified considering the Chinese construction practice and measured distribution, procedure and interaction justice, as displayed in Appendix A. The measurement for contractor's ongoing trust was based on the items from the research work of Rousseau, Sitkin, Burt and Camerer [26], Khalfan, Mcdermott and Swan [24] and Yang and Shuai [28] and explores ongoing trust using 9 items from 3 aspects, namely, capability, institution and relation. The scale was modified and adapted considering the Chinese construction practice. Control Variables This study mainly has controlled for two kinds of variables, namely, project properties (project type and size) and relationship properties (prior cooperation and future business prospects), to eliminate potential influences from other factors. The type and size of a project may affect its implement [65,66]. Meanwhile, prior relationships (PR) between organizational partners may reduce opportunism [67], and future business (FB) prospects also significantly influence contractors' behavior [35]. Common Method Bias Two tests were adapted to eliminate potential common method bias, including program control and statistical control [68]. Firstly, we analyzed the accuracy and comprehensibility of the scale by Chinese-English mutual translation and experts' discussion to ensure the scientific nature and comprehensibility of questionnaire items. The respondents are required to answer the questionnaire based on specific construction project to improve the reliability and effectiveness of the data. Meanwhile, we clearly stated the purpose of this and promised to keep their confidential information in order to let them answer questions with an open attitude and true view. Additionally, Harman single factor test was performed for unrotated factor analysis of all the items. The results showed there are five common factors, and the explanatory power of first one is 45.9% (<50%), which means the common method bias in the study is not a major concern. Table 2 shows that the KMO values of five research variables are over 0.50, and the chi-square values of Bartlett's spherical test are all significant, indicating the research data are suitable for exploratory factor analysis. Meanwhile, Cronbach's coefficient of each variable is greater than 0.8, the CITC value of each item is greater than 0.5, the factor load is greater than 0.6 and the KMO value is greater than 50%. These indicate that the formal survey measurement scale has a high internal consistency and structural validity. The results of principal component analysis showed that the standardized factor load coefficients of all latent variables are distributed between 0.700 and 0.874, over 0.50 [69], and p values are less than 0.001. The reliability of the internal indicators within each variable was ensured as the values of composite reliability (CR) are over 0.7 [70]. The convergent validity of the factors was achieved as the values of the average variance extracted (AVE) are greater than 0.5. The AVE of each factor is higher than its squared correlation with other factors (see Table 3), and each measurement item has the highest loading on the corresponding factor (see Table 4). These together indicate a satisfactory discriminate validity of the factors [71]. Overall, the measures in this study have adequate reliability and validity. There are 37 observed variables in this study, which can meet the T rule of model recognition. The measurement items of each latent variable are more than three. Therefore, the structural model is recognizable. This study used Amos24.0 to test the initial structure model. As shown in Table 5, although all indices have not reached the critical standards, the overall results are satisfactory. According to literature, all paths in the model have theoretical significance and can be retained. Therefore, this study modified the initial model according to model fitting correction index, which shows that some covariance correction indices of the error terms are high. Thus, this study adjusted the correlation paths until no significant results of each correction index existed, and the overall model fittings were continuously improved. The final fitting results and the model(M0) are shown in Table 5 and Figure 1, respectively. The modified error terms are all belonging to the same variable. The overall fitting degree of the structural model is good, which meets the critical value requirements, and the model passed the test. Table 6 shows that H1a failed the test (β = 0.067, p > 0.05), while the other eight paths are confirmed and H1b and H5 passed. This means that project contract content flexibility does not affect the contractor's cooperative behavior directly. Contract executing flexibility (0.274), justice perception (0.33) and ongoing trust (0.304) significantly affect cooperative behavior. Meanwhile, the positive impacts of contractual flexibility (content and executing) on justice perception or ongoing trust exist. Justice perception has a positive impact on ongoing trust (0.304). Table 6 shows that H1a failed the test (β = 0.067, p > 0.05), while the other eight paths are confirmed and H1b and H5 passed. This means that project contract content flexibility does not affect the contractor's cooperative behavior directly. Contract executing flexibility (0.274), justice perception (0.33) and ongoing trust (0.304) significantly affect cooperative behavior. Meanwhile, the positive impacts of contractual flexibility (content and executing) on justice perception or ongoing trust exist. Justice perception has a positive impact on ongoing trust (0.304). Preliminary Mediation Effect Test This study used PROCESS Model 4 for SPSS Macro [72] to estimate 4 simple mediation models, with 5000 resamples and bias-corrected 95% CI. The models and results are shown in Table 7. According to the results, justice perception and ongoing trust play a partial mediation effect between contractual flexibility (contract content and executing flexibility) and the contractor's cooperative behavior. H2 and H3 are verified. As Table 8 Chain Mediation Effect Test In M7, the mediation effect of path 2 is 0.0309, accounting for 17.70% of the total mediation effect, and the confidence interval is (LLCI = 0.0121, ULCI = 0.0599), which means the two-stage chain mediation effect from justice perception to ongoing trust has been verified. Furthermore, comparing the effect difference between paths, it is shown that the independent mediation effect of justice perception or ongoing trust is similar to their the two-stage continuous mediation effect. In M8, the mediation effect of path 2 is 0.0351, accounting for 15.98% of the total mediation effect, and the confidence interval is (LLCI= 0.0156, ULCI = 0.0667), which means the two-stage mediation effect has been verified. Furthermore, the effect differences of path 1 and 3 (LLCI = 0.0225, ULCI = 0.1196) or path 2 and 3 (LLCI = −0.1001, ULCI = −0.0047) are significant, which implies that the independent mediation effect of justice perception or ongoing trust is also stronger than the two-stage chain mediation effect. Positive Impact of Contractual Flexibility on Contractor's Cooperative Behavior The empirical results show that contract content flexibility has no significant direct effect on contractor's cooperative behavior (β = 0.109, p > 0.05) and H1a is invalid. This is different with the findings of Song, Zhu, Klakegg and Wang [12]. Considering we have larger samples, more mediation and control variables and various test methods, which are beneficial to improve the reliability and validity of the model, this finding is a revision and improvement of the previous study. There may be two reasons for this. First, although the improvement of the flexibility in a contract reduces the ambiguity, it cannot directly promote the contractor's willingness and behavior to bear certain risks for the owner. Secondly, the Chinese construction industry is a buyer's market. In this situation, the buffer space provided by the flexible contract content may also become an opportunity for the owner to seek rent. Therefore, the contractor may not be willing to cooperate. Furthermore, compared with the existing research on the content and behavior of contracts, most of them believe that the content of contracts first affects the relationship state of both parties and then affects their behaviors [4,37,38,42]. For example, the existence of the adjustable clause firstly reduces the contractor's worry about the uncertainty of the future and produces trust in the owner, thus reducing their opportunistic behavior and showing more willingness and behavior of cooperation. Thus, the results of this study further confirm this relationship through an empirical analysis. Relatively, contract executing flexibility can construct, reinforce and maintain relational contracts and then strengthen the cooperative basis. This prompts the contractor to form positive self-reinforcement and show more cooperative behavior. On the other hand, contract executing flexibility has a significant direct effect on the cooperative behavior (β = 0.274, p < 0.001). This result verifies the viewpoints of Harris et al. [73] and Kujala, Nystén-Haarala and Nuottila [21]. Mediation Effect of Ongoing Trust The path coefficients of contractual flexibility (content and executing flexibility) and ongoing trust are significant (β = 0.175, p < 0.05; β = 0.295, p < 0.001). This further deepened the discussion of Chow, Cheung and Chan [45] and Manu, Ankrah, Chinyio and Proverbs [46] from the perspective of the contract. At the same time, it also details the research of Shuibo, Junying and Zhenyu [51] about the relationship between contractual flexibility and ongoing trust. Contract content flexibility can stimulate the contractor commitment, good will and inner satisfaction. Contract executing flexibility can reduce the uncertainty in the contracting, maintain the contractor's interests and build benign relationship. These further promote the contractor's positive expectation for the future and maintain the internal sense of trust. Thus, the partner contract relationship helps build contractor's trust [24]. On the other hand, empirical results show that a contractor's ongoing trust has a significant impact on their cooperative behavior (β = 0.304, p < 0.001). This is consistent with the research results of Shuibo, Junying and Zhenyu [51] and Yaling, Huiling, Peng and Yilin [7] in the field of construction projects and further tests the positive relationship between the inner state of the subject and its behavior. This shows that, from a process perspective, ongoing trust is an important antecedent of cooperative behavior. This factor can strengthen the internal motivation of cooperation, form certain moral constraints, and thus reduce opportunistic behavior. Overall, contractual flexibility can indirectly affect contractor's cooperative behavior through the mediation effect of ongoing trust. Mediation Effect of Justice Perception The empirical results show that contract content and executing flexibility both have a positive impact on the contractor's justice perception, which also has positive impact on cooperative behavior (β = 0.223, p < 0.001). Thus, contractual flexibility can effectively affect the subjective judgment of the contractor, which is consistent with the research of [74]. When the contractor believes he is treated fairly in the project, he will respond to the goodwill of the owner, actively cooperate with the owner's activities and take on a cooperative attitude and behavior. In other words, the contractor's justice perception mediates the relationship between contractual flexibility and cooperative behavior, which is consistent with the research results of Luo [58] and Zhang, Zhang, Gao and Ding [34]. Multiple Mediation Effect of Ongoing Trust and Justice Perception According to the test results, contractual flexibility affects the cooperative behavior through the multiple mediation role of ongoing trust and justice perception. Firstly, the independent effect value of justice perception or ongoing trust accounts for about one third of the total effect in M1-4, which means both factors have small and partial mediation effects. Secondly, the parallel mediation effect accounts for about half of the total effect, M5(0.1746) and M6(0.2197). The mediation effect values of ongoing trust are similar to the ones of justice perception in both models. These suggests that ongoing trust is a crucial mediation factor between contractual flexibility and cooperative behavior, except justice perception, which confirmed the importance of integrating both. Thirdly, according to the results of the path coefficient analysis, the contractor's justice perception has a significant positive impact on ongoing trust (β = 0.304, p < 0.001), which confirms the discussions of Khalfan, Mcdermott and Swan [24] and Zhiwei and Ying [61] about justice and trust in construction project. Based on this, both constitute a chain mediation model and have the similar effects in M7 and M8, which also confirmed the value of ongoing trust. By comparison, the parallel mediation effects are larger than the independent ones, of which the chain mediation effect is similar (M7) to or smaller(M8) than their independent ones. These suggests that the parallel mediation model has the most explanatory power, and the chain one comes third. These further indicate that ongoing trust and justice perception are two crucial mediation factors. Moreover, the two-stage mediation paths have the least mediation effect. That is probably because it is usually more time-consuming to establish ongoing trust through justice, and the justice is more directly reflected in the contractor's behavior rather than the secondary transmission. Additionally, in M1-8, the multiple mediation effects of two factors between contract executing flexibility and cooperative behavior is stronger than those between the contract content flexibility and cooperative behavior. This may be because the formation of both factors comes from the contractor's subjective understanding and judgment, which is not entirely from the project contract itself but is gradually formed through the activities and interactions of both parties in the project process. In other words, the executing process is more conducive to the contractor's understanding and judgment. Therefore, the flexibility of project contract executing has a greater impact on both. In summary, justice perception and ongoing trust play significant and multiple mediation effects between contractual flexibility and a contractor's cooperative behavior, among which the parallel ones are most prominent and constitute the main indirect influence paths, which have closer relationship with contract executing flexibility. This suggests that justice and trust are important influencing factors of inter-organizational cooperation and have strong direct effects on cooperation. Conclusions and Implications This study integrates ongoing trust into the relationship model of contractual flexibility and contractor's cooperative behavior from the perspective of inter-organization relationships and aims to reveal the potential and multiple mediation effects of ongoing trust and justice perception. The empirical analyses show that (1) apart from justice perception, ongoing trust is another crucial mediation factor in the relationship. (2) Together with the former, ongoing trust plays significantly multiple mediation effects and constitutes the main indirect influence paths, among which the parallel one is strongest and the chain one comes third. (3) Moreover, both factors are more likely to be impacted by contract executing flexibility, compared with contract content flexibility. The foundations contribute to build a framework in the construction projects, which explores the relationship among contractual flexibility, justice perception, ongoing trust and cooperative behavior. The results take major relational factors of justice and trust into account and develop the research on the relationship between contractual flexibility and cooperative behavior by deconstructing the internal mechanism. What is more, it reveals the important multiple mediation effect of two relational factors and builds the influence paths. Specially, the direct and positive effects of "contract execution flexibility-cooperative behavior", "contract content/execution flexibility-justice perception/ongoing trust" and "justice perception/ongoing trust-cooperative behavior" are confirmed. Secondly, we examined and analyzed the existence of multiple mediating effects of justice perception and ongoing trust (simple mediating, parallel mediating and chain mediating) between project contractual flexibility and cooperative behavior. Through the comparative analysis of each intermediary relationship path, this paper revealed the importance of the parallel mediation effect, which deepens the research on the intermediary relationship path. Again, this study verified the existence of two relationship paths, "contractual flexible-justice perceptions-ongoing trust-cooperative behavior" and "contractual flexible-justice perception/ongoing trust, and cooperative behavior". The above empirical research results verify and support the existing qualitative research and systematically explains the multiple impact paths of contract flexibility on contractor cooperation behavior. For practitioners, especially project owners, this present research provides some guidance. Firstly, it is important for practitioners to focus on the contract relationship and contract's performance. By improving the level of the contractual flexibility, especially the executing flexibility, the owner can promote the ability to adapt to and adjust the internal and external dynamic environment and then stimulates the contractor's cooperative behavior. Secondly, considering the role of ongoing trust, the owner should pay attention to the equity of the project contract and self-reliability simultaneously. Based on this, a stable trading relationship can be strengthened, which will maintain the contractor's trust, provide reasonable return to the contractor and then motivate him to cooperate. Notwithstanding the contributions and implications to the academia and practice, this study also has several limitations. This study takes the perspective of a relationship to discuss and carry out the research and mainly focuses on two kinds of relationship factors that reflect bilateral cooperation, namely, justice perception and ongoing trust. Given their partial mediation effect, further research is necessary to identify and discuss other factors and enrich the related conclusion. In addition, the research data in this study come from the Chinese construction industry and are rooted in the Chinese context. In China, although contract laws and regulations are constantly improving and strengthening, the signing and execution of project contracts are still quite arbitrary in practice. Especially for the content and execution of contracts, the social characteristics of "guanxi" often provide more flexibility. However, this flexibility is not always beneficial. The study of the Chinese situation can more effectively discuss the value of beneficial flexibility in the contract content and execution process, improve the understanding of reasonable contract flexibility and then strengthen the project contract management activities. Future research could adopt a cross-cultural perspective and explore whether the two relation factors play the same mediation effects in in different cultural contexts. Institutional Review Board Statement: Ethical review and approval were waived for this study, due to REASON: in China, there is no mandatory requirement for relevant research to be conducted and approval by the Ethics Committee. Meanwhile, the relevant Ethics Committee department does not issue a formal approval document. Therefore, the ethics statements can only be expressed through the authors' relevant statements. And we confirm that all participants in the study were clearly and accurately informed about how the data were used before receiving the questionnaire survey, and they agreed to participate in the research process. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: Not applicable.
v3-fos-license
2022-11-26T17:16:59.628Z
2022-11-23T00:00:00.000
253899746
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1420-3049/27/23/8165/pdf?version=1669284417", "pdf_hash": "f31c2271ed06cb3e2d9ce1bf002dd63b778d4236", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46788", "s2fieldsofstudy": [ "Materials Science", "Chemistry" ], "sha1": "298c7b67bd835bd7fcdf59ec46490d2a24fe04f1", "year": 2022 }
pes2o/s2orc
Luminous Self-Assembled Fibers of Azopyridines and Quantum Dots Enabled by Synergy of Halogen Bond and Alkyl Chain Interactions Herein, a simple approach for the fabrication of luminous self-assembled fibers based on halogen-bonded azopyridine complexes and oleic acid-modified quantum dots (QDs) is reported. The QDs uniformly align on the edge of the self-assembled fibers through the formation of van der Waals force between the alkyl chain of oleic acid on the QD surface and the alkyl chain of the halogen-bonded complexes, 15Br or 15I. Furthermore, the intermolecular interaction mechanism was elucidated by using Fourier-transform infrared spectroscopy (FTIR), Raman spectroscopy, and density functional theory (DFT) calculations. This approach results in retention of the fluorescence properties of the QDs in the fibers. In addition, the bromine-bonded fibers can be assembled into tailored directional fibers upon evaporation of the solvent (tetrahydrofuran) when using capillaries via the capillary force. Interestingly, the mesogenic properties of the halogen-bonded complexes are preserved in the easily prepared halogen-bonded fluorescent fibers; this provides new insight into the design of functional self-assembly materials. Introduction The design and synthesis of molecules that can self-assemble into functional supramolecular structures with fascinating properties through multiple noncovalent interactions is a frontier in the material and chemical research fields. Several organic self-assembled supramolecular functional fibers, which show promise for biomedical applications, have been synthesized via electrospinning and inter-or intramolecular interactions, resulting in well-defined morphologies [1][2][3][4][5][6][7][8]. The halogen bond, a versatile noncovalent interaction that has directionality, tunable interaction strength, good hydrophobicity, and is compatible with the large size of halogen atoms, has been used for the fabrication of supramolecular assemblies, such as photoresponsive mesogens [9], supramolecular gels [10][11][12], microphase structures [13], and photo-actuators [14]. However, its application to supramolecular functional fibers has been little explored. The first report of halogen-bonded fibers, published in 2013, involved the use of bis (pyridyl urea) and 1,4-diiodotetrafluorobenzene in polar media to achieve gelation [10]. Subsequently, the analogous halogen bond-based supramolecular sol-gel transition of azopyridines was reported; however, halogen-bonded fiber structures were not discussed [11,12]. Azopyridines, which exhibit interesting self-assembly and photoresponsive abilities, have been used to fabricate supramolecular assemblies and are regarded as the most widely used components in the field of supramolecular chemistry [15,16]. The majority of reported azopyridine self-assembled fibers are constructed through hydrogen bonds and ionic bonds, and their self-assembly conditions and morphologies have been established. However, further investigation is required to develop supramolecular fibers with optical or electrical functions by mixing them with inorganic nanomaterial components. Quantum dots (QDs) are zero-dimensional inorganic materials with unique features, including high fluorescence efficiency, a narrow emission band, and tunable emission, owing to their size dependence; thus, they are commonly employed in solar cells, emitting diodes, biomedical imaging, and fluorescent anti-counterfeiting technology [17][18][19][20][21][22][23][24][25]. Recently, electrospun fluorescent fibers containing organic components and QD composites, such as CdSe/ZnS core-shell QDs and CdSe QDs, have been realized for use in polymeric lasers and optical sensors [26,27]. Furthermore, an intense circularly polarized luminescent material was prepared by forming a novel luminescent chiral nanotube using a chiral lipid gelator as a chiral template for the QDs [28]. However, the formation of azopyridine supramolecular fluorescent or directional fibers that incorporate inorganic nanomaterials through combined halogen bonds and van der Waals force has not yet been reported. Considering this, we used our previously reported azopyridine halogen-bonded liquid crystal (LC) complexes (nX, X = Br, or I) to assemble fibers that exhibit an ordered orientation and luminescence (Scheme 1); details of the synthesis are presented in the ESI. We established that 15Br could promote the targeted directional arrangement of disordered supramolecular fibers with the aid of a capillary when tetrahydrofuran (THF) is used as a solvent. Mixing oleic acid-modified CdSe/ZnS core-shell QDs with the LC complexes resulted in the formation of large luminous fiber crystals of 15X@QDs (X = Br or I), in which the QDs were aligned along the edge of the self-assembled fibers due to van der Waals force between the long alkyl chain of oleic acid on the QD surface and 15Br or 15I. This is a novel approach for the fabrication of supramolecular structures with new properties and functions that can be applied in drug detection, biosensors, and electroluminescent devices. ties, have been used to fabricate supramolecular assemblies and are regarded as the most widely used components in the field of supramolecular chemistry [15,16]. The majority of reported azopyridine self-assembled fibers are constructed through hydrogen bonds and ionic bonds, and their self-assembly conditions and morphologies have been established. However, further investigation is required to develop supramolecular fibers with optical or electrical functions by mixing them with inorganic nanomaterial components. Quantum dots (QDs) are zero-dimensional inorganic materials with unique features, including high fluorescence efficiency, a narrow emission band, and tunable emission, owing to their size dependence; thus, they are commonly employed in solar cells, emitting diodes, biomedical imaging, and fluorescent anti-counterfeiting technology [17][18][19][20][21][22][23][24][25]. Recently, electrospun fluorescent fibers containing organic components and QD composites, such as CdSe/ZnS core-shell QDs and CdSe QDs, have been realized for use in polymeric lasers and optical sensors [26,27]. Furthermore, an intense circularly polarized luminescent material was prepared by forming a novel luminescent chiral nanotube using a chiral lipid gelator as a chiral template for the QDs [28]. However, the formation of azopyridine supramolecular fluorescent or directional fibers that incorporate inorganic nanomaterials through combined halogen bonds and van der Waals force has not yet been reported. Considering this, we used our previously reported azopyridine halogen-bonded liquid crystal (LC) complexes (nX, X = Br, or I) to assemble fibers that exhibit an ordered orientation and luminescence (Scheme 1); details of the synthesis are presented in the ESI. We established that 15Br could promote the targeted directional arrangement of disordered supramolecular fibers with the aid of a capillary when tetrahydrofuran (THF) is used as a solvent. Mixing oleic acid-modified CdSe/ZnS core-shell QDs with the LC complexes resulted in the formation of large luminous fiber crystals of 15X@QDs (X = Br or I), in which the QDs were aligned along the edge of the self-assembled fibers due to van der Waals force between the long alkyl chain of oleic acid on the QD surface and 15Br or 15I. This is a novel approach for the fabrication of supramolecular structures with new properties and functions that can be applied in drug detection, biosensors, and electroluminescent devices. Scheme 1. Molecular structure of the used 15X (X = Br or I) and QDs, and the self-assembly process of the luminous fibers and directional fibers in the capillary in this work. Scheme 1. Molecular structure of the used 15X (X = Br or I) and QDs, and the self-assembly process of the luminous fibers and directional fibers in the capillary in this work. Results and Discussion The halogen-bonded azopyridine fibers spontaneously formed either in the THF or upon the evaporation of one drop of the THF solution on the surface of the glass substrates in random order. The optical and scanning electron microscopy (SEM) images of the fibers (Figure 1a,b, respectively) reveal that the width of the supramolecular fibers of 15Br is on the micron scale. The effect of the alkyl chain length of the halogen-bonded complexes on the morphology of the fibers was investigated. Self-assembled fibers were formed in the THF using bromine-bonded complexes with alkyl chain lengths of 7 to 15 carbons Figure S2, ESI). The fibers of 15Br had a large aspect ratio, while the corresponding azopyridine derivatives did not form fibrous structures, which suggests that halogen bonds play a key role in facilitating the formation of supramolecular fibers. Supramolecular luminescent fibers are particularly important because of their potential applications in interdisciplinary research, such as light-emitting electrochemical cells [34] and diagnostic devices [35,36]. The general method for the production of luminescent fibers involves electrospinning QDs in a polymer solution; however, this process has low production efficiency because the polymer solution has a slow rate of reaction, and the procedure is labor intensive [1]. In this study, novel luminescent halogen-bonded fibers were designed and easily obtained by mixing 15Br and oleic acid-modified CdSe/ZnS QDs in THF. As shown in Figure 1d,e the majority of the self-assembled fibers with bright fluorescence can be observed at the edges of the structures after the addition of QDs. The QDs supply fascinating luminescence properties without disturbing the fiber structures. Surface tension can cause the fluid to flow rapidly over the surface and remain almost stagnant in the internal area, while the QDs are carried to the edges by the Marangoni flow [37,38]. Furthermore, by increasing the volume-to-volume ratio of the QDs from 3:1 to 1:1, the aspect ratio of the self-assembled organic/inorganic hybrid composite In addition, taking 15Br as a typical example, an increase in the mass concentration of 15Br in the THF from 0.1 to 1.0 wt% resulted in an increase in the number of visible self-assembled fibers ( Figure S3, ESI). Subsequently, the morphologies of the fibers obtained from 15Br and its pristine counterpart, A15AzPy, were studied using six organic solvents with different polarity indices. The nature of the solvent influenced the morphology of the self-assembled fibers, indicating that the morphologies of the fabricated halogen-bonded fibers depended on the nature of the chosen organic solvent ( Figures S4 and S5, ESI). LCs are mesophasic, which means they exist between the melting and clearing points. Below its melting point, the material forms a normally ordered solid, and above the clearing point, it forms an isotropic liquid. The thermal properties of the self-assembled fibers fabricated by recrystallization of LC molecular 15Br were studied using differential scanning calorimetry ( Figure S6a). The crystal-to-mesophase transition and the mesophase- to-isotropic liquid transition temperatures increased significantly from 100.2 • C to 157.6 • C for the 15Br and 143.6 • C to 160.6 • C for the fibers, respectively. Moreover, the 15Br fibers have a narrower LC range than 15Br. In addition to obtaining optical microscopy images, the crystals of 15Br were purified in the THF and exhibited high melting and clearing points. These results were supported by the powder X-ray diffraction results ( Figure S6b). Three diffraction peaks appear in the low-angle region for the 15Br fibers, similar to those observed for 15Br, and the d-spacing (1.78, 0.89, and 0.46 nm), which has a ratio of 1:1/2:1/4 and is consistent with a lamellar structure [29]. The peak at 1.78 nm, observed in the first-order reflection of the 15Br fibers, is sharp compared to the peak obtained for the pristine material, suggesting that the 15Br fibers are more ordered than those of the pristine material [30]. Moreover, 15Br could form directional self-assembled fibers in capillaries upon evaporation of the THF, owing to the capillary force [31][32][33], as shown in Figure 1c. This is a key technology in the development of high-performance organic materials. Supramolecular luminescent fibers are particularly important because of their potential applications in interdisciplinary research, such as light-emitting electrochemical cells [34] and diagnostic devices [35,36]. The general method for the production of luminescent fibers involves electrospinning QDs in a polymer solution; however, this process has low production efficiency because the polymer solution has a slow rate of reaction, and the procedure is labor intensive [1]. In this study, novel luminescent halogen-bonded fibers were designed and easily obtained by mixing 15Br and oleic acid-modified CdSe/ZnS QDs in THF. As shown in Figure 1d,e the majority of the self-assembled fibers with bright fluorescence can be observed at the edges of the structures after the addition of QDs. The QDs supply fascinating luminescence properties without disturbing the fiber structures. Surface tension can cause the fluid to flow rapidly over the surface and remain almost stagnant in the internal area, while the QDs are carried to the edges by the Marangoni flow [37,38]. Furthermore, by increasing the volume-to-volume ratio of the QDs from 3:1 to 1:1, the aspect ratio of the self-assembled organic/inorganic hybrid composite (15Br@QDs) fibers can be increased, as shown by laser scanning confocal microscopy ( Figure S7a-d). To eliminate the effects of the solvent used for the QDs on the morphology of the fibers, n-hexane and the QDs in n-hexane were added separately to the THF solution of 15Br, using the same ratio as used previously, and analyzed by SEM and energy-dispersive X-ray spectroscopy (EDS). The mixture of 15Br in THF and the oleic acid-modified CdSe/ZnS QDs in n-hexane spontaneously self-assembled into fibers with a high aspect ratio ( Figure S8a), while inhomogeneous structures self-assembled upon the addition of n-hexane to 15Br in the absence of QDs ( Figure S8b). The EDS results indicate that QDs were present on the fibers of the 15Br@QDs ( Figure S8c,d), which contributed to the formation of largeaspect-ratio self-assembled luminous fibers. In addition, A15AzPy@QDs could also form self-assembled fibers due to the van der Waals interactions between alkyl chains ( Figure S9). The fluorescence properties of 15Br@QDs in solution are shown in Figure 2a. The emission peaks of CdSe/ZnS QDs exhibited a similar pattern to those of the resultant fibers, while being slightly red-shifted, indicating the conservation of the fundamental fluorescence properties of the QD-assisted luminescent fibers. Raman spectroscopy is a powerful tool for investigating halogen bonds [39,40]. As shown in Figure 2b, the Br-Br stretching peaks of the 15Br fibers shifted to a lower wavenumber relative to those of 15Br (219.7 cm −1 to 207.2 cm −1 ), which indicates not only that the 15Br fibers are halogen bonded, but also that the recrystallization of the 15Br fibers from 15Br in THF weakens the halogen bond and leads to a decrease in the Br-Br vibration frequency of the 15Br fibers. Furthermore, the Br-Br stretching vibration peaks of 15Br@QDs observed at 216.8 cm −1 , suggest that the weak van der Waals force between alkyl chains has a subtle influence on the halogen bond. This suggests that the halogen bond plays a major role in the self-assembly of the azopyridine fibers, and the QDs further aid the self-assembly process via van der Waals force, leading to the formation of luminous fibers. the formation of large-aspect-ratio self-assembled luminous fibers. In addition, A15AzPy@QDs could also form self-assembled fibers due to the van der Waals interactions between alkyl chains ( Figure S9). The fluorescence properties of 15Br@QDs in solution are shown in Figure 2a. The emission peaks of CdSe/ZnS QDs exhibited a similar pattern to those of the resultant fibers, while being slightly red-shifted, indicating the conservation of the fundamental fluorescence properties of the QD-assisted luminescent fibers. Raman spectroscopy is a powerful tool for investigating halogen bonds [39,40]. As shown in Figure 2b, the Br-Br stretching peaks of the 15Br fibers shifted to a lower wavenumber relative to those of 15Br (219.7 cm −1 to 207.2 cm −1 ), which indicates not only that the 15Br fibers are halogen bonded, but also that the recrystallization of the 15Br fibers from 15Br in THF weakens the halogen bond and leads to a decrease in the Br-Br vibration frequency of the 15Br fibers. Furthermore, the Br-Br stretching vibration peaks of 15Br@QDs observed at 216.8 cm −1 , suggest that the weak van der Waals force between alkyl chains has a subtle influence on the halogen bond. This suggests that the halogen bond plays a major role in the self-assembly of the azopyridine fibers, and the QDs further aid the self-assembly process via van der Waals force, leading to the formation of luminous fibers. Figure 2c shows the UV-vis absorption spectra of 15Br in the THF, CdSe/ZnS QDs, 15Br@QDs solution, and 15Br@QDs film. The maximum absorption of 15Br and 15Br@QDs in THF occurs at wavelengths of 359 and 354 nm, respectively, which is attributable to the azobenzene π-π * bands. The spectrum of 15Br@QDs reveals a small blue shift relative to the 15Br spectrum, which is caused by the substituent effect of the carboxylate groups of oleic acid on the surface of the QDs and bromine atom. In contrast, the peak at 363 nm in the spectrum of 15Br@QDs in solution was red-shifted in the spectrum of the 15Br@QDs film. This red shift may be attributable to the J-aggregation of the chromophores, which are partly arranged in close proximity to each other and with coplanar transition dipoles [41][42][43][44][45][46][47]. Figure 2c shows the UV-vis absorption spectra of 15Br in the THF, CdSe/ZnS QDs, 15Br@QDs solution, and 15Br@QDs film. The maximum absorption of 15Br and 15Br@QDs in THF occurs at wavelengths of 359 and 354 nm, respectively, which is attributable to the azobenzene π-π * bands. The spectrum of 15Br@QDs reveals a small blue shift relative to the 15Br spectrum, which is caused by the substituent effect of the carboxylate groups of oleic acid on the surface of the QDs and bromine atom. In contrast, the peak at 363 nm in the spectrum of 15Br@QDs in solution was red-shifted in the spectrum of the 15Br@QDs film. This red shift may be attributable to the J-aggregation of the chromophores, which are partly arranged in close proximity to each other and with coplanar transition dipoles [41][42][43][44][45][46][47]. The distribution of CdSe/ZnS QDs in the halogen-bonded fibers was observed using transmission electron microscopy ( Figure 3). Interestingly, the 15Br fibers can act as a template for the arrangement of QDs during self-assembly. Thus, the CdSe/ZnS QDs were uniformly distributed along the edge of the halogen-bonded fibers owing to the van der Waals force between 15Br and the QDs, resulting in the preservation of the original fluorescence properties of the QDs, as confirmed by the absorption and fluorescence spectral data. In addition, the EDS maps of the fibers indicate the presence and even distribution of Br, S, Cd, Se, and Zn in the QDs on the fibers (Figure 3c-g). The introduction of long-alkylchain-functionalized QDs promotes interactions between the QDs and halogen-bonded complexes, resulting in luminous fibers. The distribution of CdSe/ZnS QDs in the halogen-bonded fibers was observed using transmission electron microscopy ( Figure 3). Interestingly, the 15Br fibers can act as a template for the arrangement of QDs during self-assembly. Thus, the CdSe/ZnS QDs were uniformly distributed along the edge of the halogen-bonded fibers owing to the van der Waals force between 15Br and the QDs, resulting in the preservation of the original fluorescence properties of the QDs, as confirmed by the absorption and fluorescence spectral data. In addition, the EDS maps of the fibers indicate the presence and even distribution of Br, S, Cd, Se, and Zn in the QDs on the fibers (Figure 3c-g). The introduction of long-alkyl-chain-functionalized QDs promotes interactions between the QDs and halogen-bonded complexes, resulting in luminous fibers. Furthermore, the LC properties of the 15Br@QDs were determined using polarizing optical microscopy; 15Br@QDs were shown to have a focal conical fan texture similar to that of 15Br ( Figure S10). Moreover, the aligned 15Br@QDs mesogens show strong anisotropy in their polarized UV-vis absorption spectra, and the orientation factor (0.052) compares favorably with that of pristine 15Br (0.004) ( Figure S11). To evaluate the bonding capability of the AzPy derivatives and QDs as a function of the halogen atom, molecular iodine was used as a Lewis base instead of molecular bromine for the preparation of iodine-bonded 15I@QDs complexes. The aspect ratio of the self-assembled luminous 15I@QD fibers is expected to be large compared to that of the 15Br@QDs owing to the presence of the QDs (Figure 4a,b). This could be because the halogen bonds decrease in strength in the following order: I > Br > Cl [48][49][50][51], and the Furthermore, the LC properties of the 15Br@QDs were determined using polarizing optical microscopy; 15Br@QDs were shown to have a focal conical fan texture similar to that of 15Br ( Figure S10). Moreover, the aligned 15Br@QDs mesogens show strong anisotropy in their polarized UV-vis absorption spectra, and the orientation factor (0.052) compares favorably with that of pristine 15Br (0.004) ( Figure S11). To evaluate the bonding capability of the AzPy derivatives and QDs as a function of the halogen atom, molecular iodine was used as a Lewis base instead of molecular bromine for the preparation of iodine-bonded 15I@QDs complexes. The aspect ratio of the self-assembled luminous 15I@QD fibers is expected to be large compared to that of the 15Br@QDs owing to the presence of the QDs (Figure 4a,b). This could be because the halogen bonds decrease in strength in the following order: I > Br > Cl [48][49][50][51], and the relatively weak bromine bonds led to a reduction in the aspect ratios. The standard deviations of the aspect ratios of the 15Br, 15Br@QDs, and 15I@QDs fibers are calculated in the ESI. Considering the photoresponsivity of our prototype molecule 15I, the photoactivity of the 15I@QDs was monitored before and after UV irradiation at 360 nm in the THF solution. Exposure of the 15I@QD solution to UV light resulted in a gradual decrease in the absorption peak at 358 nm, which can be attributed to a π-π * transition, and a gradual increase in the peak at 442 nm, which can be attributed to the n-π * transition (Figure 4c). This is the result of photoisomerization of AzPy molecules from their trans to cis isomers. The intensity of the absorption peak at 358 nm gradually increased, and that at 442 nm decreased when the irradiated sample was kept in the dark, indicating that the 15I@QDs underwent trans to cis to trans isomerization ( Figure 4d). However, this photochemical phase transition was not observed for 15Br@QDs, which produced a spectrum analogous to that of the brominated 15Br compound when irradiated with UV light. Density functional theory (DFT), specifically the Gaussian 16 program, was used to investigate possible 15X@QDs (X = Br or I) interactions [52]. Geometry optimization calculations were performed using the B3LYP DFT-D3 method [53][54][55], and the 6−311G + (2d,2p) basis set was used to locate all stationary points involved. The vibrational frequencies were computed at the same level of theory to check whether each optimized structure is an energy minimum or transition state and to evaluate its zero-point energy. The binding energy (E b ) is defined as E b = E A + B -(E A + E B ), where E A + B is the total energy of A and B combined, and E A + E B is the sum of the total energies of A and B before the combination. A and B refer to 15X and alkyl chains of oleic acid on the QD surface, respectively. Density functional theory (DFT), specifically the Gaussian 16 program, was used to investigate possible 15X@QDs (X = Br or I) interactions [52]. Geometry optimization calculations were performed using the B3LYP DFT-D3 method [53][54][55], and the 6−311G + (2d,2p) basis set was used to locate all stationary points involved. The vibrational frequencies were computed at the same level of theory to check whether each optimized The E b energies of 15Br@QDs with 4, 8, 15 overlap of alkyl chains via van der Waals force were calculated to be −0.15125 eV, −0.30105 eV and −0.52469 eV, respectively, and the E b of 15I@QDs were calculated to be −0.15254 eV, −0.30102 eV and −0.52463 eV, respectively ( Figure 5). The energy analysis shows that the absolute value of binding energy of 15X@QDs is getting bigger and bigger as the overlap of alkyl chains increases between QDs and 15X molecules via van der Waals force, which led to the further stabilization of the entire system of 15X@QDs. Materials and Methods Materials: 4-aminopyridine, K2CO3, acetone and other chemical reagents were obtained from Sigma-Aldrich, Saint Louis, MO, USA. CdSe/ZnS QDs were purchased from Wuhan Jiayuan Quantum Dot Co., Ltd. Wuhan, China. The maximum emission wavelength is 625 nm ± 5 nm, and the size is 5-10 nm. The specification is 30 mg powder QDs dispersed in 10 mL n-hexane solvent. Characterizations: 1 H NMR spectra were executed on a Bruker Avance III 400. Differential scanning calorimetry (DSC) examination was performed on a Perkin-Elmer DSC 8000 with a heating and cooling rate of 10 °C/min. The morphologies of fibers were observed using scanning electron microscopy (SEM, Zeiss EVO18, Oberkochen, BW, Germany), polarizing optical microscope (POM, LEICA DM2700 M, Wetzlar, Hesse, Germany) and transmission electron microscopy (TEM, Tecnai G2 F20, Hillsboro, OR, USA). The powder X-ray diffraction analysis (XRD) was implemented on a Philips X pert pro. The FT-IR analysis and the UV-vis analysis were measured using Nicolet 510P IR spectra and a UV/VIS/NIR spectrometer (Perkin-Elmer lambda 950, Waltham, MA, USA). Laser scanning confocal microscopy was recorded on the Zeiss LSM800. Simulation method: All calculations were performed using the Gaussian 16 program. Geometry optimization calculations were performed using the B3LYP DFT-D3 method. Conclusions In conclusion, we report that the self-assembled fibers can be developed from halogen-bonded complexes by varying the alkyl-chain lengths, concentrations, and solvents. We found that 15Br, with the aid of a capillary, can form directional self-assembly fibers in THF. Interestingly, the self-assembled luminous fibers were obtained by the synergy interactions of halogen bonds and van der Waals force between the oleic acid groups of the QDs surface and the alkyl chain of the halogen-bonded complexes, which is critical to potential applications, such as drug detection, biosensors, electroluminescent devices, and other novel optical devices. Materials and Methods Materials: 4-aminopyridine, K 2 CO 3 , acetone and other chemical reagents were obtained from Sigma-Aldrich, Saint Louis, MO, USA. CdSe/ZnS QDs were purchased from Wuhan Jiayuan Quantum Dot Co., Ltd. Wuhan, China. The maximum emission wavelength is 625 nm ± 5 nm, and the size is 5-10 nm. The specification is 30 mg powder QDs dispersed in 10 mL n-hexane solvent. Characterizations: 1 H NMR spectra were executed on a Bruker Avance III 400. Differential scanning calorimetry (DSC) examination was performed on a Perkin-Elmer DSC 8000 with a heating and cooling rate of 10 • C/min. The morphologies of fibers were observed using scanning electron microscopy (SEM, Zeiss EVO18, Oberkochen, BW, Germany), polarizing optical microscope (POM, LEICA DM2700 M, Wetzlar, Hesse, Germany) and transmission electron microscopy (TEM, Tecnai G2 F20, Hillsboro, OR, USA). The powder Xray diffraction analysis (XRD) was implemented on a Philips X pert pro. The FT-IR analysis and the UV-vis analysis were measured using Nicolet 510P IR spectra and a UV/VIS/NIR spectrometer (Perkin-Elmer lambda 950, Waltham, MA, USA). Laser scanning confocal microscopy was recorded on the Zeiss LSM800. Simulation method: All calculations were performed using the Gaussian 16 program. Geometry optimization calculations were performed using the B3LYP DFT-D3 method. Conclusions In conclusion, we report that the self-assembled fibers can be developed from halogenbonded complexes by varying the alkyl-chain lengths, concentrations, and solvents. We found that 15Br, with the aid of a capillary, can form directional self-assembly fibers in THF. Interestingly, the self-assembled luminous fibers were obtained by the synergy interactions of halogen bonds and van der Waals force between the oleic acid groups of the QDs surface and the alkyl chain of the halogen-bonded complexes, which is critical to potential applications, such as drug detection, biosensors, electroluminescent devices, and other novel optical devices. Supplementary Materials: The following are available online at https://www.mdpi.com/article/ 10.3390/molecules27238165/s1, Scheme S1. Synthesis of the 15X@QDs; Figure S1. 1 H NMR spectrum of the A15AzPy (compound a2); Figure S2. Optical images of bromine-bonded fibres with different alkyl chains formed in THF; Figure S3. The picture of different mass concentration of 15Br from 0.1 to 1.0 wt% in THF; Figure S4. Optical and SEM images of the self-assembled fibres formed from 15Br (0.4 wt%) in different organic solvents at room temperature. Figure S9. Optical images of the self-assembled fibres formed from A15AzPy@QDs (volume-to-volume ratio = 3:1) in THF at different temperature. Figure S10. POM pictures of 15Br (a) and 15Br@QDs (b); Figure S11. Polarized UV−Vis spectra of 15Br@QD (a) and 15Br (b) films. The black and red curves are the absorption parallel and perpendicular to the orientation direction, respectively.
v3-fos-license
2018-09-25T14:23:32.597Z
2018-09-24T00:00:00.000
52810997
{ "extfieldsofstudy": [ "Medicine", "Chemistry" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41598-018-32532-w.pdf", "pdf_hash": "747ca415effafb75ea07a3acf754ff56424caf84", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46789", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "747ca415effafb75ea07a3acf754ff56424caf84", "year": 2018 }
pes2o/s2orc
Malic Enzyme 1 (ME1) is pro-oncogenic in ApcMin/+ mice Cytosolic Malic Enzyme (ME1) provides reduced NADP for anabolism and maintenance of redox status. To examine the role of ME1 in tumor genesis of the gastrointestinal tract, we crossed mice having augmented intestinal epithelial expression of ME1 (ME1-Tg mice) with ApcMin/+ mice to obtain male ApcMin/+/ME1-Tg mice. ME1 protein levels were significantly greater within gut epithelium and adenomas of male ApcMin/+/ME1-Tg than ApcMin/+ mice. Male ApcMin/+/ME1-Tg mice had larger and greater numbers of adenomas in the small intestine (jejunum and ileum) than male ApcMin/+ mice. Male ApcMin/+/ME1-Tg mice exhibited greater small intestine crypt depth and villus length in non-adenoma regions, correspondent with increased KLF9 protein abundance in crypts and lamina propria. Small intestines of male ApcMin/+/ME1-Tg mice also had enhanced levels of Sp5 mRNA, suggesting Wnt/β-catenin pathway activation. A small molecule inhibitor of ME1 suppressed growth of human CRC cells in vitro, but had little effect on normal rat intestinal epithelial cells. Targeting of ME1 may add to the armentarium of therapies for cancers of the gastrointestinal tract. and adenomas in ilea of male mice. Quantification was via Aperio Imagescope. Boxes demarcate the inter-the KRAS gene had enhanced expression of ME1 16 . Mutations of both TP53 and KRAS are common in CRC 17 . Indeed, siRNA-mediated knockdown of ME1 leads to growth inhibition and senescence of CRC cell lines in vitro 9,15 . Moreover, ME1 abundance in some cancer cells was reported to be a prognostic marker for efficacy of radiation therapy 18 . Nevertheless, most of what is known about the actions of ME1 in cancer cells is derived from in vitro studies and xenograft transplants to mice. To mechanistically define the contributions of ME1 to intestinal cancer genesis within a more physiological context, we generated an ME1 transgenic mouse (ME1-Tg) which over-expresses ME1 predominantly in intestinal epithelial cells under the control of the murine villin gene promoter-enhancer 19 . We reported that ME1-Tg mice had greater intestinal 5-bromodeoxyuridine labeling index and exhibited deeper intestinal and colonic crypts 19 . In contrast, a functionally null ME1 mouse (MOD-1 mouse line) displayed shallower colonic crypts and reduced intestinal expression of pro-proliferative Ccnd1 and Mtor genes compared to WT mice 20 . Intestinal expression of genes encoding proteins responsible for lipid and cholesterol biosynthesis were elevated in the ME1-Tg mice 19 , indicating a shift towards increased lipogenesis. While these studies suggested a stimulatory role for ME1 in proliferation of gut epithelium, ME1-Tg mice did not spontaneously develop intestinal adenomas at increased frequencies. The Apc Min/+ mouse is a well-utilized model of Familial Adenomatous Polyposis (FAP), an inherited form of colorectal/intestinal cancer 21 . Here, we have used this mouse model to test the hypothesis that ME1 overexpression would lead to increased tumor burden. We characterized the male progeny of the novel intercross of heterozygous male Apc Min/+ mice with female ME1-Tg mice, namely, Apc Min/+ mice with intestine-specific augmentation of ME1 (designated Apc Min/+ /ME1-Tg) for tumor parameters and for expression of candidate tumor-associated genes. Further, we utilized small molecule inhibitors of ME1 and the canonical Wnt signaling pathway, respectively to elucidate single and combinatorial effects on two human CRC cell lines. Results document a stimulatory role for ME1 in intestinal tumor genesis. Effects of enhanced intestinal epithelial ME1 expression. To generate Apc Min/+ mice with enhanced intestinal expression of ME1, heterozygous male Apc Min/+ mice were intercrossed with female ME1-Tg mice. Sixteen-week-old male mouse progeny were used to quantify Me1 RNA abundance and adenoma burden in the small and large intestines. We observed a significant (~2.0-fold; P = 0.010) increase in expression of the endogenous (mouse) Me1 gene in the jejunums of Apc Min/+ /ME1-Tg mice when compared to WT mice (Fig. 1A). Similarly, total Me1 mRNA levels (i.e., endogenous plus transgenic Me1 RNAs) in mouse jejunum were significantly greater for ME1-Tg (P < 0.001) and Apc Min/+ /ME1-Tg (P < 0.001) mice when compared to WT and Apc Min/+ mice, respectively (Fig. 1B). As expected, no transgene-derived RNA was observed in the non-transgenic Apc Min/+ mouse intestine (Fig. 1C). We then evaluated relative abundance of ME1 protein in ileum by immunohistochemistry (IHC). An increase in ME1 protein (IHC staining) was observed within normal-appearing villi of the transitional mucosa (P = 0.002) as well as in adenomas (P = 0.026) of Apc Min/+ /ME1-Tg when compared to Apc Min/+ mice; by contrast, the crypts of both mouse lines did not differ ( Fig. 1D-J). The intestine smooth muscle layers (outer longitudinal and inner circular) stained intensely for ME1 ( Fig. 1D-G); although, as expected, this staining was unaffected by Me1 transgene. Apc Min/+ /ME1-Tg mice exhibited greater amounts of ME1 protein in adenomas when compared to those of Apc Min/+ . Interestingly, the borders of adenomas exhibited significantly greater ME1 staining than the corresponding inner regions irrespective of genotype ( Supplementary Fig. 1). However, the adenoma borders of Apc Min/+ /ME1-Tg mice displayed significantly greater (P = 0.042) ME1 staining than those of Apc Min/+ mice ( Supplementary Fig. 1). Goblet cells are the most abundant secretory cell type in the villus epithelium, and their numbers serve as a readout of lineage determination in intestines. We performed Alcian blue histochemistry to evaluate goblet cell numbers as a function of ME1 status. There was no difference in number of goblet cells in villi of Apc Min/+ / ME1-Tg and Apc Min/+ mice ( Supplementary Fig. 2). Villi immediately adjacent to adenomas had significantly more goblet cells (P < 0.01) than those that were more distant from adenomas ( Supplementary Fig. 2). However, the number of goblet cells in adenoma-associated villi of Apc Min/+ /ME1-Tg vs. Apc Min/+ mice did not differ ( Supplementary Fig. 2). Increased intestinal adenoma burden in male Apc Min/+ /ME1-Tg mice. We next evaluated the effect of the Me1 transgene on intestinal adenoma burden. Male Apc Min/+ /ME1-Tg mice exhibited a significant increase (~1.5-fold; P = 0.009) in numbers of small intestine adenomas when compared to Apc Min/+ mice ( Fig. 2A); this increase occurred in the Ileum (P = 0.032) and jejunum (P = 0.020) (Fig. 2B). The number of colon adenomas did not differ (P = 0.076) between Apc Min/+ /ME1-Tg and Apc Min/+ mice (Fig. 2B). Apc Min/+ /ME1-Tg mice displayed significantly greater numbers of adenomas that were less than 1 mm in diameter in the duodenum (P = 0.011), jejunum (P = 0.014), and ileum (P = 0.040) when compared to male Apc Min/+ mice ( Fig. 2C-E). Interestingly, the number of adenomas between 3 mm and 4 mm in diameter was significantly greater (P = 0.034) in the colons of male Apc Min/+ /ME1-Tg mice compared to Apc Min/+ mice (Fig. 2F). quartile range of 25-75% with mean (thick line) and median (thin line); (n = 6/group). (J) Student's t-tests were used to examine for differences in IHC staining intensity of ME1 protein between groups, and the Mann-Whitney Rank Sum Test was used for comparing non-normally distributed data. Significant differences were identified by P < 0.05. A tendency for a difference also is indicated (0.1 > P > 0.05). Boxes indicate the inter-quartile range (25-75%) with mean (thick line) and median (thin line); whiskers: 10 th and 90 th percentiles; dots: outliers. Student's t-tests were used to examine for differences between groups and the Mann-Whitney Rank Sum Test was used for comparing non-normally distributed data. Significant differences were identified by P < 0.05. Tendencies for differences also are indicated (0.1 > P > 0.05). SCIenTIfIC RepoRtS | (2018) 8:14268 | DOI:10.1038/s41598-018-32532-w β-catenin IHC of adenomas. An increase in nuclear β-catenin content is a hallmark of intestinal tumorigenesis. We therefore evaluated, by IHC, the number of nuclear β-catenin-positive cells in ilea from male Apc Min/+ /ME1-Tg and Apc Min/+ mice. No significant differences in numbers of nuclear β-catenin-positive cells were observed for crypts, villi or adenomas as a function of genotype ( Supplementary Fig. 3). However, all (H) Quantification of ileal villus length in male mice (n = 5/group). (I) Ratio of villus length to crypt depth in the ilea of male mice (n = 5/group). Boxes indicate the inter-quartile range of 25-75% with mean (thick line) and median (thin line); whiskers extend to the 10 th and 90 th percentiles. Student's t-tests were used to examine for differences between genotypes (significant difference, P < 0.05). adenomas stained very strongly for β-catenin (nuclear and cytoplasmic/membranes) (Fig. 3A,B), which facilitated the measurements of adenoma areas (in cross-section) by use of Aperio software. Both adenoma number and area were significantly greater (P = 0.032 and P = 0.004, respectively) for ilea of male Apc Min/+ /ME1-Tg mice compared to male Apc Min/+ mice (Fig. 3C,D). In addition, ileal crypts and villi of male Apc Min/+ /ME1-Tg mice were significantly deeper (P = 0.039) and longer (P = 0.003), respectively when compared to those of male Apc Min/+ mice; however, villus to crypt ratio was comparable between genotypes ( Fig. 3E-I). We examined whether the increases in crypt depth and villus length were due to increased numbers of cells within the corresponding epithelium. The number of cells lining the crypts did not significantly differ as a function of genotype; however, we observed a significant increase (~1.2-fold; P = 0.044) in the number of cells comprising the villus epithelium in the Apc Min/+ /ME1-Tg relative to Apc Min/+ mice ( Supplementary Fig. 3). Gene expression, proliferation and apoptosis in male Apc Min/+ /ME1-Tg mouse intestines. We performed targeted gene expression analysis to identify a pathway-oriented basis for the increased intestinal adenoma burden of Apc Min/+ /ME1-Tg mice. Among the oncogenes and tumor suppressor genes that were examined, only Sp5 was significantly (P = 0.013) altered (i.e., upregulated by more than two-fold) in the Apc Min/+ / ME1-Tg jejunum (Fig. 4A). Sp5 is a known induced target gene of the Wnt pathway [22][23][24] . Expression analysis of apoptosis-associated genes showed a two-fold up-regulation (P = 0.05) of anti-apoptotic Bcl2 in Apc Min/+ / ME1-Tg mice (Fig. 4B). No differences in expression of several epithelial to mesenchymal (EMT)-associated genes were noted between the two groups ( Supplementary Fig. 4). The jejunal expression of other Sp/Klf family member genes that were previously implicated in intestinal growth and homeostasis, did not differ between the mouse lines ( Supplementary Fig. 4). Moreover, the numbers of BrdU-positive and nuclear Ki67-positive cells were comparable between genotypes for the crypt and villus epithelium and adenomas ( Increased KLF9 protein abundance in crypts and villus lamina propria of male Apc Min/+ /ME1-Tg mice. We previously reported that null mutation of the transcription factor Krϋppel-like factor 9 (Klf9) negatively affected small intestine crypt stem-progenitor cell proliferation and villus cell migration in mice 25 . As a consequence, the villi of Klf9 knockout mice were shorter than their wild-type counterparts 25 . In view of the positive effect of ME1 transgene on villus length, we evaluated, by IHC, the presence of nuclear-localized KLF9 in the ilea of Apc Min/+ /ME1-Tg and Apc Min/+ mice. Nuclear KLF9 protein levels were significantly greater in the crypts (P = 0.0008) and the villus lamina propria (P = 0.000006) of Apc Min/+ /ME1-Tg compared to Apc Min/+ mice ( Fig. 5A-E). Nuclear KLF9 immunoreactivity was minimal in the villus epithelium, but was robust in the muscularis externa of both genotypes (Fig. 5A,B). Moreover, KLF9 immunostaining in adenomas was negligible in both mouse lines (Fig. 5C-E). Inhibition of ME1 activity suppresses growth of human CRC cells in vitro. We next evaluated the involvement of ME1 in the Wnt/β-catenin pathway using intestinal cell lines. In initial studies, we inhibited ME1 enzyme activity in the non-cancerous IEC6 intestinal epithelial cell line with a small molecule inhibitor of ME1 26 (designated ME1*). A small but significant decrease in colony formation was noted for these cells but only at the highest dose evaluated ( Supplementary Fig. 5). Since our earlier results for Sp5 (Fig. 4A) indicated a possible functional connection between ME1 and the canonical Wnt/β-catenin signaling pathway, we next treated HCT116 and HT29 CRC cells, singly and in combination, with the small molecule ME1 inhibitor and an inhibitor of the canonical Wnt pathway (JW74) 27 . Non-confluent cells were treated with 50 uM ME1*, 15 uM JW74, 50 uM ME1* plus 15 uM JW74, or vehicle (DMSO; control) for 72 h. Results showed a significant reduction in total cell numbers with ME1* or ME1* + JW74 for both HCT116 and HT29 cell lines (Fig. 6A,B). In HT29 cells, the combination of ME1* and JW74 had an additive inhibitory effect on cell counts compared to vehicle (P < 0.001) (Fig. 6B). JW74 alone did not alter cell numbers (Fig. 6A,B). Diameters of HCT116 and HT29 cells treated with ME1* or ME1* + JW74 were reduced when compared to vehicle; JW74 alone did not affect HCT116 cell diameter but showed a tendency for this in HT29 cells (P = 0.067) ( Supplementary Fig. 5). We also examined HCT116 and HT29 cell viability/metabolism by use of the MTS reagent. A significant reduction in cell viability/metabolic activity was observed with ME1* treatment, which was further diminished with co-addition of JW74 (Fig. 6C,D). Unexpectedly, JW74 alone had a small but significant (P < 0.001) stimulatory effect on HCT116 cells. We performed colony formation assays to confirm the above effects. ME1* and ME1* + JW74 dramatically reduced (P < 0.001) the number of colonies formed by both cell lines, while JW74 alone had no effect (Fig. 6E-H). ME1* dose-dependently reduced the number of colonies formed by both HCT116 (overall P < 0.001) and HT29 (overall P < 0.001) cells ( Supplementary Fig. 5), albeit HCT116 cells were more sensitive than HT29 cells to ME1*. Since the number of colonies formed after treatment with ME1* or ME1* + JW74 were so few, we inferred that the ME1 inhibitor was causing cell death. To investigate this further, HCT116 cells were seeded at high density and treated with ME1*, JW74, ME1* + JW74, or vehicle (DMSO). ME1* caused significant loss of cells, while the combination of ME1* and JW74 showed an additive effect on cell numbers (Fig. 6I,K). IEC6 cells when treated similarly showed no effect with ME1* while JW74 alone and combination of JW74 and ME1* had comparable inhibitory effects (Fig. 6J,L). the inter-quartile range of 25-75% with mean (thick line) and median (thin line); whiskers extend to 10 th and 90 th percentiles. Student's t-tests were used to examine for differences between groups and the Mann-Whitney Rank Sum Test was used for non-normally distributed data. Significant differences were identified by P < 0.05. Tendencies for differences also are indicated (0.1 > P > 0.05). SCIenTIfIC RepoRtS | (2018) 8:14268 | DOI:10.1038/s41598-018-32532-w ME1 expression in human colon adenocarcinomas. ME1 was immunolocalized to specific cells of human normal colon and colon tumor tissue using a commercial tissue array. Robust expression of ME1 was observed in the more differentiated epithelial regions of both human colorectal adenocarcinomas and normal colon ( Supplementary Fig. 6). Discussion This study reports a novel in vivo association between gastrointestinal ME1 expression and small intestine adenoma burden. We observed increases in: a) adenoma number and size distribution, b) ME1 abundance within adenomas, and c) crypt depth and villus height, in the non-adenoma (transitional) mucosa with Me1 overexpression in the background of Apc haplo-insufficiency. We also found a significant increase in Sp5 transcript levels and enhanced numbers of nuclear KLF9 positive-cells in the crypt epithelium and lamina propria of Apc Min/+ / ME1-Tg mice. These findings link an important cytosol-residing metabolic enzyme namely ME1 with alterations in expression (and by inference, downstream pathway effects) of two members of the SP/KLF family of transciptional regulators, which are themselves increasingly considered as context-dependent participants in oncogenesis and canonical Wnt pathway signaling. Moreover, our in vitro experiments utilizing two human CRC cell lines confirmed that ME1 is important for cancer cell growth (hyperplasia and hypertrophy). Importantly, we found that growth of cancer cells was highly sensitive to inhibition of ME1 enzyme activity; by contrast, growth of non-tumorigenic intestinal epithelial cells (IEC6) was relatively resistant to ME1 inhibition albeit sensitive to Wnt pathway inhibition. Finally, we showed that ME1 is abundantly expressed within epithelial-like regions of human colon adenocarcinoma akin to what was observed in the adenoma borders in Apc Min/+ /ME1-Tg mice. Collective results implicate ME1 as a functional contributor to intestinal cancer development and suggest that targeting ME1 should be explored as a potential therapeutic strategy to improve patient outcome. Analogous to our previous findings for ME1-Tg mice 19 , the Apc Min/+ /ME1-Tg mice exhibited greater ME1 RNA and protein levels in the small intestine, when compared to their littermate controls. The greater numbers of small adenomas and the larger adenoma sizes, respectively within the small intestine of Apc Min/+ /ME1-Tg mice, coincident with increased ME1 expression, are suggestive of ME1 promotion of the initial step(s) of adenoma formation and of ME1 participation in tumor progression. Moreover, the significant elevation in ME1 abundance within the adenoma borders is consistent with previously reported increases in lipid content in the epithelial-like adenoma borders of Apc Min/+ mice 28 . The lack of differences in the expression of proliferation-associated genes, which were corroborated by results of BrdU and Ki67 staining, between the two mouse lines with distinct intestinal ME1 expression, suggest that the increased adenoma burden in vivo may not result from increased cell proliferation but rather to decreased apoptotic status. Consistent with this, we found an increase in anti-apoptotic Bcl2 gene expression in the jejunum and a decrease in TUNEL staining in the villi of Apc Min/+ /ME1-Tg mice. The lack of observed changes in the expression of several canonical EMT-associated genes may be related to EMT occuring at later stages of tumorigenesis than studied here or to the lack of functional relationship of ME1 and EMT. Further studies conducted at later stages of tumorigenesis should address this question. Both crypt depth and villus length were enhanced in the transitional mucosa of Apc Min/+ /ME1-Tg mice. We found no increase in the number of cells resident in the crypt epithelium of Apc Min/+ /ME1-Tg mice, although the average number of cells per crypt was numerically greater than for Apc Min/+ mice; thus, a combination of increased cell number and cell size likely contributed to the increased crypt depth. Consistent with this, ME1 inhibition conferred smaller cell diameters in vitro. Moreover, in a previous study, we reported that mice functionally null for Me1 had shallower colon crypts when fed a high fat diet 20 . By contrast, ME1-Tg mice (on the WT Apc background) fed a high-fat diet exhibited increased jejunum crypt depth and more crypt stem-progenitor proliferation then wild-type littermate controls 19 . The ME1-Tg mice on the WT Apc background and fed a high fat diet, also had altered liver metabolism, reflecting gut-liver communication 19 ; similar effects may have contributed to the current findings. We conclude that ME1 may play a role, either directly or indirectly, in the maintenance of intestinal crypt stem-progenitor cell number and/or size. We noted a virtual absence of goblet cells within the central regions of adenomas, whereas their borders contained a high frequency of these cells. While we found no quantitative effects of the Me1 transgene on this pattern, there was a significant increase in the number of goblet cells in the adenoma-associated villi when compared to normal villi. Perhaps the greater number of goblet cells within the transitional mucosa reflects a tumor-protective function through promotion of mucus production and of paracrine signaling elicited by tumor-promoting mucins 29 . Our studies implicate two members of the Sp/Klf-family of transcription factors as potential mediators of ME1-induced GI tumorigenesis. SP5, a downstream target of the Wnt/β-catenin signaling pathway in colon and several other tissues, is reported to be up-regulated in CRC and to promote tumor cell growth [22][23][24] . Given that Sp5 transcript levels were elevated in the jejunums of Apc Min/+ /ME1-Tg mice yet we observed no differences in nuclear β-catenin localization in the two mouse lines, we infer that the Sp5 induction in response to ME1 occured via an alternate pathway or via interconnectivity with the Wnt/β-catenin signaling pathway but downstream of nuclear β-catenin. Alternatively or in addition, the increase in Sp5 mRNA abundance may reflect increased numbers of adenoma cells expressing Sp5 mRNA at high levels. However, since the effect of ME1 transgene on Sp5 is relatively specific and is not accompanied by differential expression of genes (e.g., c-Myc, cyclin D1) known to be overexpressed in intestinal adenomas, the latter scenario is not likely. Future studies should address this non-canonical link between SP5 and Wnt-β-catenin signaling. KLF9 is another potential player in ME1-enhanced tumorigenesis. In a previous study, we reported that mice null for Klf9 had shorter intestinal villi than wild-type mice, due in part to reduced crypt cell proliferation and slower epithelial cell migration to the villus tip 25 . Thus, the increased villus length observed in the present study may be a result, in part, of the enhanced number of KLF9-positive cells in the crypts of Apc Min/+ /ME1-Tg mice. Interestingly, numbers of nuclear KLF9-positive cells were also elevated within the villus lamina propria of Apc Min/+ /ME1-Tg mice. At present, the identities of KLF9-positive cells in the villus lamina propria remain unknown, although this tissue compartment harbors lymphoid cells, macrophages and myofibroblasts, among other cell types. KLF9 is reported to be oncogenic in some contexts and to be tumor suppressive in others [30][31][32][33] . In a previous study, we found that Klf9 KO caused a significant reduction in adenoma number in the colon but had no effect on adenoma number in small intestines of Apc Min/+ mice 33 . Taken together with the findings reported here, our results suggest the tissue and context-dependent functions of KLF9. We speculate that the enhanced frequency of KLF9-positive cells in the small intestine crypts is growth-promoting for normal appearing and transitional villi, whereas KLF9 exerts tumor-suppressive actions in adenomas of the colon but not small intestine. Remarkably, normal rat intestinal cells (IEC6) were more resistant than cancer cells to the ME1 inhibitor but were highly sensitive to Wnt pathway inhibition; the latter is in keeping with the well-known stimulatory role of with 50 uM ME1*, 15 uM JW74, 50 uM ME1* plus 15 uM JW74, or vehicle (DMSO). After 3 days, cells were stained with crystal violet. (K,L) Quantification of remaining cells from (I,J) expressed as % area of stained cells per well. Boxes show the inter-quartile range of 25-75% with mean (thick line) and median (thin line); whiskers: 10 th and 90 th percentiles. One way ANOVA was used to examine for differences between treatment groups. Different lowercase letters (a-d) designate groups that differ (P < 0.05); bars sharing the same letter are not significantly different. SCIenTIfIC RepoRtS | (2018) 8:14268 | DOI:10.1038/s41598-018-32532-w the Wnt/β-catenin signaling pathway in intestinal stem-progenitor cell proliferation. The suppressive effect of ME1 inhibitor on cancer cell size in vitro is consistent with our previous work with the global Me1 hypomorphic null mouse in which we observed significant reductions in colon Mtor expression 20 . The mTOR pathway is dominant in determining cell size for many tissues/cells 34 ; hence, reductions in its expression may partly explain our in vitro data. In conclusion, our results significantly extend previous findings from other laboratories implicating a role for ME1 in gastrointestinal cancers. The body of work implicating fatty acid synthesis in CRC initiation, progression and metastasis [12][13][14] , coupled with the documented role of ME1 in promoting lipogenesis in gut epithelium via the NADPH supply 9,19 , further identifies this pathway as a vulnerability to exploit for CRC treatment and therapy. Methods Animals. All procedures involving mice were approved by the Institutional Animal Care and Use Committee (IACUC) of the University of Arkansas for Medical Sciences in accordance with federal guidelines and regulations. Mice were housed under a 12 h light/12 h dark cycle and were fed a regular chow diet. Apc Min/+ mouse breeders (Strain Name: C57BL/6J-Apc Min /J; Apc Δ850 , stock number: 002020) were from the Jackson Laboratory (Bar Harbor, ME, USA). Mice with augmented intestinal epithelial expression of ME1 (rat ME1 cDNA under the control of the murine villin promoter-enhancer; ME1-Tg; C57BL/6 J background) were described previously 19 . To generate Apc Min/+ mice with gut-specific enhanced expression of ME1, heterozygous male Apc Min/+ mice were intercrossed with female ME1-Tg mice. Age-matched male mice were used to quantify tumor burden in the small and large intestines. At 16 weeks of age, mice (n = 17 Apc Min/+ male mice and n = 10 Apc Min/+ /ME1-Tg male mice) were euthanized for tissue collection. Their small intestines and colons were removed, flushed with phosphate-buffered saline (PBS) and opened longitudinally. The small intestine was divided into three equal parts, which were operationally designated as duodenum, jejunum, and ileum. The junction (~1 cm in length) between the jejunum and ileum was snap-frozen in liquid nitrogen and stored at −80 °C for later analysis. Jejunum-ileum junctions (referred to as jejunum in Results) were also obtained from age-matched non-transgenic wild-type and ME1-Tg male mice (both strains wild-type for Apc). After microscopic examination of tissues (below), each Ileum was coiled into a Swiss-roll and fixed in methanol Carnoy solution (60% methanol, 30% chloroform, 10% glacial acetic acid) for 24 h and then transferred to 70% ethanol, followed by embedding in paraffin. We chose ilea for embedding and subsequent IHC, since this region typically displays the most adenomas in Apc Min/+ mice 33 . To evaluate gastrointestinal proliferation, Apc Min/+ /ME1-Tg and Apc Min/+ mice were injected intraperitoneal with BrdU at a dose of 100 mg/kg of body weight (Sigma Aldrich, St. Louis, MO, USA) 2 h before euthanizing, as described previously 20 . Microscopic examination of adenomas. Intestine segments and colons were examined for number and size of adenomas, in blinded fashion, with a dissecting microscope (Discovery 8, Carl Zeiss MicroImaging GmbH, Jena, Germany). All adenomas were counted, measured and categorized by size Mouse gene Forward primer (5′-3′) Reverse primer (3′-5′) Histology and immunohistochemistry. Five-micron sections of ilea (Swiss rolls) were used for immunohistochemistry (IHC). Paraffin-embedded sections were dewaxed and rehydrated through a graded alcohol series as described previously 19,20 . Antigen unmasking was conducted by boiling the sections in Coplin jars in a microwave using Citra Plus (Biogenex, San Ramon, CA, USA) for a duration of 2 min high power and then for 10 min at low power setting. After cooling for 30 min at room temperature, sections were treated with 3% Slides were imaged with a Aperio CS2 image capture device (Leica Biosystems Nussloch GmbH, Germany), or Nikon Eclipse E400 Microscope (Nikon Instruments, Melville, NY) fitted with an Olympus Dp73 digital camera. Quantification of positive antibody staining was performed using Aperio ImageScope algorithms or manually by counting the number of individual nuclear-stained cells. All positive staining was represented as a percentage of total staining (positive + negative) within a given field; with the exception of Ki67 and KLF9 for which the Aperio nuclear stain algorithm was used. Five representative areas per slide/mouse, each with 3 to 8 representative crypts or villi, were used for quantification. Apoptosis assay. Sections were stained using the ApopTag Peroxidase In Situ Apoptosis Detection kit following the manufacturer's instructions (Millipore, Burlington, MA, USA). Staining was evaluated using the Aperio ImageScope. Measurement of crypts and villi. Ileal sections were scanned using the ScanScope CS2 slide scanner and Aperio ImageScope software. Crypt depths were measured from the base of the crypts to the base of the villus, while villus lengths were measured from the base of the villus to its tip. The number of cells along the sides of crypts and villi were manually counted from representative images. Three to five representative areas per slide/ mouse, each with 3 to 5 representative crypts or villi, were used for quantification. Alcian Blue staining. Alcian Blue staining was performed using a kit (Alcian Blue (pH 2.5) stain kit H-3501, Vector Laboratories, Burlingame, CA, USA). The counter-stain was nuclear fast red. Goblet cells were manually counted from representative slides. Analysis of cell numbers and cell sizes. Cells were plated in 6-well tissue culture plates at a density of 2 × 10 4 cells in 2 ml of medium (DMEM + 10% heat-inactivated FBS) per well. After 24 h, medium was removed, and ME1 inhibitor (ME1*), canonical Wnt signaling pathway inhibitor (JW74), the combination of ME1* and JW74, or (0.5%) DMSO was added to plated cells in 2 ml of medium (containing 2% heat-inactivated FBS) and incubation continued for an additional 72 h. All treatments contained 0.5% DMSO. Wells were gently washed with PBS and then incubated with 1 ml of trypsin (Gibco ™ Trypsin-EDTA (0.25%), with Phenol Red) and the entire 1 ml sample analyzed in a Vi-CELL ™ XR viability counter (Beckman Coulter, Brea, CA, USA). Cell diameter was simultaneously recorded using this same instrument. Colony-formation assays. HT29 and HCT116 cells were plated at a density of 1 × 10 3 cells in 1 ml/ well (however, 2 × 10 3 IEC6 cells were plated) in 24-well tissue culture plates (medium was DMEM + 10% heat-inactivated FBS) followed by incubation for 24 h. One ml containing the treatment (ME1*, JW74, the combination of ME1* and JW74, or 0.5% DMSO) were added in DMEM containing 10% heat-inactivated FBS (complete media), followed by incubation for six days (all treatment media contained 0.5% DMSO). Cells were washed with Dulbecco's phosphate-buffered saline and stained with 0.1% crystal violet in 10% formalin for 1 h. Plates were scanned using an Epson Perfection V600 Photo Scanner at 600 DPI, and colonies counted using OpenCFU 36 colony counting software. For OpenCFU, the threshold was set to 'regular' with a value of 3; minimum radius was set to 1, and maximum radius was set to auto. For IEC6 cells, colony areas were calculated using FIJI software 37 . For additional evaluation of the acute cytotoxic effects of inhibitors, cells were plated at high density (100,000 cells) per well in 12-well tissue culture plates (medium was DMEM containing 10% heat-inactivated FBS). After 24 h, treatments (ME1*, JW74, the combination of ME1* and JW74, or 0.5% DMSO) were added in 1 ml of DMEM containing 2% heat-inactivated FBS, and incubation was continued for 3 days (all treatment samples contained 0.5% DMSO). Cells were washed with Dulbecco's phosphate-buffered saline and stained for 1 h with 0.1% crystal violet in 10% formalin. The plates were air-dried and scanned using an Epson Perfection V600 Photo Scanner at 600 DPI. Area of remaining cell coverage in each well was quantified using FIJI software as a percentage of total area of well. Statistics. Power analysis indicated an ability to detect a difference of 10 adenomas per mouse (per experimental group) with n = 12 animals per genotype (SD = 8, P < 0.05, power = 0.8173, two sample t-test test). To detect a 50% difference in mRNA abundance as a function of ME1 transgene at the 0.05 level required a minimum of 6 animals per group (S.D. = 0.282; power = 0.8035; two sample t-test). Statistical analysis was performed using SigmaPlot V13.0 (Systat Software, San Jose, CA, USA). One-Way ANOVA and Student's t-tests were used to examine for differences between groups. The Normality Test (Shapiro-Wilk) was used to check if data were normally distributed before conducting Student's t-tests. The Mann-Whitney Rank Sum Test was used to compare data between two groups that were not normally distributed. Only two-tailed P values were used and are listed in the figures. Significant differences were identified by P < 0.05. Data Availability Additional data relating to the manuscript will be made available upon request.
v3-fos-license
2020-05-17T13:03:38.390Z
2020-05-15T00:00:00.000
218659893
{ "extfieldsofstudy": [ "Medicine", "Psychology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0232795&type=printable", "pdf_hash": "1a6e10d19550f4ceb7448c015b0dd1701c96bfc5", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46790", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "sha1": "ae5fc34e15e3095d8dbeea970af4037682311a13", "year": 2020 }
pes2o/s2orc
The influence of spouses and their driving roles in self-regulation: A qualitative exploration of driving reduction and cessation practices amongst married older adults. Introduction There is growing evidence to suggest the importance of self-regulatory practices amongst older adults to sustain mobility. However, the decision to self-regulate driving is a complex interplay between an individual’s preference and the influence of their social networks including spouse. To our best knowledge, the influence of an older adult’s spouse on their decisions during driving transition has not been explored. Materials and methods This qualitative descriptive study was conducted amongst married older adults aged 60 years and above. All interview responses were transcribed verbatim and examined using thematic approach and interpretative description method. Results A total of 11 married couples were interviewed. Three major themes emerged: [1] Our roles in driving; [2] Challenges to continue driving; and, [3] Our driving strategies to ensure continued driving. Older couples adopted driving strategies and regulated their driving patterns to ensure they continued to drive safely. Male partners often took the active driving role as the principal drivers, while the females adopted a more passive role, including being the passenger to accompany the principal drivers or becoming the co-driver to help in navigation. Other coping strategies include sharing the driving duties as well as using public transportation or mixed mode transportation. Discussion Our findings suggest spouse play a significant role in their partners’ decision to self-regulate driving. This underscores a need to recognise the importance of interdependency amongst couples and its impact on their driving decisions and outcomes. Introduction With the projected rise in ageing population worldwide, ensuring safe mobility amongst older adults is a growing concern [1]. Older adults often rely on driving to accomplish their transportation needs. However, as an individual begins to age, his/her skill and ability to drive safely begin to decline [2][3][4]. To cope with this, older adults often choose to self-regulate their driving patterns (e.g., reduction in terms of speed, distance, and frequency of travel and avoid driving in challenging situations); or in some cases cease driving completely, due to chronic medical conditions or a major life event (e.g., road crash) [5]. According to Ang and associates, selfregulation is a decision-making process that progressively changes over time to compensate for declining abilities or to reduce discomfort experienced in challenging situations by making appropriate adjustments in driving behaviour and alterations to driving pattern [2]. Whilst driving cessation is a major life changing event for many older adults, most of them often try to continue driving safely by engaging in self-regulatory practices [5]. Previous studies have largely focused on exploring an individual's identity with driving, associated factors, and consequences of driving cessation [6][7][8][9][10] which are likely to be important for their social partners. Whilst the decision to self-regulate is often complex and multifactorial, little is known about how and to what extent external factors (e.g., social influence and support) may influence the decision-making process. In most individuals, the process of transitioning from driving to driving cessation is not an individual experience, but often involves a wider social network, including their spouse, children, friends, and even professionals such as clinicians [11,12]. Curl and associates suggest that spouses or partners have a profound impact on the individuals' experiences and their decisions in driving transition, including chauffeuring duty, care giving task, and maintenance of their roles in social engagement [13]. According to Gormley and O'Neill, both married individuals were more likely to be active drivers [14]. Nevertheless, the impact of spousal influence on driving transition amongst older couples is largely understudied. Until recently, most of the available studies have explored spousal influence on life satisfaction and social engagement amongst older couples in developed countries with limited information from developing countries like Malaysia [13,15,16]. Given that reliance on spouse for transportation support is common amongst older couples [16] and safe travel remains a crucial goal, is there a role for spouse in sustaining safe mobility in later stage of life? Hence, the present study examined the influence of spouses on driving self-regulation. Specifically, this study aims to understand their driving roles and engagements in driving self-regulation and the subsequent impact of their involvement during the transition on driving decisions and outcomes. The findings of the current study may offer new insights into the how spouses influence of spouses on self-regulation and suggest new avenues for programs focused on the roles and contributions of spouse in driving transition and cessation planning in the future. Participants and definitions This qualitative study recruited married couples, who were current or former car drivers, aged 60 years and above, and literate in English. Participants were recruited through snowball sampling technique from senior citizen organisations, community centres, and medical centres located in two states in the west coast of Peninsular Malaysia, including Selangor (a more economically developed state) and Kedah (a less economically developed state) [17]. This project was approved by Monash University Human Research Ethics Committee (Project Number: 1517). The outcome of interest of this study was self-regulation of driving, specifically on reduction and cessation practices. Reduction involves partial avoidance, whereby individuals reduce the use of vehicle, including the distance, frequency of use, and avoid driving in challenging situations such as peak hours, bad weather, and demanding locations [2,10]. Cessation on the other hand, involves total avoidance, whereby individuals choose to cease driving completely and never drive again [2,10]. As we aimed to explore the role of spouse in driving self-regulation, we examined three different driving roles that were identified through thematic analyses performed for this study based on the participants' responses (See Theme 1 in the results), defined as follow: i. Principal driver: Individual who takes charge of driving duty most of the time. ii. Alternate driver: Individual who drives less frequent than the principal driver and takes over driving duty only when the principal driver is not available. iii. Former driver: Individual who had ceased driving completely. Data collection and instruments A one-to-one, semi-structured interview was conducted between January 2017 and April 2018 using a list of pre-determined open-ended questions guided by a moderator. To ensure consistency in data collection, the first author conducted all the interviews. The interview guide was developed based upon prior experience of our research team and literature focusing on barriers and facilitators to driving reduction and cessation practices by older married couples to maintain their mobility in later life (S1 File). Couples who consented (written consent) to participate were interviewed separately and received a gift voucher worth RM 50 (approximately USD 11.96) each. Interviews were conducted at a time and location (e.g. a closed room or a quiet place at home, organisation, or medical centre) of convenience for the participants. Each interviews averaged one hour, and responses were audio recorded throughout the sessions. Recruitment of participants was conducted until thematic saturation was achieved when no new themes identified in the subsequent interview. Prior to commencement of the interview sessions, additional demographic information, including medical history and travel patterns were collected using the Driving and Riding Questionnaire (a self-administered battery of existing scales with multiple choice and open-ended questions) [18]. All the interviews were conducted in English and no translation was involved. Data analyses All audio-recorded interviews were transcribed verbatim. The analyses were performed using a thematic approach to identify patterns and emerging themes from the quotes based on inductive coding technique within the data and not based on pre-existing assumption or theory [19]. In this approach, two coders independently reviewed and familiarised themselves with the interview transcripts and later, collaboratively created a codebook of all preliminary codes that emerged from the independent analyses forming a meaningful framework after mapping of similar codes and renaming of the themes. Peer reviews, including counterchecking with other investigators occurred at several stages of the analyses to ensure rigor and reliability of response coding. Using interpretative description method, the responses from participants were embedded within the text to support and build evidence for the proposed themes and sub-themes. All the quotes included in the text were transcribed exactly from the original recordings. In the event of absence of clarity, additional descriptions were incorporated in parentheses to aid in understanding or [sic] was indicated where necessary. All interviews were coded using NVIVO version 11.0 (QSR International Pty Ltd). Participant characteristics A total of 11 couples were interviewed with 20 current car drivers and two former car drivers with a median of 68 years old (60-79) (S1 Table). All the participants had some form of education and most were retirees (n = 16, 72.7%). Almost all the current drivers drove their cars daily (n = 15/20, 75.0%), often their own locality (n = 19/20, 95.0%). Most of the participants reported no crash history within the past five years (n = 13, 59.1%). Amongst the two participants who reported to have ceased driving completely, both were females, reported having trouble with their vision, and were living with a spouse who drove a car almost daily. Thematic analyses Three major themes with 11 supporting sub-themes were identified relevant to mobility status: [1] our roles in driving; [2] challenges to continue driving and; [3] our driving strategies to ensure continued driving (Fig 1). All the male partners in this study reported that they were the principal drivers in the household, whilst female partners reported to be either the alternate drivers or former drivers. For many couples who continued to drive, they reported changing their driving habits, such as imposing driving limitations, sharing driving responsibility, and travelling by public transportation or mixed mode transportation (interchanging between public and private transportations for a single trip). For some couples, having a partner for navigation whilst driving was perceived important. More detailed descriptions of the themes with supporting quotes are elucidated in detail below. Theme 1: Our roles in driving. Sub-theme 1: Male partners as principal drivers. Our analysis revealed that male partners generally took on the principal role of driving the family around, especially in any family trips. Most male participants replied that this was the norm, and they rarely got other family members to drive, unless necessary (e.g., too tired to drive, unfamiliar with roads). This need to drive was even stronger amongst males whose female partners had ceased driving as they felt that this was their responsibility to chauffeur their spouse. Only two female participants were former drivers and they heavily relied on their male partners to chauffeur them after ceasing to drive. Our study also noted that it was common for female partners to rely on male partners for transportation. Most of them expressed that they would depend on their spouse for mobility if they were to cease driving someday. "Couple 1 Female: If I stopped driving, I will depend on my husband." Sub-theme 2: Female partners as alternate drivers. Our study also found that female spouses were often the alternate driver. Just like principal drivers, alternate drivers held an important driving role in their early phase of driving, to support the family needs, including chauffeuring the children to school related activities or visiting their grandparents. As they begin to age, most female participants suggest that they continued driving for personal needs. "Couple 3 Female: When my children were small, I have to take them to see doctor, take them to tuition, and take them for activities." "Couple 9 Female: Every day, I used to drive to work before I retire, drive my children to tuition, drive myself to the market, and then sometimes drive to church." Nevertheless, they often relied on the principal driver whenever they planned for any family outings and trips. They also, however expressed that they would take over the role as principal driver if needed, especially when the partners were not available. Sub-theme 4: Physical and physiological changes within me. Participants mentioned that they were very conscious about the physical changes and their driving abilities as they age. "Couple 5 Male: I will have to be conscious of the symptoms or signs as I am growing old." "Couple 8 Male: When you reach a certain age, you will find it a bit difficult to drive." Many foresee that they may face some inevitable degenerative changes due to ageing that will affect their ability to drive safely and longer, including having one or more comorbidities, suffering from vision and cognitive impairments, reduced physical strength and flexibility, and poor psychomotor ability. Driving was also described as physically exhausting and tiring for some participants, especially when it involved long distance trips. And if they were to cease driving eventually, they would prefer a gradual transition, unless they were physically disabled. "Couple 7 Male: Driving can be tiring; it needs a lot of energy." "Couple 5 Male: If I am incapacitated due to injury. I will have to force myself to stop driving." Apart from physical changes, psychological factors include low confidence, being fearful of traffic conditions or other road users, and concern about their safety as well as others. "Couple 10 Male: We know that we don't have the confidence to drive, better not to drive. With all the heavy traffics, it's terrible, and all the youngsters being reckless on the road, scares me." Their acceptance toward receiving advice from others varied. There were only a few who took the advice seriously, whilst the others did not. "Couple 3 Female: Sometimes, [I would take] his advice if I felt that his advice is good." Amongst those who did not, they viewed themselves as safe and experienced drivers. Sub-theme 6: Challenging driving environment. Some of the common challenges that the participants reported during driving were related to the environment they were in, namely; poorly maintained road signage, road conditions, complex road networks or unfamiliar places, reckless road users, and vehicle condition, including the size of the car (too big) or car with mechanical problems. Even though these strategies contribute to overall reduction in their driving, they would rather choose to limit their driving than to cease driving completely. Their primary goal was mainly to continue driving for as long as possible to keep doing what they want to do. "Couple 11 Male: Without driving, I cannot move around. Driving is very important for me." Despite their active driving routines, they also admitted that their driving habits had changed and that they were being more careful by planning their trips or practice defensive driving. Other examples include imposing driving limitations such as making fewer trips, driving shorter distances, driving at lower speed, and avoid driving in challenging situations such as at night, in the rain, and during peak hours. Sub-theme 8: Sharing our driving responsibilities. Sharing the driving responsibility was common amongst older couples who continued driving. There are two major factors which were taken into consideration namely, spouse's condition and nature of travel. Most of the time, they would share their driving if they were travelling long distances or when the partner was feeling tired, sleepy, or unwell. "Couple 8 Male: Sometimes we share, especially if we are travelling to far places, then we take turns." "Couple 3 Male: If it is a long journey, then yes, we take turns in driving." "Couple 1 Male: Sometimes, she will feel giddy [sic], so I will take over." Sub-theme 9: Spouse as a co-driver. There was a mix of responses for having the spouse as a co-driver (co-pilot). Some couples were positive due the benefits of having a co-driver (copilot) beside them, especially if they were making a trip to a new place. They also admitted that having a co-driver (co-pilot) will be good in later stages of their lives. Conversely, a few did not favour having a co-driver (co-pilot), as they felt that it was a distraction. "Couple 8 Male: It is a good idea when I reach the later stage of my life." Sub-theme 10: Public transportation. Many participants had reservation when it came to travel by public transportation. They expressed the difficulty in reaching destination, long waiting time, limited accessibility as well as unfamiliarity with the transit systems. "Couple 7 Male: There was one time when I took the bus, I did not know when to get down. I had the trouble to tell the driver where I want him to stop." Nevertheless, participants mentioned that they were willing to use public transportation to: [1] avoid traffic congestion; [2] as an alternative to driving if they were to travel somewhere far; and, [3] time-and effort saving, especially when it involved the search for a car parking space. "Couple 4 Male: If I am going to a congested area, I will take the public transport. Because it's terrible to get stuck in traffic jams." "Couple 4 Male: When you drive, you have to find a parking space and this is a hassle." Sub-theme 11: Mixed mode transportation. Unlike previous studies whereby mobility amongst older adults was often described as mono-mode (e.g., drive a car or travel by public transport all the way to reach their destination) and there was no sign of interchange between private and public transportations, travelling using two different modes of transportation was a unique coping strategy identified amongst older couples in this study. There were two common ways how it was applied in their travel routine. Most of them drove their car to the nearby train station and then used public transport to reach their destination. Another way is to depend on their spouse to drive them to nearby train station and then travel by public transport to continue their journey. "Couple 9 Male: I usually ask my wife to drive me to the station and then, when coming back, she will fetch me home." Discussion Driving transition can potentially impact the lives of older drivers and people within their social circles, especially their spouse or partner. We found that there was some form of association between older married couple's engagement level in driving and their driving decisions, which influenced their subsequent driving transition outcomes. Although male partners were the principal drivers in this study, it is important to acknowledge the different roles held by female partners as reflected in the themes and sub-themes identified, including their roles as a co-driver or a driving partner to share the duty of driving. Ultimately, this study highlighted several key findings emphasising the concept of interdependency in older married couples and their engagement in driving transition, including passive strategies adopted by female partners, impact of personal factors (intrapersonal and interpersonal) on driving decisions, and mixed mode transportation as a coping strategy. Whilst some of the factors and challenges (e.g., retirement, financial status, age-related changes, road system, and vehicle condition) identified in this study were previously reported in studies on older car drivers [2,4], this study also noted how spouses influence driving selfregulation and how this decision influenced their driving roles (principal, alternate or former driver). It appears that one driver's engagement in driving is impacted by his/her partner's engagement simultaneously. This engagement may exist in various forms, including responsibility/duty (e.g., chauffeuring), opinion (e.g., advice, comment) or even just their presence (e.g., co-driver/navigator). Our findings were consistent with the conclusion of previous studies. For example, Rosenbloom and Herbel noted that male partners often took on the primary role of driving compared to their female counterparts [20]. In this study, majority of females expressed being dependent on their spouse for mobility and often drove only if needed. As such, most of the female partners in this study who were either an alternate driver or a former driver adopted passive strategies such as accompanying the principal driver (spouse) as a passenger or helping them with navigation as a co-driver (co-pilot), whilst assisting their spouse to maintain their driving competence. Previous studies also found that some older couples began their reliance on one another for transport, by serving as a co-driver (co-pilot) for each other emphasising the significance of female partners in sustaining mobility [21][22][23]. Nevertheless, in this study, the need for a co-driver (co-pilot) was perceived important only if they were to travel to unfamiliar places. Whilst demographic and environmental factors (e.g., financial status and support systems) are important to ensure continued driving (Fig 1), personal factors are equally essential for older couples to be safe and mobile. These include driving abilities of the principal drivers (intrapersonal factor) and their willingness to bear the chauffeuring responsibility (interpersonal factor). In this study, mobility needs were maintained at satisfactory levels provided that at least one partner did not have any physical limitations affecting their ability to continue to drive safely. Most importantly, the driving partner must be willing to take charge of driving duty more than the other partner. A significant drawback of such strategy was that the driving partners that bear the responsibility may need to drive longer than they should. As such, this study suggests that self-regulation was less stressful and more adaptive amongst older married couples when both male and female partners continued driving. For instance, several couples adopted the strategy to share the duty of driving amongst themselves, given their nature of travel (long distances) and spouse condition (e.g. tired, sleepy, unwell). The advantage of such coping strategy for married couples was that it lessened the burden of the chauffeuring partner (often male partners) since both would take turns in driving. In terms of alternative transportation, older married couples did not perceive public transportation as equal to private transportation, as they lamented the lack of convenience when using public transportation. However, older adults mentioned that they would use public transportation in certain situations, such as travelling to faraway places or to a congested area. Unique to this study, mixed mode transportation was a form of driving self-regulation for current car drivers. They reported to utilise two modes of transport by shifting between public and private transportations for a trip. This they felt was a smart and cost-effective coping strategy; whereby older adults were able to maintain their driving abilities and stay mobile. Implications and recommendations In Malaysia, several national policies for older adults have been developed and implemented (e.g., National Policy for Older Persons and Plan of Action for Older Persons in 2011, National Health Policy for Older Persons in 2008), however there is currently no licensing system and medical assessment in place to ensure that older adults are medically fit and able to drive safely [24]. Furthermore, many older adults do not perceive public transportation (e.g., bus, train, etc.) to be as convenient as private transportation, especially when there is no additional supports to assist them [4]. Given the rapid ageing population in Malaysia, it is timely and essential to conduct research to better understand and support the needs of growing older population. Ongoing gerontology and geriatric research is warranted to formulate, monitor, and evaluate the effectiveness of policies and programs implemented. Whilst the key problem is to achieve the balance between safety and mobility in older drivers through policy and emerging technology, our study examined how married older adults help supported each other and adopted different driving and coping strategies in an effort to continue driving safely. The current study findings explored the impact of having at least one spouse who can drive on older couples' driving decisions and outcomes. We noted that the couple's engagement in driving may influence or constraint his/her partner's engagement in driving for transportation support [13,16]. Whilst this is true, not many studies have examined the impact of living arrangement (living with spouse or other immediate family members) on driving self-regulation. Previous studies evidenced some form of interactions between living arrangement (multi-person household) and the incident of driving cessation [9,25]. Traditionally, the family has been the main pillar of care and support for older people. Unlike Western community, Asians practice filial piety and often, adult children are the main providers of transport and financial support to the ageing parents [24]. However, demographic and social trends indicate more and more older adults will be living alone or with spouse only [2]. This suggests that future programs and interventions may need to consider the diversity of living arrangements as shaped by the presence of spouses, their roles, and contributions in driving transition and cessation planning, and the subsequent impact on driving decisions and outcomes. Given that driving ability is often linked to safety and mobility which affects wellbeing of the older populations [26,27], further research in this area is clearly warranted to better address this issue in the future. Strengths and limitations To our best knowledge, this is the first study examining older married couples' perceptions of driving self-regulation and strategies adopted to maintain mobility. However, this need to be taken into consideration on the limitations of the study. Firstly, participants were urban dwellers; thus, the findings may be skewed to the themes reflecting perspectives of older couples from urban setting, limiting the generalisability to the general older population in Malaysia. Secondly, some of the responses obtained were retrospective reflections of their driving experiences underscoring the need for prospective studies (with follow-up interviews) to validate themes identified in this study. In term of analyses, there were only two former drivers and thus, and we were unable to provide robust evidence on the impact of having a spouse who ceased driving on older couples' driving decisions and outcomes. Lastly, all male partners were principal drivers and none of the female partners held active driving role at the time of data collection. Therefore, our attempt to further validate findings on the driving roles was not possible due to limited information. Overall, interpretation of themes and sub-themes should be cautious beyond our sample characteristics and future research would need to be conducted amongst a larger sample with greater focus on older couples with female principal drivers and former car drivers to validate current study findings. Conclusions Current findings emphasised the importance of spousal influence (duty/responsibility, opinion, and presence) in driving self-regulation and the importance of couple-level engagement for later life mobility status. We deduce that shared driving was an ideal coping strategy for older married couples, particularly amongst those who may not wish to cease driving. As evident in this study, the diverse roles played by spouses during the driving task and their contributions in driving transition were found to be influential, underscoring the need to recognise the importance of interdependency in couples and the subsequent impact on their driving decisions and outcomes. Therefore, future programs and interventions should consider the different roles and contributions of spouses in driving transition and cessation planning, especially for older couples to maintain their independence in mobility and access in safety.
v3-fos-license
2022-03-23T15:33:11.575Z
2022-03-01T00:00:00.000
247605879
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1996-1944/15/6/2241/pdf", "pdf_hash": "5a5ad3b74dd2e090cb973e17ca14c135f2599355", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46791", "s2fieldsofstudy": [ "Chemistry", "Engineering" ], "sha1": "75cc0cb6a6edb76b902876317b36cb5aa3be127f", "year": 2022 }
pes2o/s2orc
CVD-Synthesis of N-CNT Using Propane and Ammonia N-CNT is a promising material for various applications, including catalysis, electronics, etc., whose widespread use is limited by the significant cost of production. CVD-synthesis using a propane–ammonia mixture is one of the cost-effective processes for obtaining carbon nanomaterials. In this work, the CVD-synthesis of N-CNT was conducted in a traditional bed reactor using catalyst: (Al0,4Fe0,48Co0,12)2O3 + 3% MoO3. The synthesized material was characterized by XPS spectroscopy, ASAP, TEM and SEM-microscopy. It is shown that the carbon material contains various morphological structures, including multiwalled carbon nanotubes (MWCNT), bamboo-like structures, spherical and irregular sections. The content of structures (bamboo-like and spherical structure) caused by the incorporation of nitrogen into the carbon nanotube structure depends on the synthesis temperature and the ammonia content in the reaction mixture. The optimal conditions for CVD-synthesis were determined: the temperature range (650–700 °C), the composition (C3H8/NH3 = 50/50%) and flow rate of the ammonia-propane mixture (200 mL/min). Introduction Steady interest in carbon nanotubes (CNTs) is due to their unique properties; already today, they find: practical applications-for the creation of fire-retardant materials, fuel cell electrodes, in catalysis-as catalyst supports, in nanoelectronics-for the creation of one-dimensional conductors, nanosized transistors, supercapacitors, in technology-as additives to polymer and inorganic composites to increase mechanical strength, electrical conductivity and heat resistance [1][2][3][4][5][6][7][8]. A new approach to changing the chemical and electrical properties of CNTs is the modification of the carbon structure by a heteroatom, nitrogen. Currently, to obtain nitrogendoped carbon nanotubes (N-CNTs), methods and approaches based on the direct formation of material from a nitrogen-containing carbon precursor or on the thermal treatment of undoped CNTs in a nitrogen-containing atmosphere are being developed [9][10][11][12][13][14]. The development of new economical methods for the synthesis of carbon nanomaterials in large quantities is a very urgent task because their widespread use is currently constrained by their high cost, which does not allow the use of N-CNTs on an industrial scale. Admittedly, the most flexible, providing a variety of possible synthesis modes, is the chemical vapor deposition (CVD) method [2]. It is known that the qualitative and quantitative composition of carbon nanotubes obtained by this method depends on the temperature, duration of synthesis, catalyst and gas mixture compositions [15][16][17][18][19][20][21]. It is known that the growth mechanism of N-CNTs is different from the CNT mechanism only by fact that the destruction of nitrogen precursor leads to the formation of nitrogen which diffuses like carbon to the catalyst volume [21]. N-CNT can form bamboo-like or spherical section structures. There are four stages reported in the literature model: (1) the catalyst reacts with carbon, forming carbide particle; (2) carbon is forming graphite layer on the surface of carbide particle; (3) new layers of graphite are formed with a cup-shaped structure; (4) the cup slides leaving a gap at the tip of the particle [22]. The choice of the catalyst composition for the synthesis of N-CNT by the method of direct incorporation of nitrogen into the carbon structure was made on the basis of preliminary experiments and literature analysis [23][24][25]. The choice of catalyst composition was justified by the following considerations. We proceeded from the fact that the catalyst should have the ability to: • form metastable carbides; • accelerate the reactions of dissociation of hydrocarbons with the formation of carbon; • form metastable nitrides; • to accelerate the reactions of dissociation of nitrogen-containing compounds, with the formation of nitrogen. These conditions are satisfied by elements of groups VI and VIII, which can be found in the compositions of catalysts for dehydrogenation, oxidative conversion, and dissociation of hydrocarbons. Most of these catalysts containing these elements simultaneously exhibit catalytic activity in the reaction of synthesis-dissociation of ammonia. The most accessible and cheap, at the same time, are compounds of Fe, Co, Ni, Mo, etc. [1,26]. Compounds of groups III, IV and VI can be considered as promoters that ensure the stability of the catalyst structure under the reaction conditions (Al, Cu, Mo) or as a support (Si, Al). In this case, compounds of VI and VIII groups can participate as a catalyst in both reactions [2]. The choice of the composition of the initial gas mixture is determined by availability and price. It is clear that CH 4 and NH 3 are the most accessible for these purposes, but despite this, only a few publications were devoted to the N-CNT synthesis using ammonia and propane [27][28][29]. These compounds are considered among the most stable when heated, but the dissociation reactions of methane and ammonia on iron and cobalt start at very different temperatures. Approximate decomposition temperature ranges for methane: 600-800 • C; propane: 400-700 • C; ammonia: 300-500 • C [29,30]. At high temperatures in the presence of catalysts in a methane-hydrogen mixture, it is difficult to maintain a high concentration of ammonia, since its dissociation begins at a noticeable rate already at temperatures of 300-350 • C. As a result, the concentration of nitrides on the surface of the catalysts and the content of ammonia in the gas mixture at N-CNT synthesis temperatures will be extremely low; this will not allow getting products with a high nitrogen content in them. In this work, these considerations substantiated the choice of propane (C 3 H 8 ), which exhibits high reactivity even at a temperature of 600 • C. Therefore, a mixture of propane with ammonia in various ratios was used as the initial mixture in this work. The novelty of this work is due to the possibility of obtaining nitrogen-doped carbon nanomaterials using inexpensive precursors and at a relatively low temperature of 650 • C. In the future, the development of a method for obtaining a material with a certain morphological structure and a certain nitrogen content will make it possible to evaluate the mechanism of the formation of carbon nanotubes through the stage of formation of nitrides. The aim of this work was to establish the dependence between the nitrogen content in the N-CNT, the composition of the initial mixture of propane and ammonia and synthesis temperature, and also established the morphological composition of the synthesized carbon materials. Materials For the synthesis of N-CNTs by chemical vapor deposition, the following gases were used: nitrogen-N 2 (99.99%), liquefied propane C 3 H 8 (CH 4 -0.3%, C 2 H 6 -4, 7%, C 3 H 8 -95%), anhydrous liquefied ammonia NH 3 (99.9%). All gases were purchased by NII KM (Moscow, Russia). In a beaker, weighed portions of crystalline hydrates of iron (III) nitrates (2.852 g), cobalt (0.493 g), aluminum (2.119 g) and glycine (1.711 g) are placed. Pure water (2 mL) is added to a sample of crystalline ammonium paramolybdate and heated to 40 • C, stirring until the salt is completely dissolved. Then, the resulting solution of ammonium paramolybdate is transferred into a beaker with weighed portions of salts, the mixture is stirred until complete dissolution. The mixture is heated and stirred until a clear solution of intense red-brown color is formed for 1 h at a temperature of 40 • C. The resulting solution is transferred to a porcelain cup, which is placed in a muffle furnace preheated to 550 • C for 10 min, then removed, cooled to room temperature. The flow rate of propane and ammonia was set using RRG-12 flow regulators (Eltochpribor, Zelenograd, Russia). The temperature regime for the synthesis of carbon nanotubes was set using a temperature controller TERMODAT-17E6 (PP "Control Systems", Perm, Russia). The flow rate of the product mixture was determined with an ADM G6691A flow meter (Agilent Tech., Santa-Clara, CA, USA). The ammonia flow rate at the outlet of the reactor was determined by gas titration. Characterization of Nitrogen-Doped Carbon Nanotubes The sizes of the carbon nanotubes were determined by LEO 912AM Omega (Carl Zeiss, Oberkochen, Germany) transmission electron microscope. Images were acquired at 100 kV accelerating voltage. The analysis of the microphotographs and the calculation of particle sizes were carried out using the Image Tool V.3.00 (Image Tool Software, UTHSCSA, San Antonio, TX, USA). At least 100 particles per sample were processed. The outer and inner diameters of the nanotubes were determined. The morphology of carbon materials was studied using a scanning electron microscope (SEM). The micrographs of the samples were taken on JSM 6510 LV + SSD X-MAX microscopes (JEOL, Tokyo, Japan) at an accelerating voltage of 20 kV. The XPS spectra were recorded using ESCA X-ray photoelectron spectrometer (OMI-CRON Nanotechnology GmbH, Taunusstein, Germany). The samples of N-CNT investigated by XPS spectroscopy contained catalyst. The parameters of the porous structure of the samples were calculated based on the isotherm of low-temperature nitrogen adsorption. The studies were carried out on a Gemini VII analyzer (Micromeritics, Norcross, GA, USA) at the Center for Shared Use. DI. Mendeleev. The specific surface area was determined by the BET method. The total pore volume was found from the maximum value of the relative pressure, equal to 0.995. The predominant pore diameter was calculated using the BJH method. Influence of Flow Rate and Composition of the Initial Gas Mixture It is known that ammonia on iron catalysts can dissociate at atmospheric pressure already at temperatures of 300-350 • C. Hydrocarbon gases under these conditions remain practically inert and begin to dissociate at a noticeable rate at temperatures above 600 • C; therefore, it is proposed to use propane as a carbon-containing precursor. In order to maintain the ammonia concentration under the synthesis conditions at a sufficiently high level, it is necessary to reduce the residence time of the reaction medium in the reactor by increasing its flow rate and lowering the temperature in the reactor. As can be seen from the results in Table 1, at a temperature in the reactor of 800 • C and a change in the mole flow of pure ammonia at the inlet to the reactor from 18.7 to 37.3 kmole/s, the mole flow of ammonia at the outlet remains practically unchanged. The result obtained indicates that in the studied range of mole flow rates in the outgoing mixture, an equilibrium concentration of unreacted ammonia is established, the level of which is determined by the temperature in the reactor. As the temperature in the reactor decreases, the flow rate and the equilibrium concentration of ammonia under stationary conditions increase. We concluded that it is not advisable to increase the ammonia concentration at the reactor outlet by reducing the contact time (or, which is also an increase in the linear velocity), since this can lead to catalyst carryover from the reactor and may be accompanied by an unsustainable increase in the consumption of raw materials. Secondly, as can be seen from the table, it is more rational to reduce the temperature in the reactor. For the synthesis of carbon nanotubes, the value of 26/1 kmol/s (200 mL/min) was chosen as the optimal flow rate of the initial mixture of propane and ammonia, while the synthesis of the carbon material was carried out at different ammonia contents in the initial mixture (C 3 H 8 /NH 3 , vol%): 100; 25:75; 50:50; 75:25; 90:10. The initial temperature for synthesis was chosen as 650 • C. In order to establish the dependence of the amount of nitrogen introduced into CNTs on the different compositions of the initial gas mixture, as well as to determine the electronic state of atoms on the surface of the material under study, the obtained samples were investigated by X-ray photoelectron spectroscopy. In Figure 1, the typical XPS spectra of CNT, obtained from propane are shown. For carbon, the line shape of the spectrum has a maximum with E = 284.6 eV, which is typical for sp 2 hybridization carbon structures. A characteristic peak for nitrogen is observed; in a pyridine-like (398.8 eV) state, other forms of nitrogen are absent. The formation of a product containing nitrogen in its structure could occur only from a mixture of gases that was formed when nitrogen was supplied to the reactor during its heating or cooling. Therefore, N 2 can also be a nitrogen precursor gas, under the given synthesis conditions. was formed when nitrogen was supplied to the reactor during its heating or cooling. Therefore, N2 can also be a nitrogen precursor gas, under the given synthesis conditions. The content of the different states of nitrogen, carbon and oxygen in synthesized samples are presented in Table 2. was formed when nitrogen was supplied to the reactor during its heating or cooling. Therefore, N2 can also be a nitrogen precursor gas, under the given synthesis conditions. The content of the different states of nitrogen, carbon and oxygen in synthesized samples are presented in Table 2. The content of the different states of nitrogen, carbon and oxygen in synthesized samples are presented in Table 2. Figures 3 and 4 show the dependences of the content of total nitrogen and various forms of nitrogen on the initial content of ammonia in the reaction mixture. 4 show the dependences of the content of total nitrogen and various forms of nitrogen on the initial content of ammonia in the reaction mixture. From Figures 3 and 4 it can be seen that with an increase in the ammonia content in the initial gas mixture, the total nitrogen content in the samples first increases to the maximum nitrogen content in the sample, synthesized from a mixture of C3H8/NH3 = (50/50%), and then decreases. In this case, the maximum content of the pyridine-like form of nitrogen is observed in the sample, which is synthesized from a mixture of C3H8/NH3 (75/25%). A graphite-like form is seen in the sample C3H8/NH3 (25/75%). The smallest content of oxidized forms of nitrogen contains the N-CNT, synthesized using C3H8/NH3 (25/75%). According to the literature, the maximum nitrogen content in a doped carbon material can reach about 10%; however, a high nitrogen content is not always justified from 4 show the dependences of the content of total nitrogen and various forms of nitrogen on the initial content of ammonia in the reaction mixture. From Figures 3 and 4 it can be seen that with an increase in the ammonia content in the initial gas mixture, the total nitrogen content in the samples first increases to the maximum nitrogen content in the sample, synthesized from a mixture of C3H8/NH3 = (50/50%), and then decreases. In this case, the maximum content of the pyridine-like form of nitrogen is observed in the sample, which is synthesized from a mixture of C3H8/NH3 (75/25%). A graphite-like form is seen in the sample C3H8/NH3 (25/75%). The smallest content of oxidized forms of nitrogen contains the N-CNT, synthesized using C3H8/NH3 (25/75%). According to the literature, the maximum nitrogen content in a doped carbon material can reach about 10%; however, a high nitrogen content is not always justified from From Figures 3 and 4 it can be seen that with an increase in the ammonia content in the initial gas mixture, the total nitrogen content in the samples first increases to the maximum nitrogen content in the sample, synthesized from a mixture of C 3 H 8 /NH 3 = (50/50%), and then decreases. In this case, the maximum content of the pyridine-like form of nitrogen is observed in the sample, which is synthesized from a mixture of C 3 H 8 /NH 3 (75/25%). A graphite-like form is seen in the sample C 3 H 8 /NH 3 (25/75%). The smallest content of oxidized forms of nitrogen contains the N-CNT, synthesized using C 3 H 8 /NH 3 (25/75%). According to the literature, the maximum nitrogen content in a doped carbon material can reach about 10%; however, a high nitrogen content is not always justified from the point of view of further use, including as catalyst supports [6,8,31]. Significant incorporation of nitrogen into the structure of carbon nanotubes occurs at a high level of ammonia; however, at a concentration of more than 50%, the process of nitrogen doping slows down, and such an effect of the ammonia content may be due to the different mechanism of formation of nitrogen-doped structures [8]. To establish the mechanism, first of all, it is required to trace which phase transformations the catalyst undergoes. We assume that nitrides or carbonitrides are formed during CVD-synthesis, and the ammonia concentration has a direct effect on this process. Figure 5 shows TEM-images of samples of synthesized N-CNT using a gas mixture with a different ratio of C 3 H 8 /NH 3 at a temperature of 650 • C. the point of view of further use, including as catalyst supports [6,8,31]. Significant incorporation of nitrogen into the structure of carbon nanotubes occurs at a high level of ammonia; however, at a concentration of more than 50%, the process of nitrogen doping slows down, and such an effect of the ammonia content may be due to the different mechanism of formation of nitrogen-doped structures [8]. To establish the mechanism, first of all, it is required to trace which phase transformations the catalyst undergoes. We assume that nitrides or carbonitrides are formed during CVD-synthesis, and the ammonia concentration has a direct effect on this process. Figure 5 shows TEM-images of samples of synthesized N-CNT using a gas mixture with a different ratio of C3H8/NH3 at a temperature of 650 °С. As can be seen from the images, the synthesized product is presented by different structures CNTs. There are several types of morphological structures found in the obtained products of nitrogen-doped carbon materials, including multiwalled carbon nanotubes (MWCNT), bamboo-like structures, spherical and irregular sections [13,25]. These types of structures are presented on Figure 6. As can be seen from the images, the synthesized product is presented by different structures CNTs. There are several types of morphological structures found in the obtained products of nitrogen-doped carbon materials, including multiwalled carbon nanotubes (MWCNT), bamboo-like structures, spherical and irregular sections [13,25]. These types of structures are presented on Figure 6. the point of view of further use, including as catalyst supports [6,8,31]. Significant incorporation of nitrogen into the structure of carbon nanotubes occurs at a high level of ammonia; however, at a concentration of more than 50%, the process of nitrogen doping slows down, and such an effect of the ammonia content may be due to the different mechanism of formation of nitrogen-doped structures [8]. To establish the mechanism, first of all, it is required to trace which phase transformations the catalyst undergoes. We assume that nitrides or carbonitrides are formed during CVD-synthesis, and the ammonia concentration has a direct effect on this process. Figure 5 shows TEM-images of samples of synthesized N-CNT using a gas mixture with a different ratio of C3H8/NH3 at a temperature of 650 °С. As can be seen from the images, the synthesized product is presented by different structures CNTs. There are several types of morphological structures found in the obtained products of nitrogen-doped carbon materials, including multiwalled carbon nanotubes (MWCNT), bamboo-like structures, spherical and irregular sections [13,25]. These types of structures are presented on Figure 6. From a review of the literature, we can conclude that the nanotubes with bamboo-like, spherical and irregular structures (Figure 6b-d) are most likely nitrogen-doped carbon materials [13,25]. Based on the results of transmission electron microscopy of the samples, Table 3 contains the percentage of different type structures in samples, synthesized in the presence of NH 3 content, and the values of predominant inner and outer diameters. It can be seen that the composition of the initial gas mixture affects the morphology, at various ratios of initial gases, certain morphological structures prevail in the product. With an increase in the content of ammonia in the mixture, there is an increase in the content of bamboo-like structures and a decrease in the content of irregular structures. For the sample synthesized from C 3 H 8 /NH 3 (50/50%) mixture, the quantitative ratio of all the observed structures occupies an intermediate position with respect to other samples. It also corresponds to the highest total content of fibers with bamboo-like and spherical sections (81.8%). The values of the predominant outer and inner diameters lie in the range from 31 to 24 and from 24 to 15, respectively. Despite the change in the fractional composition of the carbon material with a change in the initial content of ammonia, no significant difference in the sizes of nanotubes is observed. Temperature Effect The choice of synthesis temperature is based on the dependence of the content of nitrogen atoms embedded in the CNT structure. For this purpose, several CNT samples were synthesized in an ammonium-propane mixture at several temperatures at a constant ratio of ammonia and propane in the initial gas mixture (50/50%). The Table 4 shows the values of product yield and residual catalyst content. An important parameter in the synthesis of CNTs is their yield, which is determined by the ratio of the mass of the formed product to the mass of the initial catalyst. As can be seen from the presented data, with an increase in the synthesis temperature, an increase in the yield of N-CNTs is observed; moreover, the increase in initial ammonia content leads to the decrease of the N-CNT yield. It is known that the amount of nitrogen contained in CNTs is affected by the conditions of synthesis [22,31]. Table 5 contains the results of XPS-analysis for samples, synthesized at different temperature and constant gas mixture (50/50%). From the data presented in Table 5, it can be seen that with an increase in the process temperature from 650 • C to 800 • C, a general decrease in the nitrogen content in N-CNTs is observed. This result may be due to a decrease in the concentration of ammonia in the gas phase. It can be also seen that a decrease in temperature promotes the synthesis of materials containing non-oxidized forms of nitrogen incorporated into the carbon structure. An increase in the synthesis temperature leads to the appearance of oxidized forms of nitrogen. At temperatures of 700 • C and 750 • C, nitrogen in the tubes is in two states: pyridine-like and graphite-like, while the content of nitrogen in the graphite-like state is higher than in the pyridine-like state at any synthesis temperature. With an increase in temperature to 800 • C, a third form of nitrogen appears: oxidized. In addition, oxygen was found in the product, the content of which decreases with increasing synthesis temperature. One of the reasons for the appearance of oxygen in the samples, is residual catalyst in N-CNT, which components can be only partially reduced to a metallic state. Figure 7 shows TEM and SEM-images of samples of synthesized N-CNT using a C 3 H 8 /NH 3 (50/50%) gas mixture at a temperature 800 • C. in the yield of N-CNTs is observed; moreover, the increase in initial ammonia content leads to the decrease of the N-CNT yield. It is known that the amount of nitrogen contained in CNTs is affected by the conditions of synthesis [22,31]. Table 5 contains the results of XPS-analysis for samples, synthesized at different temperature and constant gas mixture (50/50%). From the data presented in Table 5, it can be seen that with an increase in the process temperature from 650 °С to 800 °С, a general decrease in the nitrogen content in N-CNTs is observed. This result may be due to a decrease in the concentration of ammonia in the gas phase. It can be also seen that a decrease in temperature promotes the synthesis of materials containing non-oxidized forms of nitrogen incorporated into the carbon structure. An increase in the synthesis temperature leads to the appearance of oxidized forms of nitrogen. At temperatures of 700 °С and 750 °С, nitrogen in the tubes is in two states: pyridine-like and graphite-like, while the content of nitrogen in the graphite-like state is higher than in the pyridine-like state at any synthesis temperature. With an increase in temperature to 800 °С, a third form of nitrogen appears: oxidized. In addition, oxygen was found in the product, the content of which decreases with increasing synthesis temperature. One of the reasons for the appearance of oxygen in the samples, is residual catalyst in N-CNT, which components can be only partially reduced to a metallic state. Figure 7 shows TEM and SEM-images of samples of synthesized N-CNT using a C3H8/NH3 (50/50%) gas mixture at a temperature 800 °С. As can be seen from the presented figures, the carbon nanomaterial is presented by nanotubes of various morphologies, including multiwalled carbon nanotubes (MWCNT), bamboo-like structures, spherical and irregular sections, similar to the material synthesized at 650 °C. As can be seen from the presented figures, the carbon nanomaterial is presented by nanotubes of various morphologies, including multiwalled carbon nanotubes (MWCNT), bamboo-like structures, spherical and irregular sections, similar to the material synthesized at 650 • C. Based on the obtained TEM-images, the predominant inner and outer diameters and percentage of different morphological structures of nitrogen-doped CNTs were calculated. These results are presented in the Table 6. It can be seen that the synthesis temperature affects the fractional composition of the resulting carbon material. With an increase in temperature, the content of bamboo-like and irregular structures increases, and the content of structures with spherical sections decreases. Based on the fact that only bamboo-like and spherical structures can be attributed to nitrogen-doped carbon nanotubes, the optimal temperature range for synthesis is 650 • C. The values of the predominant outer and inner diameters lie in the range from 26 to 20 and from 16 to 8, respectively. There is no significant change in the particle size with an increase in the synthesis temperature. For investigating of doped N-CNT porous structure samples o with a total nitrogen content (N) total of 4.6 and 2.3%, obtained at temperatures of 700 and 800 • C, were studied using low-temperature nitrogen adsorption, which are presented on the Figure 8. Based on the obtained TEM-images, the predominant inner and outer diameters and percentage of different morphological structures of nitrogen-doped CNTs were calculated. These results are presented in the Table 6. It can be seen that the synthesis temperature affects the fractional composition of the resulting carbon material. With an increase in temperature, the content of bamboo-like and irregular structures increases, and the content of structures with spherical sections decreases. Based on the fact that only bamboo-like and spherical structures can be attributed to nitrogen-doped carbon nanotubes, the optimal temperature range for synthesis is 650 °C. The values of the predominant outer and inner diameters lie in the range from 26 to 20 and from 16 to 8, respectively. There is no significant change in the particle size with an increase in the synthesis temperature. For investigating of doped N-CNT porous structure samples o with a total nitrogen content (N)total of 4.6 and 2.3%, obtained at temperatures of 700 and 800 °С, were studied using low-temperature nitrogen adsorption, which are presented on the Figure 8. The isotherms refer to Type II according to the De-Boer classification, and indicates the occurrence of polymolecular adsorption. Based on the obtained isotherms, the main characteristics of the porous structure of were calculated: specific surface area, pore volume, and predominant pore diameter. The obtained values are shown in Table 7. The isotherms refer to Type II according to the De-Boer classification, and indicates the occurrence of polymolecular adsorption. Based on the obtained isotherms, the main characteristics of the porous structure of were calculated: specific surface area, pore volume, and predominant pore diameter. The obtained values are shown in Table 7. As can be seen from the presented data, the studied samples of carbon material have mesoporous structure, the specific surface of which is 113 and 215 m 2 /g. The pore volume of the samples is 0.4 and 1.2 cm 3 /g, the main contribution to which is made by mesopores, which are formed due to the interparticle volume between carbon nanotubes. Mesopore size distribution calculated BJH method (Barrett-Joyner-Halenda) shows that the predominant pore size is about 3-4 nm. An increase in the surface area and pore volume may be associated with an increase in the content of amorphous carbon with an increase in the synthesis temperature. Discussion The main purpose of this work is to choose the conditions for a simple and costeffective process for obtaining N-CNTs, in which the incorporation of the N atom into the structure of nanotubes will be possible. The carbon material was obtained by embedding a carbon material into the crystal lattice during synthesis using a (Al 0.4 Fe 0.48 Co 0.12 ) 2 O 3 catalyst doped with MoO 3 (3%) and a propane-ammonia gas mixture. The choice of ammonia and propane as a precursor of the doped carbon material was due to the low cost and the possibility of obtaining the material already at a temperature of 650 • C. An analysis of the literature data showed that this temperature of N-CNT synthesis is relatively low compared to other variants of CVD-synthesis using light hydrocarbons [22]. The content of nitrogen embedded in the crystalline structure of carbon varied due to changes in the ammonia content in the reaction mixture and the synthesis temperature. A total nitrogen content of about 5% has already been achieved with an ammonia content of 25% in the mixture. A further increase in the ammonia content to 90% did not lead to a significant change in the content of total nitrogen; however, the propane/ammonia ratio affects the content of a certain form of nitrogen (graphite-like, pyridine-like and oxidized forms of nitrogen), as well as the fractional composition of N-CNT. According to the TEM results, it was found that the synthesized carbon material contains various morphological structures, including multiwalled carbon nanotubes (MWCNT), bamboolike structures, spherical and irregular section structure [26]. In accordance with the literature data, only bamboo-like and spherical section structures can be classified as structures in which nitrogen is embedded in the crystalline structure of carbon [13,25]. It was found that the maximum number of such structures is formed when using an equimolar propane/ammonia mixture. The next part of the work was devoted to establishing the influence of the synthesis temperature on the N content and the morphological composition of the carbon material. A series of experiments was carried out at a constant propane/ammonia equimolar ratio; it was found that with an increase in temperature from 650 to 800 • C, the product yield increases, but the total nitrogen content decreases from 5.5 to 2.3%. In addition, the analysis of TEM and SEM images showed that there is a change in the content of various morphological structures; there is a decrease in the content of the spherical section structure and irregular structures begin to accumulate. An increase in the product yield, accumulation of irregular structures, and an increase in the specific surface area of the final product indicate the accumulation of amorphous carbon, which in turn leads to a deterioration in the quality of the resulting product. Based on this, it is advisable to carry out the synthesis of N-CNT at temperatures not exceeding 700 • C. The temperature dependence of the growth rate of N-CNT can be related to several factors such as concentration, diffusion rate, and growth rate at the interface between the catalyst and the formed nanotube. With an increase in temperature, the rate of diffusion of carbon and nitrogen atoms increases significantly, and the growth rate of N-CNT will correspondingly increase [12]. As a result of this work, the optimal conditions for obtaining doped carbon material (at least 80% N-CNT, total nitrogen content at least 5-6%) from readily available precursors were established. Further studies of CVD-synthesis of N-CNT using propaneammonia mixture can be directed to: detailed consideration of the growth process of carbon nanotubes and phase transformations of the catalyst during growth; an increase in the activity of catalysts based on Al, Co, Fe oxides and a search for promoters, including molybdenum oxides. Conclusions In this work, nitrogen-doped carbon materials were synthesized using propane and ammonia by the CVD method. It was shown that the total nitrogen content, as well as the content of a certain form of nitrogen (graphite-like, pyridine-like and oxidized forms of nitrogen) depend on the initial ammonia content in the reaction mixture and the synthesis temperature. The optimal conditions for N-CNT synthesis were chosen as follows: mixture flow rate, 200 mL/min; the composition of the reaction mixture-C 3 H 8 /NH 3 (50/50%); temperature range-650-700 • C. Under these conditions, a mesoporous carbon material is formed with a total nitrogen content of about 5% and with a content of nitrogen-doped structures (bamboo-like and spherical) of at least 80%. Data Availability Statement: The data presented in this study are available on request from the corresponding author. Conflicts of Interest: The authors declare no conflict of interest.
v3-fos-license
2018-09-15T22:43:47.302Z
2018-08-24T00:00:00.000
52135645
{ "extfieldsofstudy": [ "Medicine", "Biology" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://doi.org/10.3382/ps/pey350", "pdf_hash": "4c17a728b722bffbba1921e05c1ac8ff333cd6f9", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46792", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "sha1": "4c17a728b722bffbba1921e05c1ac8ff333cd6f9", "year": 2018 }
pes2o/s2orc
A reassessment of the vitamin D requirements of modern broiler genotypes ABSTRACT We hypothesized that performance and bone mineralization of 2 broiler lines will benefit from increasing vitamin D (vitD) supplementation above current commercial levels and by partial substitution of D3 by 25-OH-D3. Male Ross 308 and 708 chicks (n = 576), were offered diets with low (LD; 1,000), medium (MD; 4,000) or high levels of D3 (HD; 7,000 IU/kg), and medium levels of vitD where the majority of D3 was substituted by 25-OH-D3 (25MD; 1,000 D3+3,000 25-OH-D3 IU/kg). Performance was measured at the end of starter (day 10), grower (day 24), and finisher periods (day 38). Three birds per pen were dissected at the end of each period to assess tibia and femur ash percentage (%), ash weight, bone breaking strength (BBS), and serum levels of 25-OH-D3. Remaining birds were gait scored (GS) at day 37 of age. Genotype and diet did not interact for any trait, whilst performance was not affected by diet. Ross 708 had lower body weight (P < 0.005), higher feed conversion ratio over the grower period (P < 0.05), similar levels of 25-OH-D3, but higher GS (P < 0.05) than Ross 308. Serum 25-OH-D3 levels were affected by diet at the end of the starter and grower periods (P < 0.05), being lowest for LD and highest for 25MD. Diet affected GS (P < 0.01), being higher in LD than 25MD. Femur ash % was higher at the end of the starter and grower periods for 25MD than LD and for both HD and 25MD than LD (P < 0.05). Femur and tibia ash weight were higher for 25MD in comparison to LD birds (P < 0.05) at the end of the grower period. Femur and tibia BBS were higher (P < 0.05) for 25MD in comparison to LD at the end of the grower and finisher periods, respectively. Overall, effects of vitD supply were more pronounced for femur than for tibia mineralization. Results do not suggest supplementation of vitD above current maximum levels and support partial substitution by 25-OH-D3. INTRODUCTION Genetic selection of broiler chickens has amongst other characteristics improved substantially growth rate and feed conversion ratio (FCR) (Siegel, 2014), and has shifted focus towards breast muscle rather than leg muscle yield, moving the center of gravity of the bird forward (Corr et al., 2003a). Higher growth rates have been suggested as a cause for the increasing occurrence of skeletal abnormalities of the locomotory sys-tem observed in broilers (Kestin et al., 2001;Williams et al., 2004), although genetic selection for improving leg health may reduce their incidence, despite somewhat unfavorable genetic correlations with growth rates (Kapell et al., 2012). Vitamin D (vitD) is involved in skeletal integrity through the stimulation of the expression of genes in the small intestine which govern intestinal calcium and phosphorus absorption, in bone to osteoclast differentiation and calcium reabsorption promoting mineralization of the bone matrix (St-Arnaud, 2008). It can either be produced in the skin by the photochemical conversion of the provitamins ergosterol and 7-dehydrocholesterol to cholecalciferol (D 3 ) or absorbed from the diet in the intestinal tract. D 3 is hydroxylated to 25-hydroxycholecalciferol (25-OH-D 3 ) primarily in the liver and is circulated by the vitD binding protein (Haussler et al., 2013). This form is further hydroxylated in the kidneys, but also in the intestine, to the hormonally active form, 1α,25-dihydroxycholecalciferol (1,25-OH-D 3 ) (Fleet and Schoch, 2010). The use of windowless houses in conventional broiler farms does not allow 330 the skin to produce endogenous vitD, whilst raw materials in broiler diets contain little or no vitD (Atencio et al., 2005). As a result requirements are covered by means of dietary supplementation only (Waldenstedt, 2006). It has been demonstrated that dietary 25-OH-D 3 is absorbed more efficiently than D 3 in the upper portion of the intestine of broilers (Bar et al., 1980). At comparable levels of D 3 , the 25-OH-D 3 isomer has been generally shown to improve performance and skeletal health (Cantor and Bacon, 1978;Fritts and Waldroup, 2003;Yarger et al., 1995b). Historically, inclusion levels of vitD at a commercial level far exceed what is typically reported as the requirement by the NRC (200 IU/kg of feed) (Applegate and Angel, 2014). The latter, however, are based on old type, slower growing birds, whilst the current legal dietary limit in the European Union is 5,000 IU/kg feed. The basis for establishment of the upper limit is rather ambiguous and does not take into account bird genetic improvement; previous studies indicate adverse effects on performance, bone ash and renal calcification at levels of supplementation levels above 20,000 IU/kg (Baker et al., 1998;Browning and Cowieson, 2014;Yarger et al., 1995a). In commercial practice, dietary inclusion levels range between 2,000 and 5,000 IU/kg of feed (Whitehead et al., 2004;Leeson, 2007). In legislation there is no distinction about the form of vitD that can be added in the diet, whilst the European Food Safety Authority prohibits its addition to water (European Food Safety, 2009). The objective of the present study was to re-evaluate the vitD requirements of 2 modern growing modern broiler genotypes offered Ca and P adequate diets. We hypothesized that at increasing levels, beyond the current legal limits, vitD supplementation of fast growing broilers will lead to improved skeletal integrity as measured by bone breaking strength (BBS), long bone mineralization, and better walking capacity. Furthermore, effects on bone mineralization were expected to be more pronounced at initial stages of growth when mineralization of the skeleton occurs most rapidly (Angel, 2007;Talaty et al., 2009). In addition, we hypothesized that if legal dietary maximum levels are to be maintained, partial substitution of D 3 by 25-OH-D 3 will be beneficial for bird skeletal integrity. Birds, Husbandry and Diets All procedures were conducted under the UK Animals (Scientific Procedures) Act 1986 and were approved by the Animal Welfare and Ethical Review Body (AWERB) of Newcastle University. The trial was conducted in 2 rounds with an 8-wk interval. For each round, 144 male Ross 308 chicks and 144 male Ross 708 d old chicks were obtained from a commercial hatchery, vaccinated against avian infectious bronchitis virus (576 in total). Ross 308 is one of the most commonly used modern growing genotypes. Ross 708 is also a modern growing genotype, albeit displaying inferior FCR and ADG during the starter and grower period, commonly used in roaster market and is selected for high breast muscle yield. The same flocks of origin were used for both rounds, and were of 32 and 40 wk of age for the Ross 308 flock and of 36 and 44 wk of age for the Ross 708 flock, during the first and the second round respectively. Birds were housed in a windowless thermostatically controlled building in 24 circular pens with a diameter of 1.2 m (1.13 m 2 ), at an initial stocking density of 12 birds/pen. Pens were equipped with tube feeders and bell-drinkers and wood shavings were used as litter to a depth of 5 cm. Birds had ad libitum access to feed and water throughout the trial. Heat was supplemented with dull emitter ceramic bulbs. Temperature at pen level was monitored daily and maintained to meet Aviagen recommendations for spot brooding (Aviagen, 2014b), starting at 34 • C at chick placement and gradually reduced to 21 • C by 27 d, where it was maintained until 39 d of age when the trial was terminated. Light intensity at pen level was 80 lx whilst a lighting schedule of 23L:1D was applied for the first 7 d of age switched to 18L:6D for the remainder of the trial. All birds were individually wing-tagged at day 2 post-hatch. A basal starter (day 1 to 10) grower (day 11 to 24) and finisher (day 25 to 39) diet were manufactured according to nutrition specifications (Aviagen, 2014a) apart from level and source of vitD (see below). The starter diet was offered in crumbled form and the grower and finisher diets in pelleted form (Table 1). Experimental Design The experiment was 4 × 2 × 2 factorial design with dietary treatment, genotype, and round as the independent variables. Upon arrival, day old chicks of each of the Ross 308 and 708 genotypes were randomly allocated to 1 of 4 dietary treatments: a diet offering low level of D 3 (LD; 1,000 IU/kg) which aimed at inducing a marginal vitD deficiency (Fritts et al., 2003), a diet offering medium level of D 3 , close to what is used in commercial practice (MD, 4,000 IU/kg), a diet offering high levels of D 3 , which is above the European Union legal limit of 5,000 IU/kg (HD; 7,000 IU/kg D 3 ) and a diet offering the medium level of vitamin D 3 , where the majority of D 3 was substituted by 25-OH-D 3 (25MD; 1,000 D 3 +3,000 25-OH-D 3 IU/kg). Each treatment (genotype × diet) had 3 replicate pens for each experimental round. Three birds per pen were selected based on their wing tag, 2 of which were sampled on day 10, day 24, and day 38 and 1 on day 11, day 25, and day 39. Pen body weight (BW) was measured at placement and pen FI and individual bird BW was measured at Blood Sampling and Serum Levels of 25-OH-D3 The selected birds were individually weighed before blood-sampling via the wing vein and were subsequently euthanized with a lethal injection of sodium barbiturate (Euthatal, Merial Harlow, United Kingdom). Blood was placed in 5 mL serum tubes with serum clot activator, gel separator (BD Vacutainer, SST II Advance Plus Blood Collection Tubes-BD, Plymouth, United-Kingdom). Samples were allowed to clot for 1.5 h at room temperature and serum was collected, following centrifugation for 5 min at 1,300 × g in Eppendorf tubes and stored at −20 • C pending analyses for serum levels of 25-OH-D 3 using a commercially available ELISA kit specifically designed for chicken serum or plasma (MyBioSource, San Diego, CA). Bone Measurements Following euthanasia the right tibia and femur were immediately dissected, defleshed, and stored in polystyrene air tight sealed bags at −20 • C. Bones were thawed at 4 • C in a walk-in fridge overnight and tibia and femur length and diameter at the center of the diaphysis were measured with digital callipers. Bones were subjected to a 3-point break test using an Instron testing machine (Instron 3340 Series, Single Column-Bluehill, Fareham Hants, UK). The testing support consisted of an adjustable 2-point block jig, spaced at 30 mm for tibias and 20 mm for femurs for the 10 and 11 d-old birds, and 30 mm for the older birds. The crosshead descended at 5 mm/min until a break was determined by measuring a reduction in force of at least 5%. Following breaking strength, bones were split in 2 and the bone marrow was manually removed. Subsequently bones were soaked in petroleum ether for 48 h for lipid removal, were then placed in an oven at 105 • C for 24 h and the dry bone weight was recorded. Samples were then ashed for 24 h at 600 • C for the determination of ash weight (g) and ash percentage (%). In birds dissected at day 11, 25, and 39 of age, tibia calcium and phosphorous ash content was measured by an inductively coupled plasma (ICP) emission spectroscopic method. Calcium and phosphorous standard for ICP (Fluka, Neu-Ulm, Switzerland) was prepared. Ashed sample of 0.5 g was weighed in a 100 mL beaker and 30 mL of 6 N HCl was added to each sample and placed in a fume cupboard to digest for 18 h at room temperature. Samples were then placed on a hot plate (100 • C) in a fume cupboard and slowly brought to the boil and digested for 30 min with 50% (6 M) hydrochloric. The samples were cooled and quantitatively transferred to a 250 mL volumetric flask. Beakers were rinsed twice with 7 mL of 5% HCL to ensure complete transfer. The final volume was made up to 250 mL with 1% nitric acid solution. 1 mL of sample filtrate was pipetted into a 50 mL centrifuge tube and made up to 50 mL with 1% nitric acid solution. Samples were centrifuged at 1,500 rpm for 10 min at room temperature and introduced to an ICP spectrometer equipped with a CCD detector (ICP-OES; Varian Vista MPX, Varian, Palo Alto, CA, USA) and measured at wavelength 317.933 for calcium (Ca) and 213 nm for phosphorus (P) against prepared standards ranging from 1 to 100 ppm in calcium and phosphorus content. Gait Scoring The remaining birds were individually assessed for their walking capacity at 37 d of age, using the 0 to 5 gait scoring system of Kestin et al. (1992), where a score of 0 represents a perfect gait and a score of 5 represents inability to stand. Briefly, gait score 0 (GS 0) describes a bird with no detectable gait abnormality; GS 1 birds have a slight walking defect; GS 2 birds have an identifiable defect; a GS 3 bird has an obvious gait abnormality; GS 4 birds have a severe gait defect, only walking when motivated. There were no birds with a GS of 5 in the current trial. The wing tag of each bird was noted and the bird was subsequently assessed by 3 independent observers, directly in their pens, after being herded away from the other birds. The score was then discussed between observers and the bird was possibly re-assessed until an agreement was reached. Statistics Pen was the experimental unit for all data acquired at the end of starter, grower, and finisher period and all analysis were carried with SAS software (SAS 9.3, Cary, NC, USA). Femur and tibia ash % were averaged per pen for the 3 birds dissected at the end of each period. Tibia and femur dimensions (mm), and ash weight (g) obtained from the 3 sampled birds per pen at the end of each of the 3 periods were expressed as a proportion of individual BW at dissection (kg) and were then averaged per pen to account for differences in growth between genotypes, rounds, and sampling days. Expressing bone variables as a proportion of BW has been previously used in studies comparing genotypes differing in their growth potential (Shim et al., 2012). Analyzed Ca and P expressed as a percentage of tibia ash, Ca: P ratio and levels of 25-OH-D 3 from 1 bird per pen dissected at day 11, day 25, and day 39 were analyzed with bird representing the pen. Data were statistically analyzed with the GLM procedure with dietary treatment, genotype, and round as the main effects including all 2-way and the 3-way interaction among and between the main effects. For the analysis of average pen BW and ADFI for the starter, grower and finisher stages the average pen BW obtained at day 2 of birds maintained until day 10, day 24, and day 38 post-hatch, respectively, was used as a covariate respectively to account for differences in starting BW among treatments and between rounds. Initial analysis revealed the presence of round effects attributed to the older age of the maternal flocks at Round B, which resulted in a higher BW at placement in round B in comparison to round A (43.98 ± 0.24 vs. 38.06 ± 0.16 respectively). However, there were no significant interactions between round and the other factors for any of the measured variables. Therefore round and its 2-way and 3-way interaction with genotype and diet were excluded from the final model, which included only genotype, diet, and their interaction. When significant differences were detected, treatment means were separated and compared by the Tukey's multiple comparison test. For assessing the normality of the residuals the Shapiro-Wilk test was used. Significance was determined at P < 0.05. Values are expressed as model predicted means along with their pooled SEM. Additional polynomial contrasts were performed to study the linear and quadratic responses of bone variables and serum 25-OH-D 3 levels to increased levels of D 3 (1,000, 4,000, and 7,000 IU/kg). Single degree of freedom contrasts were carried to compare the 25MD treatment with the MD and HD treatment and the MD with the HD treatments, on bone variables and serum 25-OH-D 3 levels. Performance and GS Both diet and its interaction with genotype did not affect any of the performance variables. Main effects of genotype and diet are presented in Table 2. Genotype significantly affected BW (P < 0.05) with birds of the Ross 708 genotype achieving lower BW than the Ross 308 at the end of the starter (P < 0.05), grower (P < 0.01), and finisher period (P < 0.0001). At the same time, they had significantly higher FCR during the grower period (P < 0.05). Genotype and diet did not interact for GS. Birds of the Ross 708 genotype had significantly higher GS (P < 0.05) than birds of the Ross 308 genotype. Diet significantly affected GS (P < 0.01), with birds on the LD treatment achieving significant higher GS than birds on the 25MD treatment. Bone Measurements There was no significant interaction between genotype and dietary treatment on any of the bone variables tested. Main effects of genotype and diet of the GLM model, as well as linear and quadratic effects and contrasts on femur and tibia variables are presented in Tables 3 and 4, respectively. Genotype. Birds of the Ross 708 growing genotype had significantly longer femurs per unit of BW (P < 0.01) at the end of the finisher period, which were significantly wider at the end of the starter (P < 0.05) and finisher periods (P < 0.01) in comparison to those of the Ross 308 genotype. Tibias were significantly longer (P < 0.001) per unit BW in birds of the Ross 708 genotype at the end of the finisher period than in Ross 308 birds. Tibia of birds of the Ross 708 line yielded less ash weight at the end of the grower (P < 0.01) and finisher periods (P < 0.01). Diet. Femur ash weight was affected by diet (P < 0.05) at the end of the grower period, being significantly higher (P < 0.05) for birds on the 25MD diet in comparison to the LD treatment. Femur BBS was significantly affected during the finisher period (P < 0.01), being significantly higher for the 25MD in comparison to LD birds (P < 0.05). Femur ash % was significantly affected by diet at the end of the starter and grower periods; it was significantly higher (P < 0.05) for birds on the 25MD than on the LD treatment (P < 0.05) and for birds in both HD and 25MD treatments in comparison to birds on the LD treatment (P < 0.05). Tibia width was significantly affected by diet (P < 0.01), being significantly higher for both 25MD and MD birds (P < 0.05) than LD birds at the end of the grower period. Tibia ash weight was significantly affected by diet (P < 0.05), being higher in 25MD birds in comparison to LD birds at the end of the grower period (P < 0.05) and tended to be affected at the end of the finisher period being numerically higher for birds on the 25MD treatment. Tibia BBS was significantly affected (P < 0.05) at the end of the grower period being higher for 25MD birds than LD birds (P < 0.05). Linear and Quadratic Effects and Contrasts. Significant linear effects (P < 0.05) were obtained at the end of the grower period for femur ash % and for tibia ash weight, both increasing at increasing levels of D 3 . A quadratic effect was obtained at the end of the grower period for tibia width (P < 0.01) and at the end of the finisher period for femur ash weight (P < 0.05). Contrasts between MD and 25MD treatments revealed that 25MD treatment resulted in significantly higher femur ash weight at the end of the finisher period and tibia ash weight at the end of the grower and finisher periods weight (P < 0.05). Similarly, higher tibia BBS was achieved at the end of the grower period for 25MD in comparison to the MD treatment weight (P < 0.05). There was a significant linear effect of diet on tibia Ca percentage on day 11, with its levels increasing as vitD increased. However, no significant differences were revealed when contrasting 25MD with HD or MD with HD other than a significantly lower tibia width for HD in comparison to both other treatments (P < 0.05). Serum 25-OH-D3 There was no significant effect of genotype and no significant interaction between genotype and dietary treatment on serum 25-OH-D 3 levels of birds dissected at day 11, day 25, and day 39 of age. Main effects of genotype and diet of the GLM model, as well as linear and quadratic effects and contrasts, are presented in Table 5. Dietary treatment significantly affected (P < 0.001) serum 25-OH-D 3 levels at both day 11 and day 25 of age (P < 0.0001) ( Table 5); the effect of dietary levels of D3 was linear, with increasing level of D3 leading to increased serum 25-OH-D 3 levels. 25MD birds contained higher levels of serum 25-OH-D 3 than MD birds on d11 and d25. DISCUSSION In the present study we assessed the effects of vitD supplementation by offering a diet with suboptimal levels of D 3 (LD), diets with levels close to commercial recommendations where the additional vitD was offered either as D 3 (MD) or as 25-OH-D 3 (25MD), and a diet above commercially recommended levels, and European Union allowed levels offered as D 3 (HD). We hypothesized that a linear increase of D 3 supplementation levels will result in higher skeletal integrity, as measured by BBS, bone mineralization and better walking capacity, which will be linked with higher levels of serum 25-OH-D 3 . In addition, we expected that partial substitution of D 3 by 25-OH-D 3 treatment at commercial supplementation levels would affect measured variables to a higher degree as it has been shown to be metabolically more potent on a per unit basis than D 3 (Fritts et al., 2003). Linear effects of vitD supply in the form of D 3 were observed on the level of femur ash (%) and tibia ash (g) at the end of the growing period. However, contrary to our hypothesis, vitD supplementation levels above 5,000 IU/kg (HD) in the form of D 3 did not confer substantial benefits on tibia and femur mineralization in comparison to commercially used levels (MD). Partial substitution of D 3 by 25-OH-D 3 at commercial levels of supplementation leads to improved aspects of long bone mineralization. Nonetheless, the majority of dietary effects derived from the differences observed between 25MD and LD treatment groups, the latter Table 2. Main effect of genotype (Ross 308 or 708) and dietary treatments on body weight (BW), average daily feed intake (ADFI), and feed conversion ratio (FCR) over the starter (day 1 to 10), grower (day 11 to 24), and finisher period (day 25 to 38) and on gait score (GS) at day 37 of age. LS means of BW and ADFI are adjusted for BW at day 2 post-hatch (cvBW) which was used as a covariate. Table 5. Main effect of genotype (Ross 308 or 708) and dietary treatments (GLM), linear, quadratic and single degree of freedom contrasts on tibia calcium (Ca) and phosphorus (P) expressed as a proportion of bone ash, their ratio and serum levels of 25-OH-D3 at day 11, day 25, and day 39 post-hatch. a,b Means in each column with different superscripts are significantly different (P < 0.05). Abbreviations: LD (low level of D 3 ; 1,000 IU/kg), MD (medium level of D 3 ; 4,000 IU/kg), HD (high level of D 3 ; 7,000 IU/kg D 3 ), 25MD (medium level consisting of 1,000 D 3 +3,000 25-OH-D 3 IU/kg). leading to a state of marginal deficiency in regards to bone mineralization according to hypothesis. In the present study, performance was largely unaffected by level or source of vitD supplementation. Although, positive effects of vitD supplementation above NRC recommendations are common (Baker et al., 1998;Fritts and Waldroup, 2003;Whitehead et al., 2004;Rao et al., 2006), the range at which effects are observed and the nature of the parameters affected vary substantially amongst studies. Waldroup et al. (1965) were the first to illustrate that vitD requirements vary according to Ca and P dietary contents and their ratio. The absence of a significant effect on performance in our experiment, is possibly related to the adequate levels of Ca and P which were supplemented in the diets, as efficacy of vitD has been shown to be intrinsically related to their nutrient availability (Rennie et al., 1995;Ledwaba and Roberson, 2003;Whitehead et al., 2004;Rao et al., 2006Rao et al., , 2007Rao et al., , 2008. One would expect improved performance in the 25MD treatment during the starter period in comparison to the D 3 treatments owing to the relatively fat independent absorption of 25-OH-D3 as opposed to D 3 (Sitrin and Bengoa, 1987;Borel et al., 2015), and the limited ability of young chicks to digest and absorb fat (Tancharoenrat et al., 2013), but this was not supported by our findings. In the absence of effects of our dietary treatments on performance, femur, and to a lesser extent tibia, showed generally increased mineral deposition (ash, g) and increased mineralization (ash, %). These effects were ob-served at the end of the starter and grower period for femur, whilst effects only approached a tendency for tibia. This is not surprising as increased mineralization in the absence of effects on performance is commonly observed in response to supplementation with Ca, and to a lesser extend P, as well as with vitD (Bar et al., 2003;Angel et al., 2006). The majority of absorbed Ca and P is primarily required for bone formation (Adedokun and Adeola, 2013). An adequate level of dietary Ca is essential for the bone deposition and mineralization of P (Rousseau et al., 2012), whilst rapid bone growth, requires adequate Ca and P supply (Williams et al., 2000b). VitD regulates calcium and phosphorus metabolism mainly by enhancing intestinal calcium and phosphate absorption and renal reabsorption whilst it also stimulates osteoclast differentiation and calcium reabsorption from bone and promotes mineralization of the bone matrix (St-Arnaud, 2008;Bikle, 2012;Haussler et al., 2013). It has been estimated that during the initial stages of growth the load applied to the skeleton due to a quadratic increase in BW gain may be by as much as 32-fold imposing a requirement for rapid skeletal adaptation (Yair et al., 2012). The rate of growth and mineralization of the skeleton occurs most rapidly during the first 2 wk of age in the growing chick (Angel, 2007) and continues until 4 wk of age, whereas after this period tibias grow in length, width, and surface but this is not accompanied by increases in bone mineral density and bone mineral content (Talaty et al., 2009). The stronger responses to dietary supplementation with 25-OH-D 3 and to increasing dietary levels of D 3 at the end of the starter and grower periods on femur ash % are in accordance to the higher earlier mineralization rate of the femur as compared to tibia (Applegate and Lilburn, 2002) and are in agreement with the suggestion that it is a better marker to assess responses of dietary treatments on bone mineralization (Angel et al., 2006). Dietary effects on other measurements of skeletal integrity were less consistent; effects on BBS were significant in the grower period for the tibia but not on the femur. Surprisingly, there was an effect of diet on femur BBS at the end of the finisher period with 25MD diets having significantly higher BBS than both LD and MD dietary treatments. This is contrary to our expectation as efficacy of the metabolites should be reduced by the end of the finisher period as it has been previously demonstrated that vitD supplementation is more critical during the initial stages of growth for bone development (Whitehead et al., 2004). On the other hand, serum levels of 25-OH-D 3 were responsive to the dietary treatments at the end of the starter and grower stage, but absent by the end of the finisher stage. One could interpret that there was a state of adequacy at the later stages of growth, despite the observed skeletal effects later in the finisher period (Whitehead et al., 2004). However, it is contrary to conventional expectation as circulating levels of 25-OH-D 3 should increase more rapidly in response to 25-OH-D 3 supplementation over time (Yarger et al., 1995b). In the starter diets both 25MD and HD treatments had significantly higher values of 25-OH-D 3 than LD, suggesting that at least for the starter phase one could increase D 3 supplementation level above EU recommendations in order to improve vitD adequacy. Therefore, dietary treatment effects on ash weight as a proportion of BW at the end of the finisher period are more likely a consequence of higher levels of supplementation earlier in the starter and grower periods. In the present study vitD supplementation was associated with a lower GS in LD birds, in agreement with Sun et al., (2013). Contrary to these findings, Venalainen et al. (2006) did not find any effect of increasing dietary Ca and P content on the walking ability of broilers, although increased Ca and P supplementation affected tibia mineral deposition (Venalainen et al., 2006). Even if the incidence of tibial dyschondroplasia has been reduced in modern broiler populations (Kapell et al., 2012), it is possible that development of tibial dyschondroplasia may have been augmented by our LD treatments as vitamin D supplementation has been shown to reduce its prevalence (Elliot et al., 1995;Berry et al., 1996;Rennie et al., 1995;Zhang et al., 1997;Ledwaba et al., 2003). Furthermore, one cannot discard the protective role that 25-OH-D 3 has been shown to exert in the development of bacterial chondronecrosis, which is considered the most prevalent cause of lameness, in comparison to D 3 (Wideman et al., 2015). Therefore, a deterioration of GS in the LD treatment may be linked to impaired bone long bone development as indicated by the markers of mineralization, but also to the incidence and severity of skeletal disorders although these were not measured in the present study. Higher growth rates have been associated with altered Ca: P ratios in cortical bone (Williams et al., 2000a) and at the same time altered Ca: P ratio has been found in broiler bones suffering from leg pathologies (Thorp and Waddington, 1997) although their concentration as components of hydroxyapatite is considered to be stable at a molar ration of 2:1 (Field, 2000). It has been suggested that although bone ash % may be decreased by low Ca or P diets, the Ca concentration in ash still remains constant close to 37% whilst deviations from this rule reflect differences in bone preparation and analysis (Field, 2000). However, in the present study a reduced Ca percentage was observed at the end of the starter period in LD birds, which nonetheless was not associated with penalties on other markers of mineralization. More insight into the effects of vitD on Ca: P ratio in reference to bone development may have been gained had we focused on cortical bone samples rather than the ash of both trabecular and cortical bone. We hypothesized that reduced growth rates and feed efficiency early in the starter period and grower period would lead to reduced requirements of D 3 . When assessing effects of genotype on growth rate, it should be noted that birds performed significantly higher than reported in the performance objectives (3223 and 3434 vs. 2472 and 2599, respectively at day 38 of age; Aviagen 2014c,d). In the present study birds of the Ross 308 showed increased BW at the end of the starter, grower, and finisher periods according to expectations. This was not accompanied by increases in ADFI. According to the performance objectives Ross 708 should have displayed a higher initial FCR whilst differences between genotypes should be smaller at the end of the grower and finisher periods. In our study there was a tendency for a lower FCR of Ross 308 at the starter period whilst this difference was significant at the end of the grower period. In terms of their skeletal traits Ross 708 displayed longer bones at the end of the finisher period whilst femurs were wider. However, tibias carried less ash weight as a proportion of BW at the end of the grower and finisher periods. On the other hand, there was a strong tendency (P = 0.055) for tibia ash % to be higher at the end of the finisher period for Ross 708 indicating that at older ages bones of this genotype may be more mineralized. This genotype is maintained until a later age and a higher degree of mineralization may have resulted in better skeletal integrity at later stages of growth. Despite observed differences in ash values, BBS, which is indicative of the load bearing capacity of the long bones (Rath et al., 2000) was ultimately similar for both strains. The absence of genotype and diet interactions indicates that subtle differences in growth rate and FCR did not influence the requirements of the 2 genotypes in vitD. Previous research comparing the 2 genotypes has similarly shown that their requirements for P are not significantly different (Persia and Saylor., 2006). Although the GS of Ross 708 was inferior to that of Ross 308 broilers, the difference between the 2 genotypes was relatively small (1.97 vs. 2.23) and likely reflects different selection objectives of the 2 genotypes as Ross 708 is more heavily selected for breast muscle. According to performance objectives at the slaughter weight achieved at the end of this study, Ross 708 has 1.7% of additional breast muscle in comparison to the Ross 308 genotype whilst the weight of thigh is similar and that of drumstick is reduced as a proportion of the eviscerated carcass. It has been suggested that the rapid growth of breast muscle moves the center of gravity forward (Corr et al., 2003b). In a field evaluation trial, birds of the same commercial line with a higher GS, displayed differences in skeletal conformity traits related to breast development such as different breast angle (Skinner-Noble and Teeter, 2009). Finally, studies assessing the effects of selection on locomotion have clearly illustrated that differences in breast muscle conformity traits have pronounced effects on locomotion patterns and gait dynamics (Paxton et al., 2013(Paxton et al., , 2014. It is noteworthy that GS is a subjective method of evaluation of the bird's walking capacity and gait patterns of the 2 genotypes were substantially different; the Ross 708 has a wider stride which is in line with the prevalence of more breast muscle. As a result it walks differently and tends to get a higher GS (Sakkas, personal observation). In conclusion, although dietary treatment effects on performance were absent, offering a diet which included 25-OH-D 3 led to consistent improvements in bone mineralization. These were seen as improvements in femur and tibia ash content (per unit of BW) and BBS. The effects were more obvious at the end of the grower period (∼25 d of age) rather than at the end of starter and finisher periods. The effects of the high vitamin D 3 on bone mineralization were less consistent and not statistically different from either the effects of MD or 25MD. It is possible that this was due to the statistical power of our experiment. Our results do not suggest that current maximum legal limits of dietary vitD inclusion need to be re-evaluated.
v3-fos-license
2021-04-23T05:14:53.550Z
2019-10-09T00:00:00.000
233346383
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.7554/elife.69094", "pdf_hash": "d05e907bcbf618e5c340af4bac036955e2c80263", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46793", "s2fieldsofstudy": [ "Biology", "Physics" ], "sha1": "40ef11cd26bf09fbfecf156be50891b1ff124378", "year": 2022 }
pes2o/s2orc
Multiphoton imaging of neural structure and activity in Drosophila through the intact cuticle We developed a multiphoton imaging method to capture neural structure and activity in behaving flies through the intact cuticle. Our measurements showed that the fly head cuticle has surprisingly high transmission at wavelengths >900nm, and the difficulty of through-cuticle imaging is due to the air sacs and/or fat tissue underneath the head cuticle. By compressing or removing the air sacs, we performed multiphoton imaging of the fly brain through the intact cuticle. Our anatomical and functional imaging results show that 2- and 3-photon imaging are comparable in superficial regions such as the mushroom body, but 3-photon imaging is superior in deeper regions such as the central complex and beyond. We further demonstrated 2-photon through-cuticle functional imaging of odor-evoked calcium responses from the mushroom body γ-lobes in behaving flies short term and long term. The through-cuticle imaging method developed here extends the time limits of in vivo imaging in flies and opens new ways to capture neural structure and activity from the fly brain. Introduction Animal nervous systems across lineages have evolved to solve many of the same problems; foraging for food and water, finding mates to reproduce, and avoiding predators to stay alive. They navigate their environment via coordinated movements and learn and remember the relative values of sensory stimuli around them to maximize their fitness and survival. At each instant in time, an animal must evaluate external sensory information and its current behavioral state to decide what to do next (Dickson, 2008;Hunt and Hayden, 2017;Lütcke et al., 2013;Tinbergen, 1969). A major technological challenge to revealing how the brain encodes behavioral states in real-time is that even the simplest neural computation involves interactions across the nervous system at various time scales, while our tools for assessing neural activity are restricted in time and space because of the currently available imaging sensors, methods, and preparations (Lerner et al., 2016). Optical methods remain the most established and fruitful path for revealing population dynamics in neural circuits at long time scales (ranging from minutes to hours) by providing high temporal and spatial resolution measurements (Ji et al., 2016;Luo et al., 2018;Svoboda and Yasuda, 2006). The fly, Drosophila melanogaster, offers an ideal experimental system to investigate neural correlates of behavioral states and decisions because of its compact nervous system and diverse statedependent behaviors that it executes in response to sensory stimuli (Barron et al., 2015;Dickson, 2008). To understand how molecularly defined neural circuits evaluate sensory information in different behavioral states, it is critical to capture the activity of populations of neurons over long time scales as flies are changing their physiological needs (Luo et al., 2018;Simpson and Looger, 2018). These functional imaging experiments require imaging preparations, which should allow chronic neural activity imaging for at least 12 hr. Current methods used in fly optical physiology require the fly head cuticle, trachea, and fat body to be removed by microsurgery to provide optical access to the nervous system (Grover et al., 2016;Minocci et al., 2013;Seelig et al., 2010;Sinha et al., 2013;Wang et al., 2003). These preparations are limited in imaging duration because, after some time, the brain tissue starts to degenerate due to damaged circulation resulting from the cuticle removal surgery. For example, with current imaging preparations, fly olfactory neurons show reliable Ca 2+ responses for four to five hours after surgery (Wang et al., 2003). An imaging method in which the head cuticle is intact, thereby eliminating the need for traumatic head surgery before functional imaging, is essential for advancing fly neuroscience research in the direction of chronic recordings of neural activity during ongoing behaviors. This includes being able to image the same fly brain across multiple days. In mice, multiday imaging experiments are achieved by implanting a cranial window following removal of part of the skull (Hefendehl et al., 2012;Trachtenberg et al., 2002). Similar imaging preparations have been developed for flies (Grover et al., 2016;Huang et al., 2018;Sinha et al., 2013). However, because imaging window implantation requires a tedious surgery with low success rates and complications that occur afterwards, these methods are not commonly used. A recent development in multiphoton imaging is the use of long-wavelength lasers in 3-photon (3 P) microscopy which improves signal-to-background ratio (SBR) by several orders of magnitude compared to current 2-photon (2 P) imaging methods (Horton et al., 2013;Wang et al., 2018a;Wang et al., 2018b). While 3 P microscopy with 1700 nm excitation of red fluorophores and adaptive optics has shown promising results in imaging the fly brain through the cuticle (Tao et al., 2017), it is not clear if the technique is widely applicable to common blue and green fluorophores with much shorter excitation wavelengths (e.g. 1320 nm). Here, we developed a method for imaging fly neural structure and activity through the intact head cuticle using both 2P and 3P microscopy. We first measured the ballistic and total optical transmission through the dorsal fly head cuticle and surprisingly found that the head cuticle has high transmission at the wavelengths that are used to excite green fluorophores in 2P and 3P microscopy (~920 nm and ~1320 nm, respectively). We showed that the tissue that interferes with the laser light and limits imaging through the cuticle into the brain is not the head cuticle but the air sacs and the tissue underneath the cuticle. Next, we developed fly preparations by either compressing the air sacs or removing them from the imaging window, allowing through-cuticle imaging of the fly brain. Using these imaging preparations, we performed deep, high spatial resolution imaging of the fly brain and determined the attenuation length for imaging through the cuticle with 2P (920 nm) and 3P (1320 nm) excitation and compared our results to cuticle-removed preparations. Our measurements showed that 2P and 3P excitation performed similarly in shallow regions (i.e. in the mushroom body) of the fly brain, but 3P excitation at 1320 nm was superior for imaging neural activity and anatomical features in deeper brain structures (i.e. in the central complex). Furthermore, using 2P and 3P excitation, we recorded food odor-evoked neural responses from the Kenyon cells comprising the mushroom body γ-lobes using a genetically encoded Ca 2+ indicator, GCaMP6s (Chen et al., 2013). In our simultaneous 2P and 3P functional imaging experiments, we found no differences between 2P and 3P excitation, while recording odor-evoked responses from the mushroom body γ-lobes through the cuticle. To demonstrate that our cuticle-intact imaging method can be used for recording neural activity in behaving flies, we used 2P excitation and captured odor-evoked neural responses from mushroom body γ-lobes in flies walking on an air-suspended spherical treadmill. Finally, we demonstrated long-term functional imaging by reliably capturing odor-evoked neural responses from γ-lobes with 2P excitation for 12 consecutive hours. The cuticle-intact imaging method developed here allows multiphoton imaging of the fly brain through the head cuticle opening new ways to capture neural structure and activity from the fly brain at long time scales and potentially through the entire lifespan of flies. Fly head cuticle transmits long-wavelength light with high efficiency To develop a cuticle-intact imaging method using multiphoton microscopy, we first measured light transmission at different wavelengths through the fly head cuticle. Previous experiments showed that, within the wavelength range of 350-1000 nm, the relative transmission of the dorsal head cuticle of D. melanogaster improves with increasing wavelengths (Lin et al., 2015). However, the absolute transmission, which is critical for assessing the practicality of through-cuticle imaging, was not reported. In our experiments, we quantified both the total and ballistic transmission of infrared (IR) laser lights through the cuticle using the setup from our previous work (Mok et al., 2021). Dissected head cuticle samples were mounted between two glass coverslips and placed in the beam path between the laser source and the photodetector ( Figure 1A). The total and ballistic transmission through the cuticle samples were measured using a custom-built system ( Figure 1B). For ballistic transmission, light from a single-mode fiber was magnified and focused on the cuticle with a ~25 µm spot size. Figure 1C illustrates the light path of ballistic transmission experiments. The sample stage was translated to obtain measurements at different locations on the head cuticle. Ballistic transmission through the cuticle was measured at seven different wavelengths (852 nm, 911 nm, 980 nm, 1056 nm, 1300 nm, 1552 nm, 1624 nm) that match the excitation wavelengths for typical 2 P and 3 P imaging. We found that for all the IR wavelengths tested, the ballistic transmission through the cuticle was high, reaching >90% at 1300 nm ( Figure 1D, Figure 1-source data 1). Since fluorescence signal within the focal volume in 2P and 3P microscopy is mostly generated by the ballistic photons (Dong et al., 2003;Horton et al., 2013), our results showed that ballistic photon attenuation by the fly cuticle does not limit multiphoton imaging through the intact cuticle. To assess the absorption properties, we measured the total transmission through the head cuticle. For these measurements, laser light from a single mode fiber was magnified and focused on the cuticle sample with a ~50 µm spot size (Mok et al., 2021). An integrating sphere (IS) was placed immediately after the cuticle to measure the total transmission. Figure 1F illustrates the light path of total transmission experiments. Total transmission through the cuticle was measured at nine different wavelengths (514 nm, 630 nm, 852 nm, 911 nm, 980 nm, 1056 nm, 1300 nm, 1552 nm, 1624 nm). The shorter wavelengths of 514 nm and 630 nm were chosen to match the typical fluorescence emission wavelengths of green and red fluorophores. Similar to the ballistic transmission experiments, we found that the total transmission generally increased with wavelength ( Figure 1G, Figure 1-source data 1), and the total transmission for both the green and red wavelengths was sufficiently high ( >60%) for practical epi-fluorescence imaging using 2 P or 3 P excitation. We also scanned the cuticle with a motorized stage in the setup at selected wavelengths ( Figure 1E and H), and these spatially resolved transmission maps confirmed that there are only a few localized regions at the periphery of the cuticle with low transmission. Our results demonstrated that absorption and scattering of longwavelength light by the Drosophila head cuticle is small, and cuticle-intact in vivo imaging of green (e.g. green fluorescent protein (GFP) and GCaMP) and red fluorophores (e.g. red fluorescent protein (RFP), and RCaMP) through the intact cuticle is possible in adult flies using 2 P or 3 P excitation. Through-cuticle multiphoton imaging of the fly brain Based on our cuticle transmission results, we developed a cuticle-intact imaging method where we either used head compression to minimize the volume of the air sacs ( Figure 2A, Video 1) or removed them completely from the head capsule ( Figure 2-figure supplement 1B-D). Using our new fly preparations, we imaged the fly brain through the cuticle with no head compression, semi compression, or full compression ( Figure 2B-D). We expressed membrane-targeted GFP (CD8-GFP) selectively in mushroom body Kenyon cells and scanned the fly brain through the cuticle using 2 P and 3 P excitation at 920 nm and 1320 nm, respectively ( Figure 2E-G). Kenyon cells are the primary intrinsic neurons in the insect mushroom body. Diverse subtypes of Kenyon cells (n = ~2200) extend their axons along the pedunculus and in the dorsal and medial lobes (Crittenden et al., 1998;Ito et al., 1998;Strausfeld et al., 1998). These neurons receive and integrate information from heterogeneous The online version of this article includes the following source data for figure 1: Source data 1. Source data for plots Figure 1D and G. sets of projection neurons which carry olfactory, gustatory, and visual sensory information (Owald and Waddell, 2015;Yagi et al., 2016). Kenyon cell dendrites arborize in the calyx, while their axons fasciculate into anatomically distinct structures called lobes, with the dorsal lobes forming α and α′ branches, and the medial lobes containing β, β′, and γ branches (Crittenden et al., 1998;Ito et al., 1998;Zheng et al., 2018). We used transgenic flies that specifically expressed membranetargeted GFP in Kenyon cells forming α, β, and γ lobes (Krashes et al., 2007). In noncompressed flies, the mushroom body lobes were barely visible in both 2 P and 3 P imaged flies. Compressing the head against the cover glass with forceps during the curing process drastically improved image quality ( Figure 2E-G), mushroom body lobes were visible in the semi compressed and full compressed preparations in both 2 P and 3 P imaged flies. Based on our observations of the leg movements, flies behaved similarly with no head compression or in semi compressed preparations but not in full compression. We also tested the male courtship behavior of flies whose heads were previously semi compressed. Our results showed that semi head compression does not affect male courtship behavior grossly; head-compressed males are able to copulate with females at similar rates as control males ( Figure 2-figure supplement 1A). Based on our imaging and behavior results, we decided to use the semi compressed preparation in our experiments. Full-Compression Why does head compression improve image quality during 2 P and 3 P imaging? We hypothesized, head compression might reduce the volume of air sacs and the surrounding tissue between the cuticle and the brain, allowing better transmission of long wavelength laser light through these structures. To test our hypothesis, we surgically removed air sacs from one side of the fly head and imaged the brain using 2 P and 3 P excitation without any head compression. As predicted, we were able to image the mushroom body lobes on the side where air sacs were removed but not on the side where intact air sacs were present (Figure 2-figure supplement 1B-D, Video 2). Our results demonstrated that the tissue that interferes with 2 P and 3 P laser light is not the cuticle itself but the air sacs and other tissues that are between the head cuticle and the brain. Comparison of 2P and 3P excitation for deep brain imaging through the fly head cuticle Our experiments showed that through-cuticle imaging is possible with both 2 P and 3 P excitation. In general, 3 P excitation requires higher pulse energy at the focal plane compared to 2 P excitation because of the higher-order nonlinearity. On the other hand, longer wavelength (1320 nm) used for 3 P excitation can experience less attenuation while traveling in the brain tissue leading to increased tissue penetrance and imaging depth (Wang et al., 2018a). To compare the performance of 2 P and 3 P excitation for throughcuticle imaging, we imaged the entire brain in a fly expressing membrane-targeted GFP pan neuronally. Figure 3A shows the images from the same fly brain at different depths obtained with 2 P (920 nm) and 3 P (1320 nm) excitation. At the superficial brain areas such as the mushroom bodies, 2 P and 3 P excitation performed similarly. As we imaged deeper in the brain, 3 P excitation generated images with higher contrast compared to 2 P excitation and was capable of imaging brain regions below the esophagus. We further quantified the effective attenuation length (EAL) for 2 P and 3 P excitation, and we found EAL 920nm = 41.7 µm, EAL 1320nm = 59.4 µm within depth 1-100 µm, and EAL 1320nm = 91.7 µm within depth 100-180 µm ( Figure 3B, Figure 3-source data 1). The third harmonic generation (THG) signal from the head cuticle and the trachea was also measured as a function of depth. THG signal can be used to measure the EAL (EAL THG ) (Yildirim et al., 2019). The EAL THG within the cuticle was much larger than the EAL THG inside the brain, once again demonstrating the high ballistic transmission of the 1320 nm laser light through the head cuticle ( Figure 3C, Figure 3-source data 1). The full width at half maximum (FWHM) of the lateral brightness distribution at 200 µm below the surface of the cuticle was ~1.4 µm for tracheal branches captured by the THG signal ( Figure 3D). Similarity in the attenuation lengths of THG and 3 P fluorescence signal indicates that the labeling of membrane-targeted GFP is uniform across the brain, validating the use of the fluorescence signal when quantifying the EALs. Cuticle-removed preparations are widely used in the fly neuroscience imaging studies (Seelig et al., 2010;Simpson and Looger, 2018;Wang et al., 2003). To directly compare the spatial resolution of cuticle-intact and cuticle-removed imaging preparations, we imaged the entire brain in flies expressing membrane-targeted GFP pan neuronally. We found that in the superficial layers of the fly brain (i.e. ~50 µm), through-cuticle 2 P and 3 P imaging generated images with similar signalto-background ratio (SBR) to cuticle-removed preparation (Figure 3-figure supplement 1A, B). We were able to distinguish the mushroom body and the central complex neuropils clearly in both imaging preparations. 3 P generated images with better SBR compared to 2 P in both cuticle-intact and cuticle-removed preparations. We measured the EAL for both cuticle-removed and cuticleintact 2 P/3 P imaging and found that removing the cuticle and underlying tissues increased the EAL by ~1.5× (Figure 3-figure supplement 1C, D, Figure 3-figure supplement 1-source data 1). As the imaging depth increases, the image contrast decreases. The degradation of the image contrast in both 2 P and 3 P images is manifested by the change of the slope (Akbari et al., 2021;LaViolette and Xu, 2021) in the semilog plot of fluorescence signal versus depth (Figure 3figure supplement 1C, D). Within ~100 μm depth, 2 P imaging provided reasonable contrast resolution measurement in the THG image captured at 200 µm depth. Lateral intensity profile measured along the white line (indicated by the orange arrow) is fitted by a Gaussian profile for the lateral resolution estimation (scale bar = 50 µm). (E) Cross-section images of the central complex (CC) ring neurons through the cuticle with 1320 nm 3P excitation (green). THG imaging visualizes the tracheal arbors (yellow). Arrows indicate different CC compartments that are identified (scale bars = 30 µm). (F-G) Lateral resolution measurements in 3P images captured at 56 µm depth. (F) The GFP fluorescence profile of CC ring neurons (green) and (G) the THG profile of surrounding trachea (yellow). Lateral intensity profiles measured along the white lines are fitted by Gaussian profiles for the lateral resolution estimation (scale bars = 20 µm). The online version of this article includes the following source data and figure supplement(s) for figure 3: Source data 1. Source data for plots Figure 3B and C. when imaging through intact cuticle. While imaging is still possible with 3 P at the subesophageal zone ( >150 μm), it shows an increase in background that degrades image contrast when imaging beyond the esophagus (~150 μm). When cuticle was removed, 2 P imaging depth increased to ~180 μm, and 3P imaging depth increased to ~300 μm, reaching to the bottom of the fly brain. We further quantified and compared the laser power required to obtain the same fluorescence signal of 0.1 photon per laser pulse. For cuticle-removed fly, 3 P requires 1.4 nJ and 2 P requires 0.2 nJ on the brain surface to image the mushroom body. For cuticle-intact fly, 3 P requires 3.0 nJ and 2 P requires 0.5 nJ on the cuticle surface to image the mushroom body. 2P/3P imaging of mushroom body and central complex neurons through the fly head cuticle To further test the performance of through-cuticle imaging with 2 P and 3 P excitation, we imaged the central complex ellipsoid body ring neurons. The insect central complex is a brain neuropil which processes sensory information and guides a diverse set of behavioral responses (Pfeiffer and Homberg, 2014;Seelig and Jayaraman, 2015;Wolff et al., 2015). It is composed of anatomically distinct compartments: the protocerebral bridge, ellipsoid body, fan-shaped body, and the noduli (Wolff et al., 2015). The ellipsoid body consists of a group of neurons, the ring neurons, that extend their axons to the midline forming a ring-like structure (Pfeiffer and Homberg, 2014;Wolff et al., 2015;Xie et al., 2017). Using an ellipsoid body-specific promoter, we expressed a membranetargeted GFP in the ring neurons and imaged them with 2 P and 3 P excitation. Compared to 3 P ( Figure 3E, Video 3), the resolution and contrast of images taken by 2 P was reduced when imaging through the cuticle at this depth (Figure 3-figure supplement 2E). Using ring neuron arbors and tracheal branches, we estimated the lateral resolution of the 3 P images. The FWHM of the lateral brightness distribution measured by a ring neuron's neurite cross-section was ~1.2 µm for the fluorescent signal ( Figure 3F) and ~0.8 µm for tracheal branches captured by the THG signal ( Figure 3G). Next, we investigated whether cellular and subcellular resolution is achievable using the cuticleintact imaging preparation, and compared our results to cuticle-removed 2 P and 3 P imaging. For these experiments, we imaged the Kenyon cells and the ellipsoid body ring neurons expressing a membrane-targeted GFP. Our data showed that Kenyon cell bodies were visible with cuticle-intact 2 P and 3 P imaging ( We also investigated imaging stability during through-cuticle imaging by tracking ellipsoid body cell bodies in flies walking on a ball (Video 4). We did not detect major changes in the fluorescence intensity during walking. The average motion measured was 1.3 µm, which is much smaller than the size of a fly neuron (~5 µm) (Figure 3-figure supplement 4. Figure 3figure supplement 4-source data 1Figure 3figure supplement 4-source data 1). Based on our results, we concluded that motion is not an issue during through-cuticle imaging at depths we have investigated. Together our results demonstrate that although both 2 P and 3 P excitation can be used for through-cuticle imaging at the superficial layers of the fly brain such as the mushroom body, 3 P outperforms 2 P in deeper brain regions such as the central complex especially when cellular and subcellular resolutions are necessary. This conclusion is consistent with the imaging studies conducted in the mouse brain (Mok et al., 2019;Wang et al., 2018a;Wang et al., 2020). 2P and 3P through-cuticle imaging does not induce heating damage to the fly brain tissue The recommended power level for 2 P imaging of fly neural activity with cuticle-removed preparations is ~15 mW (Seelig et al., 2010). However, it is not known what the safe power levels for 2 P and 3 P imaging are, when imaging through the cuticle. 3 P excitation can induce heating in the mouse brain at high laser powers (Wang et al., 2018a;Wang et al., 2018b;Wang et al., 2020). Therefore, we measured how heat generated by 2 P and 3 P excitation impacts the fly brain using HSP70 protein as a marker for cellular stress response (Lindquist, 1980;Podgorski and Ranganathan, 2016). We first tested whether HSP70 protein levels reflect heat-induced stress in the fly brain. Flies that were kept at room temperature had low levels of the HSP70 protein ( Figure 4-figure supplement 1A). In contrast, placing flies in a 30°C incubator for 10 minutes caused a significant increase in HSP70 protein levels across the fly brain ( Figure 4-figure supplement 1B). Next, we tested whether 2 P and 3 P excitation causes an elevation in HSP70 protein levels when imaging through the cuticle. Head-fixed flies in a semi compressed imaging preparation were imaged either with 3 P or 2 P excitation. Our results showed that there was no measurable heat-stress response detected by the HSP70 protein levels when flies were exposed to 2 P (920 nm) and 3 P (1320 nm) excitation at 15 mW for 24 min (four 6-min intervals, see methods for details) (Figure 4-figure supplement 1C-E). However, increasing laser power to 25 mW for 3 P caused a significant increase in HSP70 protein levels in the fly brain ( Figure 4-figure supplement 1). These results suggest that 2 P and 3 P cuticle-intact imaging is safe at power levels below 15 mW, similar to power levels used for 2 P cuticle-removed imaging. Whole brain 2P and 3P imaging in response to electrical stimulation Encouraged by our structural imaging results, we next tested the applicability of 2 P and 3 P microscopy to capture neural activity in the entire fly brain through the intact head cuticle. In these experiments, we used a mild electric shock (1 s, ~ 5 V) and recorded neural activity in flies expressing GCaMP6s pan neuronally. We imaged the entire fly brain using the cuticle-intact and cuticle-removed imaging preparations with 2 P and 3 P. As expected, electrical stimulation generated a neural response in all the region of interests (ROIs) recorded across different depths of the fly brain. 3 P cuticle-removed preparation allowed us to image down to ~250 µm deep ( Figure 4B), while the depth limit for 3 P cuticle-intact imaging was ~120 µm ( Figure 4A). Additionally, we found that dF/F o for 3 P cuticleremoved imaging preparation was between 0.2 and 0.7 and for cuticle-intact imaging preparation it was between 0.2 and 0.5 ( Figure 4A and B). We repeated the depth and dF/F o analysis for 2 P cuticle-intact and cuticle removed imaging. The depth limit for 2 P functional imaging through the cuticle was ~65 µm, while cuticle-removed imaging allowed optical access to ~120 µm ( Figure 4C and D, Figure 4-source data 1). The dF/F o for 2 P cuticle-removed imaging preparation ranged between 0.2 and 0.4, while in the cuticle-intact preparation it was between 0.2 and 0.3. These results suggested that the presence of the cuticle and the underlying tissue decreases 2 P and 3 P functional imaging depth in the brain by ~2× and reduces dF/F o . Similar to structural imaging, 3 P outperforms 2 P at deeper regions of the fly brain when recording neural activity in both cuticle-intact and cuticleremoved imaging preparations. Simultaneous 2P and 3P imaging of odor responses from mushroom body γ-lobes We next recorded neural responses in the fly brain through the intact cuticle using a more natural stimulus, food odor. In these experiments, a custom odor delivery system was used where flies were head fixed and standing on a polymer ball under the microscope ( Figure 5A and B). We expressed GCaMP6s in the mushroom body Kenyon cells and stimulated the fly antenna with the food odor apple cider vinegar ( Figure 5C). Using a multiphoton microscope, odor-evoked Ca 2+ responses of mushroom body γ-lobes were simultaneously captured with 2 P (920 nm) and 3 P (1320 nm) excitation using the temporal multiplexing technique (Ouzounov et al., 2017). A brief 3 second odor stimulus triggered a robust fluorescence increase in the mushroom body γ-lobes ( Figure 5F). Based on dopaminergic innervation, γ-lobes can be subdivided into five anatomical compartments (Cohn Figure 5D and E). To investigate whether food odor is represented by different spatiotemporal patterns in the γ-lobe compartments, we calculated the normalized fluorescence signal for each compartment. No significant differences were observed in neural activity in responses to food odor stimulation across different compartments of the γ-lobes or between 2 P and 3 P excitation of GCaMP6s ( Figure 5G-I, Figure 5-figure supplement 1, Figure 5-source data 1). We also recorded neural activity from Kenyon cell bodies in response to olfactory stimulation using 3 P excitation ( Figure 5-figure supplement 2). Our data demonstrated that both 2 P and 3 P excitation can be used to image odor responses from mushroom body γ-lobes using through-cuticle imaging but for cell body imaging 3 P excitation is preferred. 2P through-cuticle imaging captures odor-evoked responses in behaving flies To investigate how head compression impacts fly behavior and neural activity, we investigated how flies that are head compressed but allowed to walk on a spherical treadmill respond to an odor stimulation. Using our custom behavior/imaging setup ( Figure 6A), we stimulated the fly antennae with food odor (apple cider vinegar), while recording neural activity from the mushroom body γ-lobes using 2 P excitation (920 nm) through the head cuticle. In these experiments, we also captured fly's behavioral responses using a camera that is synchronized with the 2 P microscope. A head-fixed fly was continuously exposed to a low speed air flow before and after the 3 s odor stimulus with the same air flow speed, and the behavioral responses of flies were captured by tracking the spherical treadmill motion using the FicTrac software during each trial (Video 5). Because internal states impact behavioral responses to food odors (Lin et al., 2019;Sayin et al., 2019), we used flies that are 24-hr food deprived. Previous studies have demonstrated that during food odor exposure, hungry flies increase their walking speed, orient, and walk toward the odor stimulus. After odor stimulation, however, flies increase their turning rate which resembles local search behavior. The odor offset responses persist for multiple seconds after the odor exposure (Álvarez-Salvado et al., 2018;Sayin et al., 2019). In our experiments with semi head-compressed flies, flies increase their turning rate upon brief stimulation with food odor apple cider vinegar ( Figure 6B, Figure 6-source data 1). During these experiments, we were able to capture odor-evoked neural responses from all mushroom body γ-lobe compartments reliably ( Figure 6C, Figure 6-source data 1). We further analyzed the odor-evoked changes in fly walking behavior and showed that after the brief exposure to food odor stimulus, flies increased their forward walking speed and turning rate ( Figure 6E-G, Figure 2). These responses lasted for multiple seconds ( Figure 6H-K). Moreover, statistical analysis showed that there is a significant difference between the average forward and rotational speed values before and after the food odor exposure ( Figure 6I and K, Figure 2). Our results are in agreement with previous studies that quantified odor-induced changes in walking behavior in headfixed flies (Sayin et al., 2019). Altogether, these results indicate that head-compressed flies in our spherical treadmill setup can walk and exhibit behavioral and neural responses to odor stimulation. 2P through-cuticle imaging captures chronic odor-evoked responses Studying how neural circuits change activity during learning or in alternating behavioral states requires chronic imaging methods that permit recording neural activity over long time scales. Leveraging our preparation, we pushed the limits of functional imaging of the fly brain in response to food odor stimulation at longer time scales (12 hr). Using a custom odor delivery system, we stimulated the fly antenna with food odor (apple cider vinegar) every 4 hr while imaging through the head cuticle using 2 P excitation (920 nm) ( Figure 7A, Video 6). We calculated the normalized peak fluorescent signal Activity traces within the ROIs enclosed by dotted white lines are shown on the right. Gray lines show the traces of individual stimulations and green lines show the traces of an average of three stimulations. Images were captured at 256 × 128 pixels/frame and 6.5 Hz frame rate for 3P and 113 Hz frame rate for 2P. 3P and 2P data were averaged to 1.1 Hz effective sampling rate for plotting. The online version of this article includes the following source data and figure supplement(s) for figure 4: Source data 1. Source data for plots for Figure 4A-D. Figure 7-source data 4). Our analysis showed that the odor-evoked neural responses did not change with food and water deprivation in any of the γ-lobe compartments imaged (Figure 7, Figure 7-source data 1). During these long-term imaging experiments, we captured the fly's behavior in parallel with the odor stimulation to assure that the fly stayed alive during long-term imaging (Video 6). These results suggest that the cuticle-intact imaging method developed here allows recording of neural activity within an individual fly over long time scales (12 hr), which was previously not possible with commonly used cuticle-removed imaging preparations. Discussion Imaging through the fly cuticle was considered to be not feasible at the wavelengths typically used for 2 P ( ~ 920 nm) and 3 P ( ~ 1300 nm) imaging because of concerns about cuticle absorption (Lin et al., 2015;Tao et al., 2017). By quantitatively measuring the optical properties of the fly cuticle at wavelengths that correspond to 2 P and 3 P imaging, we discovered that fly cuticle transmits long wavelength light with surprisingly high efficiency (Figure 1). We found that it is not the absorption by the cuticle but rather the opacity of the air sacs and the tissues located between the head cuticle and the brain that limit the penetration depth of multiphoton imaging (Video 2). By compressing the fly head using a glass coverslip, we reduced the volume of the air sacs between the cuticle and the brain, which increases the transmission of laser light and therefore allows high resolution imaging of the fly brain through the intact cuticle ( Figure 2). Careful assessments showed that semi head compression does not cause measurable differences in fly courtship (Figure 2-figure supplement 1A), or olfactory behaviors ( Figure 6). Our results clearly demonstrate that long excitation wavelength (e.g. ~1700 nm) is not necessary for imaging the fly brain through the cuticle and our fly preparations enable cuticleintact 2 P and 3 P imaging of common fluorophores (e.g. GFP and GCaMPs) at 920 nm and 1320 nm, respectively (Figures 2-7). While we did not see noticeable differences in the recorded activity traces when performing simultaneous 2 P and 3 P functional imaging of the mushroom body, 3 P imaging has a better SBR than 2 P imaging in deeper regions of the fly brain such as the central complex. Investigating how physiological states, sleep, and learning change the function of neural circuits requires tracking the activity of molecularly defined sets of neurons over long time scales. These experiments require long-term imaging methods to record neural activity in vivo. The through-cuticle imaging method developed here significantly extends the time frame of current in vivo imaging preparations used for anatomical and functional studies in fly neuroscience. Our imaging method will allow researchers to capture the activity of neural populations during changing behavioral states; facilitate decoding of neural plasticity during memory formation; and might permit observation of changes in brain structures during development and aging. Our first demonstration of long-term functional imaging of the fly brain captures food odor responses from mushroom body γ-lobes for up to 12 hr continuously. Our results suggest that odor-evoked Ca 2+ responses did not change during the repeated odor stimulation. Even longer imaging time is possible by feeding flies under the microscope. We performed 2 P imaging for demonstrating the possibility of long-term recording of neural activity because conventional 2 P microscopy has adequate penetration depth for imaging the behavioral responses within the mushroom body, and 2 P microscopy is widely used by the fly neuroscience community. On the other hand, our deep functional imaging data (Figure 4) showed that combination Odor-evoked responses of Kenyon cells captured by 2P excitation at 920 nm and (H) 3P excitation at 1320 nm. (I) Comparison of the average responses captured by simultaneous 2P and 3P imaging over time (n = 3 flies, 4-5 trials per fly, data are presented as mean ± SEM in (G) and (H), gray bar indicates when stimulus is present). Average laser powers are 5 mW at 920 nm and 4 mW at 1320 nm. Images were captured at 160 × 165 pixels/frame and 13.2 Hz frame rate. 2P and 3P data were averaged to 6.8 Hz effective sampling rate for plotting. The online version of this article includes the following source data and figure supplement(s) for figure 5: Source data 1. Source data for plots Figure 5G-I. of our cuticle-intact fly preparation and 3P imaging may provide the exciting possibility of long-term imaging in deeper regions of the fly brain such as the central complex. We note that the success rate of chronic imaging experiments was ~50% because of the drift in the axial position of the brain when imaging for long periods of time. Further optimizations might improve the success rate of chronic imaging. Our focus here was to develop cuticle-intact in vivo structural and functional imaging methods that can extend imaging quality and length for the fly brain. We anticipate that there will be a wide variety of uses for this technology in Drosophila neuroscience research. speed of flies on the spherical treadmill. (E-G) Representative plots for a single fly during the odor stimulation experiments showing rotational speed (Sr) (E), forward speed (Sf) (F) as a function of time, and the total calculated 2D fictive path (G). (H-K) Summary heatmap plots and statistical comparison for rotational (H-I) and forward speed (J-K) 5 s before and after the odor stimulation (n = 3 flies, 3 trials per fly, paired-two tail t-test, p=0.0225). Average laser power at 920 nm is <10 mW. Images were captured at 256 × 128 pixels/frame and 17 Hz frame rate. 2P data were averaged to 5 Hz effective sampling rate for plotting. The online version of this article includes the following source data for figure 6: Source data 1. Source data for plots Figure 6B and C. Source data 2. Source data for plots Figure 6E-K. Figure 6 continued Key resources Optical transmission measurements of the fly head cuticle The measurement setup and procedures are similar to our previous work (Mok et al., 2021). Drosophila cuticle was dissected from the dorsal head capsule of flies that are age and gender controlled (male, 5 days old). The dissected cuticle was sandwiched between two #1 coverslips (VWR #1 16004-094) with ~10 µL of UV curable resin (Bondic UV glue #SK8024) to avoid dehydration of the sample ( Figure 1A). Measurements from each dissected cuticle was done within a day. The first several measurements were repeated at the end of all measurements to ensure that dehydration or protein degradation, which may affect the optical properties of the tissue, did not happen as the experiment progressed. The total transmission and ballistic transmission of cuticle samples were measured using a custom-built device ( Figure 1B). For ballistic transmission experiments, light from a single-mode fiber was magnified and focused on the cuticle with a ~25 µm spot size. We assume that collimated light passes through the sample since the Rayleigh range for a 25 µm (1/e 2 ) focus spot is approximately 0.8-1.3 mm in water (refractive index 1.33) for wavelengths between 852 nm and 1624 nm, which is much larger than the thickness of the entire coverslip sandwich preparation ( <400 µm). The transmitted light from the cuticle was then coupled to another single-mode fiber with identical focusing optics and detected with a power meter (S146C, Thorlabs). Such a confocal setup ensures that only the ballistic transmission is measured. The incident power is ~10 mW on the cuticle for each measurement. An InGaAs camera (WiDy SWIR 640 U-S, NiT) and a CMOS camera (DCC1645C, Thorlabs) were used to image the sample and incident beam to ensure that the incident light spot is always on the cuticle and to avoid the dark pigments (usually at the edge of the cuticle), ocelli, and possible cracks introduced during dissection. The ballistic transmission of the cuticle was then calculated as the power ratio between the ballistic transmissions through the cuticle (PT SMF ) and the surrounding areas without the cuticle ( PT SMF 0 ), i.e., a reference transmission through areas containing only the UV curable resin, using Equation 1: For measuring the total transmission, light from a single-mode fiber was magnified and focused on the cuticle with a ~50 µm spot size. We again assume that collimated light passes through the sample since the Rayleigh range for a 50 µm (1/e 2 ) spot size is 5-10 mm for wavelengths between 532 nm and 1624 nm. An IS power meter (S146C, Thorlabs) is placed immediately after the sample to measure the total transmission. The incident power on the sample is ~10 mW. The same cameras were used to visualize the light spot and the cuticle when the IS is removed. The total transmission of the cuticle was then calculated as the optical power ratio between the transmissions through the cuticle ( PT IS ) and the reference transmission through areas containing only the UV curable resin ( PT IS 0 ), both measured by the IS (Equation 2). (2) Data in Figure 1D and G are acquired by manually translating the sample orthogonal to the light path. For Figure 1E and H, the samples were translated with a motorized stage to acquire a spatially resolved transmission map. We collected data from several locations for each wavelength (ballistic transmission, n = 56 measurements across 5 cuticle samples; total transmission, n = 20 measurements across 4 samples). We then calculated the mean and the standard error across all measurements for ballistic or total transmission for the plots shown in Figure 1D and G, respectively. 2P/3P imaging preparations Through-cuticle imaging preparation with head compression All animals used for imaging experiments were male flies with indicated genotypes kept at 25°C incubators and maintained on conventional cornmeal-agar-molasses medium. Flies used for chronic functional experiments were 2-7 days old, and flies used for short-term functional experiments were 1-4 days old. To perform through-cuticle brain imaging, flies were first head fixed in a 40 mm weigh dish (VWR#76299-236) with a hole made with forceps. A drop of UV curable resin (Liquid plastic welder, Bondic) was applied to the head and thorax, which was then cured with blue light (~470 nm) and fused to a cover glass. The fly antennae are ensured to be fully exposed after curing. Fly proboscises were immobilized with blue light curable resin to minimize head motion caused by muscle contractions. Video 1 explains the imaging preparation. Through-cuticle imaging preparation with air sac removal The dorsal head air sacs were repositioned to the posterior most portion of the head. This was done by deeply anesthetizing the flies on ice for ~5 min. The flies were placed into a modified pipette, allowing their head to stick out of the tip. Dental wax was wrapped around the head stabilizing it to the pipette. A sharpened glass capillary held in a micromanipulator was used to make a small incision just medial to the eye on the dorsal posterior area of the fly's head. A sharpened tungsten needle curved into a micro hook, held in a micro manipulator was inserted into the incision, and run just under the cuticle to hook the dorsal air sac. The hook was pulled to the rear of the head, bringing the air sacs with it. The hook was then manipulated to release the air sac. The procedure was repeated on the other side. The incisions were closed using a very small amount of UV curable resin over the incision site. The flies were then allowed to recover for 24 hr at 25 degrees on conventional food. Flies used in Figure 4 Cuticle-removed imaging preparation Flies were anesthetized on ice for ~1 min then placed into a holder made from a 0.02 mm thick carbon steel sheet with a small hole cut to allow the dorsal thorax and dorsal part of the head to protrude through the sheet. The flies were fixed to the imaging chamber using a UV curable resin (Bondic) around the perimeter of the hole in the sheet. Approximately 500 μl of adult artificial hemolymph was placed on the imaging chamber and the head cuticle was removed using a 20-gauge needle to cut along the medial perimeter of the eyes, the dorsal posterior extent of the head between the eyes, and just posterior to the Video 5. Short-term 2P imaging of mushroom body γlobe neural activity captured through the intact fly head cuticle during walking and odor exposure. Functional imaging is performed in walking flies during a food odor stimulation (apple cider vinegar) with 2P excitation at 920 nm (semi compressed preparation, scale bar = 50 µm). Video is 5× speed up. Figure 7. Long-term 2P imaging of odor-evoked responses of the mushroom body γ-lobes. (A) Stimulus timeline for long-term odor imaging. GCaMP6s fluorescence signal is captured from Kenyon cells axons innervating mushroom body γ-lobes using the semi compressed preparation. (B-E) Quantification of the normalized signal (ΔF/F 0 ) over time in each γ-lobe compartment. Light orange bar indicates when the odor stimulus is present. Each colored line indicates the average response of a fly over multiple trials in a given hour. The average response of three flies is shown. Each compartment's response is labeled with a different color. (F) Quantification of the peak amplitude across different time points and lobes (dF/F 0 ) (Two-way repeated measures ANOVA. Data are presented as mean ± SEM, ns, not significant; n = 3 flies, 3 trials per time point). Average laser power at 920 nm is <10 mW. Images were captured at 256 × 128 pixels/frame and 17 Hz frame rate. 2P data were averaged to 5 Hz effective sampling rate for plotting. Figure 7 continued on next page antenna along the front of the head. Any air sacs, fat bodies, or trachea on top of the exposed brain were removed with fine forceps. Multiphoton excitation source Whole brain 2P/3P imaging The 3P excitation source is a wavelength-tunable optical parametric amplifier (NOPA, Spectra-Physics) pumped by a femtosecond laser (Spirit, Spectra-Physics) with a MOPA (Master Oscillator Power Amplifier) architecture. The center wavelength is set at 1320 nm. An SF11 prism pair (PS853, Thorlabs) is used for dispersion compensation in the system. The laser repetition rate is maintained at 333 kHz. The 2P excitation source is a Ti: Sapphire laser centered at 920 nm (Chameleon, Coherent). The laser repetition rate is at 80 MHz. 3P imaging The excitation source is a wavelength-tunable optical parametric amplifier (OPA, Opera-F, Coherent) pumped by a femtosecond laser (Monaco, Coherent) with a MOPA architecture. The center wavelength is set at 1320 nm. An SF10 prism pair (10SF10, Newport) is used for dispersion compensation in the system. Simultaneous 2P/3P imaging and 2P imaging The 3P excitation source is a wavelength-tunable optical parametric amplifier (NOPA, Spectra-Physics) pumped by a femtosecond laser (Spirit, Spectra-Physics) with a MOPA architecture. The center wavelength is set at 1320 nm. An SF11 prism pair (PS853, Thorlabs) is used for dispersion compensation in the system. The laser repetition rate is maintained at 400 kHz. The 2P excitation source is a Ti: Sapphire laser centered at 920 nm (Tsunami, Spectra-Physics). The laser repetition rate is at 80 MHz. Multiphoton microscopes Whole brain 2P/3P imaging It is taken with a commercial multiphoton microscope with both 2P and 3P light path (Bergamo II, Thorlabs). A high numerical aperture (NA) water immersion microscope objective (Olympus XLPLN25XWMP2, 25 X, NA 1.05) is used. For GFP and THG imaging, fluorescence and THG signals are separated and directed to the detector by a 488 nm dichronic mirror (Di02-R488, Semrock) and 562 nm dichronic mirror (FF562-Di03). Then the GFP and THG signals are further filtered by a 525/50 nm band-pass filter (FF03-525/50, Semrock) and 447/60 nm (FF02-447/60, Semrock) band-pass filter, respectively. The signals are finally detected by GaAsP photomultiplier tubes (PMTs) (PMT2101, Thorlabs). Video 6. Chronic 2P imaging of mushroom body γ-lobe neural activity captured through the intact fly head cuticle during odor exposure. Chronic functional imaging is performed during a food odor stimulation (apple cider vinegar) with 2P excitation at 920 nm (semicompressed preparation, scale bar = 50 µm). Video is 10× speed up. https://elifesciences.org/articles/69094/figures#video6 The online version of this article includes the following source data for figure 7: Source data 1. Source data for plots Figure 7B-E (t=0hr). Source data 4. Source data for plots Source data 5. Source data for plot Figure 7F. 3P imaging A scan lens with 36 mm focal length (LSM03-BB, Thorlabs) and a tube lens with 200 mm focal length are used to conjugate the galvo mirrors to the back aperture of the objective. The same high NA water immersion microscope objective (Olympus XLPLN25XWMP2, 25 X, NA 1.05) is used. Two detection channels are used to collect the fluorescence signal and the THG signal by PMT with GaAsP photocathode (H7422-40, Hamamatsu). For 3-photon imaging of GFP and GCaMP6s at 1320 nm, fluorescence signal and THG signal were filtered by a 520/60 nm band-pass filter (FF01-520/60-25, Semrock) and a 435/40 nm band-pass filter (FF02-435/40-25, Semrock), respectively. For signal sampling, the PMT current is converted to voltage and low pass filter (200 kHz) by a transimpedance amplifier (C7319, Hamamatsu). Analog-to-digital conversion is performed by a data acquisition card (NI PCI-6110, National Instruments). ScanImage 5.4-Vidrio Technologies (Pologruto et al., 2003) running on MATLAB (MathWorks) is used to acquire images and control a movable objective microscope (MOM, Sutter Instrument Company). Temporal multiplexing Simultaneous imaging with 2P and 3P excitation is achieved by temporal multiplexing of the 920 nm Ti: Sapphire laser and 1320 nm Spirit-NOPA laser. The setup is similar to the one described in a previous study (Ouzounov et al., 2017). Briefly, two lasers were combined with a 980 nm long pass dichronic mirror (BLP01-980R-25, Semrock) and passed through the same microscope. They were spatially overlapped at the same focal position after the objective with a remote focusing module in the 2P light path. The 920 nm laser was intensity modulated with an electro-optic modulator (EOM), which was controlled by a transistor-transistor logic (TTL) waveform generated from a signal generator (33,210 A, Keysight) that is triggered by the Spirit-NOPA laser. The EOM has high transmission for 1 μs between two adjacent Spirit-NOPA laser pulses that are 2.5 μs apart. By recording the waveform from the signal generator and the PMT signal simultaneously, the 2P and 3P excited fluorescence signals can be temporally demultiplexed with postprocessing using a custom MATLAB script. Pulse energy comparisons to obtain 0.1 photon/pulse The comparison follows the framework described in our previous work (Wang et al., 2020). In brief, a calibration factor that relates the pixel intensity of the image and the number of detected photons is first acquired by using a photon counter (SR400, Stanford instrument). Then, the brightest 0.25% pixel values of a frame from the whole brain stack are taken as the fluorescence signal and are converted to number of detected photons. Finally, the pulse energy required to obtain 0.1 photon/pulse can be calculated with measured power on the fly surface. The pulse width of the laser pulse to obtain a signal of 0.1 photon/pulse is normalized to 60 fs to account for the difference in pulse width between 3P (~60 fs) and 2P (~100 fs). Imaging depth, resolution, and motion quantifications In all comparisons, signal strength and effective NA for 2P and 3P imaging were similar. Resolution quantifications During the imaging session, the fly was placed on ice to reduce motion. For the mushroom body, 2P and 3P images were taken with a field-of-view (FOV) of 270 × 270 µm with a pixel count of 512 × 512. The zoomed-in images were taken with an FOV of 75 × 75 µm with a pixel count of 512 × 512. For the central complex, 2P and 3P images were taken with an FOV of 250 × 126 µm with a pixel count of 512 × 256. The zoomed-in images were taken with an FOV of 50 × 50 µm with a pixel count of 256 × 256. A step size of 1 µm was taken for axial resolution measurement. Motion quantifications Motion artifact during cuticle-intact in vivo 3P imaging was quantified by imaging ellipsoid body ring neurons expressing GFP. A video of 150 s is taken at a frame rate of 6.5 Hz with a field of view of 74 × 37 µm and pixel count of 256 × 128. The motion was calculated with the 'landmark' output that targets a single neuron from TurboReg plugin in ImageJ during image registration. After image registration, the intensity change of one neuron (I), as indicated in the ROI (Figure 3-figure supplement 4), in time was normalized according to the formula (I− I 0 )/I 0 . I 0 is taken as the mean of all intensity value (I) of the trace. Whole brain signal attenuation quantification 2P and 3P image stacks were taken with an FOV 200 × 100 µm with a pixel count of 512 × 256. Axial step sizes of 5 and 10 µm were used for cuticle-intact and cuticle-removed fly, respectively. The imaging power was increased with imaging depth to keep the signal level approximately constant. The maximum power on the fly brain was 15 mW for both 2P and 3P. The signal (S) of each frame was calculated as the average of the brightest 0.25% pixel values and then normalized by the imaging power (P) on the fly surface. The normalization was S/P 2 and S/P 3 for the 2P and 3P stacks, respectively. The EAL was then derived by least squares linear regression of the normalized fluorescence signal at different imaging depths. Electrical stimulation during 2P/3P functional imaging Cuticle-removed preparation A tungsten wire was inserted into the adult artificial hemolymph on top of the exposed fly brain and secured in place with UV curable resin. A copper wire was placed in contact with ventral portion of the body of the fly and secured in place with UV curable resin. The wires were connected to a variable power supply with the tungsten positive side interrupted with a normally open relay module controlled with a microcontroller (Arduino Uno R3). Cuticle-intact preparation Flies were prepared as described before for head compression imaging (Video 1). A 26-gauge copper wire was secured to the glass cover slip next to the head and another copper wire was secured to the ventral portion of the body. Low melting agarose (GeneMate #E-3126-25) with 0.5 M NaCl (Sigma #S7653-250G) was used to make an electrical connection between the wires and the fly, making sure the electrical path runs through the body. Electrical stimulation and imaging Both the cuticle-intact and cuticle-removed flies were imaged with 2P and 3P excitation, taking images at different depths throughout the brain. For electrical stimulation the flies were stimulated for 1 s at 5 V and imaged at various depths to see a consistent GCaMP signal increase. 3P activity was taken with an FOV of 200 × 100 µm with pixel count of 256 × 128. The frame rate was 6.5 Hz. Every five frames were averaged to achieve an effective frame rate of 1.3 Hz. 2P activity was taken with an FOV of 200 × 100 µm with pixel count of 256 × 128. The frame rate is 113 Hz. Every 100 frames are averaged to achieve an effective frame rate of 1.1 Hz. ROIs were generated by manual segmentation. The baselines of the activity traces (F0) for each ROIs were determined using a rolling average of 4 s over the trace after excluding data points during electric stimulation. The activity traces (F) were normalized according to the formula (F − F 0 )/F 0 . Three stimulations were done for each depth. Olfactory imaging conditions and preparation of flies used in imaging experiments Simultaneous 2P/3P functional imaging Flies were food deprived for 16-24 hr in vials with a wet Kim wipe. Each odor stimulation trial consisted of 50 s of clean mineral oil, 3 s of undiluted apple cider vinegar stimulus, and another 50 s of mineral oil. Between trials, scanning was stopped for 20 s to minimize the risk of imaging-induced tissue stress. Five trials were performed sequentially. Images were captured at 160 × 165 pixels/frame and 13.2 Hz frame rate. 2P and 3P data were averaged to 6.8 Hz effective sampling rate for plotting. 2P functional imaging in behaving flies Flies were head fixed using a custom 3D-printed apparatus which also holds the tube for odor delivery. In this setup, flies are allowed to walk on a spherical treadmill and turn toward the odor stimuli. The odor stimulus is located on the right side of the fly. Each odor stimulation trial consisted of 60 s of clean mineral oil, 3 s of undiluted apple cider vinegar stimulus, and another 60 s of mineral oil. Every change of odor triggers the acquisition software to save in a new file. The images were captured at 256 × 128 pixel resolution and 17 Hz frame rate. Three trials were performed sequentially. 2P chronic functional imaging Flies used in long-term functional imaging experiments were kept on regular fly food before the first trial to assure that they were satiated. Each odor stimulation trial consisted of 60 s of clean mineral oil, 3 s of undiluted apple cider vinegar stimulus, and another 60 s of mineral oil. Every change of odor triggers the acquisition software to save in a new file. The images were captured at 256 × 128 pixel resolution and 17 Hz frame rate. Three trials were performed sequentially, and the three-trial block was repeated every 4 hr. Between trial blocks, scanning was stopped, and air passing through the stimulation tube was redirected to the exhaust valve to prevent desiccation. To further prevent desiccation, flies were placed on a water-absorbing polymer bead. Olfactory stimulation Odor delivery during 2P/3P simultaneous functional imaging Food odor, apple cider vinegar, was delivered using a custom-built olfactometer as described previously (Raccuglia et al., 2016). Clean room air was pumped (Active Aqua Air Pump, 4 Outlets, 6 W, 15 L/min) into the olfactometer, and the flow rate was regulated by a mass flow controller (Aalborg GFC17). Two Arduino controlled 3-way solenoid valves (3-Way Ported Style with Circuit Board Mounts, LFAA0503110H) controlled air flow. One valve delivered the odorized airstream either to an exhaust outlet or to the main air channel, while another valve directed air flow either to the stimulus or control channel. The stimulus channel contained a 50 ml glass vial containing undiluted apple cider vinegar (volume = 10 ml) (Wegmans), while the control channel contained a 50 ml glass vial containing mineral oil (volume = 10 ml). Flies were placed approximately 1 cm from a clear PVC output tube (OD = 1.3 mm, ID = 0.84 mm), which passed an air stream to the antennae (~1L/min). The odor stimulus latency was calculated before the experiments using a photo ionization detector (PID) (Aurora Scientific). We sampled odor delivery using the PID every 20 ms and found average latency to peak odor amplitude was <100 ms across 34 measurements. Flies were stimulated with air (50 s), before and after the odor stimulus (odor + air, 3 s). Same stimulus scheme was repeated three times. Odor delivery during spherical treadmill and chronic imaging An air supported spherical treadmill setup was used to record fly walking behavior during multiphoton imaging. Male flies at 5-6 days post eclosion were anesthetized on ice for about 2 min and mounted to a coverslip with semicompression as described in Video 1. The cover slip was glued to a custom 3D-printed holder with an internal airway to deliver airflow along the underside of the coverslip directly onto the antenna without interfering with the air supported ball. The air duct was positioned 90 degrees to the right of the fly about 1 cm away. Clean room air was pumped (Hygger B07Y8CHXTL) into a mass flow meter set at 1 L/min (Aalborg GFC17). The regulated airflow was directed through an Arduino controlled three-way solenoid pinch valve (Masterflex UX-98302-42) using 1/16" ID tubing. The valve directed the airflow either through 50 ml glass vile containing 10 ml of undiluted apple cider vinegar for the stimulus, or through a 50 ml glass vile containing 10 ml of mineral oil for the control. The latency from stimulus signal from the Arduino to odor molecules arriving at the fly's antenna was measured using a photo-ionization detector (Aurora Scientific) prior to the experiments and found to be <200 ms to peak stimulus. Fly behavior during olfactory stimulation coupled with 2P/3P imaging The spherical treadmill was manufactured by custom milling with 6061 aluminum alloy. The treadmill has a concave surface at the end for placing the ball, which is supported by airflow. We fabricated foam balls (Last-A-Foam FR-7110, General Plastics, Burlington Way, WA USA) that are 10 mm in diameter using a ball-shaped file. We drew random patterns with black ink on the foam balls to provide a highcontrast surface for the ball tracking analysis. Fly behavior was videotaped from the side to capture any movement by a CCD camera (DCC1545M, Thorlabs) equipped with a machine vision camera lens (MVL6 × 12 Z, Thorlabs) and 950 nm long pass filter (FELH0950, Thorlabs). The acquisition frame rate for video recording was set to 8 Hz under IR light illumination at 970 nm (M970L4, Thorlabs). The stimulus signal from the Arduino is captured by NI-6009 (National Instrument) using a custom script written in MATLAB 2020b (Mathworks) to synchronize with the behavior video in data analysis. Male courtship assay 5-6 days of wild-type virgin female and male flies were collected right after eclosion and aged at 25°C for ~5 days. On the day of the courtship assay, control group males were placed on ice for 1-5 min, then placed in the imaging chamber without being head compressed or head fixed. They were allowed to recover for 5 hr at 25°C before getting tested in the courtship assay. Experimental group flies went through the entire head-compression and head-fixing procedure described in Video 1. These flies were removed from the imaging chamber after being head fixed and allowed to recover for 5 hr at 25°C before getting tested in the courtship assay. To quantify male courtship behavior, male and female flies were aspirated into a 1 cm courtship chamber and allowed to interact for 30 min. Courtship assays were recorded using a camera (FLIR Blackfly, BFS-U3-31S4M-C). Immunohistochemistry for brain tissue damage assessment To investigate laser-induced stress in the fly brain, we exposed 4-6 days old male flies (MB > UAS GFP) to 2P laser at 920 nm with 15 mW power or to a 3P laser at 1320 nm with 15 mW power. Flies were prepared using the medium compression preparation described previously. Control flies were prepared the same way and kept in the dark at room temperature for the duration of the experimental procedure. Laser scanning was done in the same depth as the MB γ-lobes. Each scan lasted for 6 min. Flies were rested for 6 min until the next imaging session. Each fly was exposed to four imaging sessions. Once the experiment was completed, fly brains were dissected and stained with the HSP70 antibody. For the positive control group, flies were exposed to 30°C for 10 min in an incubator to induce HSP70 expression. For the negative control group, flies were kept at room temperature. Brains from each experimental and control groups were dissected in phosphate-buffered saline (PBS) and incubated in 4% paraformaldehyde in PBS for 20-30 min at room temperature on an orbital shaker. Tissues were washed three to four times over 1 hr in Phosphate-buffered saline (PBS, calcium-and magnesium-free; Lonza BioWhittaker #17-517Q) containing 0.1% Triton X-100 (PBT, Phosphate-buffered saline + Triton) at room temperature. Samples were blocked in 5% normal goat serum in PBT (NGS-PBT) for 1 hr and then incubated with primary antibodies diluted in NGS-PBT for 24 hr at 4°C. Primary antibodies used were anti-GFP (Torrey Pines, TP40, rabbit polyclonal, 1:1000), anti-BRP (DSHB, nc82, mouse monoclonal, 1:20), and anti-HSP70 (Sigma, SAB5200204, rat monoclonal, 1:200). The next day, samples were washed five to six times over 2 hr in PBT at room temperature and incubated with secondary antibodies (invitrogen) diluted in NGS-PBT for 24 hr at 4°C. On the third day, samples were washed four to six times over 2 hr in PBT at room temperature and mounted with VECTASHIELD Mounting Media (Vector Labs, Burlingame, CA, USA) using glass slides between two bridge glass coverslips. The samples were covered by a glass coverslip on top and sealed using clear nail polish. Images were acquired at 1024 × 1024 pixel resolution at ~1.7 μm intervals using an upright Zeiss LSM 880 laser scanning confocal microscope and Zeiss digital image processing software ZEN. The power, pinhole size and gain values were kept the same for all imaged brains during confocal microscopy. Image processing and data analysis Resolution measurements We measured the lateral or axial brightness distribution of small features within the fly brains using either the GFP fluorescence signal ( Figure 3F) or the THG signal ( Figure 3D and G). Lateral intensity profiles measured along the white lines were fitted by a Gaussian profile for the estimation of the lateral resolution. Axial intensity profiles measured were fitted with a Lorentzian profile to the power of 2 and 3 for 3P and 2P, respectively. The FWHM of the profiles is shown in the figures. Measurement of excitation light attenuation in the fly brain The image stack was taken with 5 μm step size in depth, and the imaging power was increased with imaging depth to keep the signal level approximately constant. The signal (S) of each frame was calculated as the average of the brightest 0.25% pixel values and then normalized by the imaging power (P) on the fly surface. The normalization is S/P 2 and S/P 3 for the 2P and 3P stacks, respectively. The EAL is then derived by least squares linear regression of the normalized fluorescence or THG signal at different imaging depths ( Figure 3B and C). Image processing for structural imaging TIFF stacks containing fluorescence and THG data were processed using Fiji, an open-source platform for biological image analysis (Schindelin et al., 2012). When necessary, stacks were registered using the TurboReg plugin. Multiplex 2P and 3P functional imaging TIFF stacks containing fluorescence data were converted to 32 bits, and pixel values were left unscaled. Lateral movement of the sample in the image series, if any, was corrected by TurboReg plug-in in ImageJ. Images acquired during the multiplexed 2P-3P imaging sessions were first median filtered with a filter radius of 10 pixels to reduce high amplitude noise. To compute ΔF/F 0 traces, γ-lobe ROIs were first manually selected using a custom Python script. F 0 was computed as the average of 10 frames preceding stimulus onset. The F 0 image was then subtracted from each frame, and the resulting image was divided by F 0 . The resulting trace was then low pass filtered by a moving mean filter with a window size of eight frames. Data were analyzed using Python and plotted in Microsoft Excel. Peak ΔF/F 0 was determined by the peak value within 20 frames after the odor delivery. P functional imaging in behaving flies and chronic functional imaging Lateral movement of the sample in the image series, if any, was corrected by TurboReg plug-in in ImageJ. A custom script written in MATLAB 2016b is used for all subsequent processing. Every four frames are averaged to achieve an effective frame rate of 4.25 Hz. ROIs were generated by manual segmentation of the mushroom bodies. The baselines of the activity traces (F 0 ) for each ROIs are determined using a rolling average of 4 s over the trace after excluding data points during odor stimulation. The activity traces (F) were normalized according to the formula (F − F 0 )/F 0 . The trace is finally resampled to 5 Hz with spline interpolation to compile with the timing in the motion tracking trace. Fly walking behavior analysis Fly walking traces were obtained using the FicTrac (Fictive path Tracking) software as published previously (Moore et al., 2014). The ball rotation analysis was performed using the 'sphere_map_fn' function, which allows the use of a previously generated map of the ball to increase tracking accuracy. We post-processed the raw output generated by FicTrac. To calculate forward and rotational speeds, we used the delta rotation vectors for each axis. Then, we down-sampled raw data from 8 Hz to 5 Hz by averaging the values in the 200 ms time windows. The empty data points generated from downsampling were linearly interpolated. Male courtship behavior analysis The courtship videos were scored manually, and the time of copulation was recorded per pair. Statistics Sample sizes used in this study were based on previous literature in the field. Experimenters were not blinded in most conditions as almost all data analysis were automated and done using a standardized computer code. All statistical analysis was performed using Prism 9 Software (GraphPad, version 9.0.2). Comparisons with one variable were first analyzed using one-way ANOVA followed by Tukey's multiple comparisons post hoc test. Comparisons with more than one variable were first analyzed using two-way ANOVA. Comparisons with repeated measures were analyzed using a paired t-test. We used pair-wise log-rank (Mantel-Cox) test to compare the copulation percentage curves in the male courtship assays. P values are indicated as follows: ****p<0.0001; ***p<0.001; **p<0.01; and *p<0.05. Plots labeled with different letters in each panel are significantly different from each other. Additional files Supplementary files • Transparent reporting form Data availability All data generated or analyzed during this study are included in the manuscript and supporting file; Source Data files have been provided.
v3-fos-license
2024-02-17T06:17:03.022Z
2024-01-01T00:00:00.000
267700166
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": null, "oa_url": null, "pdf_hash": "743599f1cd250a6637ca4bbd649b8644e7fa6bc7", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46794", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "sha1": "d7169c073f022be6b424b86a7a0f5befab477f30", "year": 2024 }
pes2o/s2orc
Digital therapeutics in the hospital for suicide crisis – content and design recommendations from young people and hospital staff Objective Hospital emergency departments lack the resources to adequately support young people who present for suicidal crisis. Digital therapeutics could fill this service gap by providing psychological support without creating additional burden on hospital staff. However, existing research on what is needed for successful integration of digital therapeutics in hospital settings is scant. Thus, this study sought to identify key considerations for implementing digital therapeutics to manage acute suicidal distress in hospitals. Method Participants were 17 young people who recently presented at the hospital for suicide-related crisis, and 12 hospital staff who regularly interacted with young people experiencing mental ill-health in their day-to-day work. Interviews were conducted via videoconference. Framework analysis and reflexive thematic analysis were used to interpret the data obtained. Results Qualitative insights were centred around three major themes: hospital-specific content, therapeutic content, and usability. Digital therapeutics were seen as a useful means for facilitating hospital-based assessment and treatment planning, and for conducting post-discharge check-ins. Therapeutic content should be focused on helping young people self-manage suicide-related distress while they wait for in-person services. Features to promote usability, such as the availability of customisable features and the use of inclusive design or language, should be considered in the design of digital therapeutics. Conclusions Digital therapeutics in hospital settings need to benefit both patients and staff. Given the unique context of the hospital setting and acute nature of suicidal distress, creating specialty digital therapeutics may be more viable than integrating existing ones. Introduction Over the last two decades, the rates of self-harm and suicide in Australian young people have steadily increased. 1While there is strong evidence for the effectiveness of behavioural therapeutic modalities (e.g.cognitive behavioural therapy and dialectical behavioural therapy) in reducing suicidal thoughts and behaviours, 2 there are significant structural barriers to accessing treatment. 3Even when young people do access face-to-face treatment, 39% will not disclose their suicidal thoughts to their mental health professional. 4s such, there is increasing need for novel models of care that increase young people's access to suicide prevention support in ways that they feel comfortable and willing to engage with. Technology solutions are now recognised as an important component of healthcare for chronic, modifiable conditions, such as mental health. 5Research shows that many young people prefer to engage with mental health support and treatment digitally rather than in-person. 68][9] While there are currently few trials of youth-direct digital therapeutics that target suicidal thoughts and behaviours, there is promising evidence that these tools can benefit young people.For example, a recent Australian study showed in a randomised controlled trial that a self-guided dialectical behavioural therapy-based app significantly reduced suicidal ideation. 10While many digital mental health tools can already be publicly accessed via app stores, 11 including some developed by researchers, progress to integrate these tools into healthcare settings, as part of routine clinical care, remains slow. Hospitals are an implementation setting of particular interest for integrating digital suicide prevention into existing models of care.Hospitals are often utilised in the height of suicidal distress (suicidal thoughts, plans, or attempts) by young people to access lifesaving medical intervention and psychological support.Young people who present to hospital in suicidal distress also have a considerably higher risk of suicide compared to youth who self-harm and experience ideation in the community. 12Despite the important role hospitals have in suicide prevention, young people often report not receiving adequate care for their suicidal distress when attending an emergency department. 13,14There is evidence that a negative hospital experience can decrease the likelihood that someone would return for treatment during a future suicide crisis, 15 potentially increasing the risk of suicide.However, many hospital staff report they do not have adequate time, training, or access to resources to provide the therapeutic care that individuals in suicidal distress need. 16,17Offering digital therapeutics in hospitals has the potential to costeffectively improve or augment workforce capacity to provide evidence-based psychological care, without incurring additional substantial investment from an already strained hospital service. 18he strategic and operational complexities of hospitals mean that 'off-the-shelf' digital therapeutics may not appropriately meet the unique needs of patients in suicidal distress, nor the implementation needs of staff.Given that innovations with low acceptability typically have low engagement, and in turn, reduced clinical benefits, 19 identifying design considerations relevant to hospitals is important to advancing implementation of digital therapeutics in these tertiary healthcare settings. The engagement challenges posed by digital therapeutics 19,20 have led to recent growth in 'design thinking' research which aims to identify enablers and barriers of use.In the field of suicide prevention, much of this research to date has: explored digital therapeutic use in community and secondary mental health care settings, focused on adult populations, and has been conducted in the context of understanding design issues relevant to already existing digital interventions (e.g.2][23][24][25] Many of these methodological considerations extend to the one study that has sought to develop and test two mobile apps for emergency department patients with suicide risk. 24hile these studies provide useful insights into modifying and optimising existing products, the findings may not generalise in ways that are useful to the development of new interventions that are optimally designed from inception to be integrated into hospital-provided care.The limited 'youth-specific focus' of many prior studies also means that the design considerations for enhancing the acceptability and relevancy of digital therapeutics for young people who attend hospital for suicidal distress are unclear. To address these gaps, the aim of the current study is to understand and identify 'general' design considerations that could support researchers and industry to develop digital therapeutics that are fit-for-purpose to implement in hospitals to augment care provision, and which could effectively support young people presenting in suicidal distress. Method This qualitative study uses data collected as part of a larger study which explored enablers and challenges to the implementation of digital therapeutics in Australian hospitals from the perspectives of young people who have previously presented to hospital for suicidal distress and the health professionals who care for them. 26The implementation findings have been reported elsewhere. 26 Study setting Participants were recruited from across Australia via Black Dog Institute's website, Facebook page, and via monthly e-newsletters between May and November 2022.This study was approved by University of New South Wales Human Research Ethics Committee (HC210973), and informed, written consent was obtained by all participants prior to the interviews. Participants Young people were eligible to participate in this study if they met the following criteria (via self-report): 1. Aged between 16 and 24 years (inclusive), 2. Were an Australian resident at the time of the study, 3. Had access to a computer or smartphone with internet connection, 4. Had presented to an Australian hospital for a suicidal crisis (ideation or self-harm with suicidal intent) in the past 12 months, and 5. Were able to speak and understand English fluently. Participants were ineligible if they were flagged as having an active suicide risk as per the 3-item Patient Safety Screener (PSS-3). 27If an individual was deemed ineligible due to active suicide risk, as identified by endorsement of thoughts of suicide in previous 2 weeks and/or a suicide attempt in the past month, they were contacted within 48 h with an invitation for them to speak with a psychologist on the research team to ensure their safety.Young people aged 16 and 17 years were required to complete a brief, five-question Gillick Competency Assessment 28 to determine if they understood what they were consenting to.All five questions needed to be answered correctly before an interview could be scheduled. Hospital staff were eligible for participation if they were: 1. Currently employed (full-or part-time) in an Australian hospital (public or private), 2. Working in an administrative or clinical role during which they interact with young people experiencing mental health issues, 3. Had access to a computer or smartphone with internet connection, and 4. Aged 18 years or older.There were no exclusion criteria for hospital staff. Data collection Interview guides were developed in alignment with three of the factors of normalisation process theory (NPT) 29 : coherencehow people make sense of the intervention in practice; cognitive participationhow people engagement with new practices; and collective actionhow new interventions become part of routine practice.NPT focuses on understanding how stakeholders (young people and hospital staff) make sense of new interventions as part of an implementation framework.No questions in the interview guide (see S1_Interview Guide) specifically asked about design or content considerations, instead the participants voluntarily provided this information in response to other questions about digital therapeutic appropriateness and integration.All interviews were conducted online via a secure videoconferencing platform.Interviews with young people were conducted between May and October 2022; hospital staff were interviewed in November and December 2022.Interviews with each group were continued until saturation had been achieved.Participants were given a $50 (AUD) gift card as reimbursement.The initial four interviews with young people were conducted between two members of the research team (DR, and LM or MT), with the lead author (DR) conducting the remaining interviews (n = 25).Interviews were transcribed and any identifying information was anonymised. Data analysis Data was analysed using Framework Analysis 30,31 and Thematic Analysis 32 approaches.Framework Analysis was utilised in alignment with the methodology of the larger study; 26 however, it became clear that we had a wealth of data relating to the content and design of digital therapeutics which were outside the scope of the parent study aims.Data relating to content and design was separated out and analysed using Thematic Analysis.Data was managed via Nvivo Software (Version 20). Initially analysis followed the first four steps of Framework Analysis and involved DR and RB familiarising themselves with the transcripts, then the first five transcripts from each participant group were coded independently, after which an analytic framework was developed by RB and DR which mapped the codes to NPT factors.This was then repeated with the remaining transcripts.Once codes were identified by thorough interrogation by DR, RB, and MT to be in relation specifically to the digital therapeutic content and design the codes were separated from the larger data set.Thematic analysis was conducted on the raw codes (sans the analytic framework) and involved DR, RB, and MT examining the codes for similarities and developing themes which consolidated overarching ideas.The final themes and codes were then synthesised for reporting. The majority of young people had previously used a digital therapeutic in some capacity, not necessarily in a healthcare setting (n = 13, 76%), while less than half of hospital staff had used, or recommended, a digital therapeutic to a young person in suicide crisis or with mental health concerns as part of their provision of care (n = 7, 58%). Thematic analysis Thematic analysis resulted in three themes which outlined the specific content needs for digital therapeutics when being utilised in a hospital setting: hospital-specific content, therapeutic content, and usability (see Table 1). Theme 1: Hospital-specific content.Hospital staff and young people identified content which was specific to improving the experience of providing, or receiving, care in the hospital.This included providing the young person with information about what they can expect from the hospital visit, the ability for staff to access the young people's responses to facilitate assessment and treatment planning, and finally, a mechanism to facilitate checking-in with the young person after they are discharged from the hospital. Information about what to expect Staff saw a digital therapeutic as an opportunity to empower young people to navigate their care and their own needs, through an opportunity to inform them about the nature of the hospital care or community care journey.The community care journey refers to the steps planned for the young person's treatment as they transition from the hospital back to the community. Platform to consolidate information to staff Hospital staff felt that having functionality within a digital therapeutic which linked patient self-report responses about their current suicide crisis with their clinical hospital records would help the patient begin to reflect and provide them an opportunity to understand the patient more prior to the assessment.Young people agreed with this and saw an option for staff to view their responses as a possible avenue to facilitate conversation and limit the volume which needed to be shared with staff verbally.Young people felt sharing information with staff would reduce feelings of loneliness.Hospital staff felt young people might be more comfortable disclosing sensitive information via a digital therapeutic rather than face-to-face, and as such this was a useful tool to facilitate a thorough risk assessment. 'I feel like it would be good.I feel like if they can link it in with hospital mental health teams as well, they can see who's using it, what they need help with, how they're feeling, everything like that.Then I feel like it definitely would be a lot more beneficial, and it would get rid of that lonely feeling.Yeah, I feel like it would be pretty good.' Check-in function post-discharge Young people and hospital staff identified that a digital therapeutic may provide a preferred alternative to conduct the post-discharge check-in via the digital therapeutic rather than the typical phone call.Check-in via a digital therapeutic was perceived as less intrusive than a phone call. 'Instead of phone calls that last five minutes, it'd be good to just check in on an app and be like, "Yes.I'm doing great.I'm doing awful."And then they can use that data maybe to help with your next follow-up, instead of a five-minute, "Are you alive?" phone call that I got.' -Young Person, 17 years 'Or even maybe being able to send a text from message media and saying, "Hey, just following up.Did you want to have a chat or are things all good?"That way, the person isn't put on the spot, and as a clinician, we don't feel like we're suggestive selling and saying, "Hey, how's your suicidality today?"' -Hospital Staff, Nurse However, some hospital staff felt that while a check-in function post-discharge may be functionally better for young people, it did present other issues around who and how to respond to a suicide risk discloser if that were to happen during check-in. 'My concerns with it would be if there was anything disclosed to a concern.… Whereas when you do have that conversation face to face … you know they're in a safe space.So yeah, I think follow ups great, a hundred percent, just has to be monitored if that's a possibility.'-Hospital Staff, Mental Health Clinician Theme 2: Therapeutic content.Young people and hospital staff suggested a variety of therapeutic content which could make a digital therapeutic more beneficial for young people in a suicide crisis.Content options included coping strategies, activities for distraction and self-reflection, guided safety planning, and the mechanism to start a chat with someone via the digital therapeutic. Coping strategies Hospital staff and young people felt that a digital therapeutic would provide a good opportunity to educate young people about possible coping strategies to either reduce their current level of distress or to help them avoid distress in the future.This includes cognitive behavioural therapy, dialectical behaviour therapy, mindfulness, breathing techniques, distress tolerance, or 'coping mechanisms' more broadly.Young people and hospital staff felt coping strategies would be helpful to manage distress during long waiting times in the emergency department.Hospital staff elaborated that young people could continue to use coping strategies to improve their distress tolerance beyond the hospital visit. 'Like for one passing the time, but also going through a digital therapeutic.Like if you're going to be spending all that time in hospital and you can only see the emergency therapist for half an hour, you might as well be working on coping mechanisms, even if only one of them absorbs into it like into your brain.' -Young Person, 22 years 'I think it's so important for them to be able to learn those coping skills and distress tolerance and things like that because then, [it] gives them a way just to manage it instead of having to go and go to an ED (emergency department) where things can be rather uncomfortable.Just teaching them those skills.'-Hospital Staff, Nurse Distraction Young people believed a digital therapeutic may offer them a distraction which would stop them focusing on the thoughts and feelings which are exacerbating their suicide crisis or reduce the negative impacts of the harsh emergency department environment. 'Yeah, like if I was to recommend any app for the ED (emergency department) it would be [one that] takes like a few hours to work through because of how much content it has … lots of mindfulness things … do you need a distraction?Pop some bubbles.Like that's helpful.' -Young Person, 22 years Hospital staff saw this as an opportunity to alleviate the distress which would improve the quality of their assessments, improving patient flow through the hospital, and hopefully led to better care plans to improve mental health long term. and talk about what's happening.' -Hospital Staff, Mental Health Clinician 'I do think that there is a role for it because at the moment we leave them alone with their thoughts for hours and hours and that is also not helpful and that doesn't help them get safe and that doesn't help them find a solution to their problems either.In fact, a lot of times it causes further detriment.'-Hospital Staff, Doctor Self-reflection Hospital staff and young people saw benefit in content that focused on the young person reflecting on their mental state.Giving young people space to debrief, consider, and reflect on their situation at their own pace was seen as a benefit to assessment and a way to facilitate crisis management. 'I think, good to think more about having a sort of feedback for kind of writing your own feelings or where you're at.Whether it be a sliding scale of some sort which you can add notes to, some sort of system so you can do it in a simple way or you can write things down to express to the nurse when they come to do your observations or ask you how you're going while you're waiting for, whether it be a bed or another review or whatever in ED (emergency department).'-Young Person, 19 years 'So I think just having a space where the [young person] can think and maybe look, the young person can just look and get out what they're feeling would be helpful as well, away from that kind of really intense space.' -Hospital Staff, Mental Health Clinician Safety planning Hospital staff saw an opportunity to digitise safety planning via an appto improve the acceptability of the plan through joint agreement between staff, young person, and carers and to improved accessibility due to accessibility of devices. 'If the [digital therapeutic] can allow a young person and a parent to safety plan together, say what can we do before we hit red?That type of thing.Whether that's safety planning on separate phones that come together, I don't know.But allowing that connection to be there in a crisis but also not be there to allow, seek support from each other.' -Hospital Staff, Mental Health Clinician Chat function Young people mentioned the desire for a chat function, either with a real person or a chat bot would be beneficial as the experience of a suicide crisis is often isolating, and they felt they would benefit from interpersonal connection (even if artificial). 'I think the whole base around people being suicidal and depressed and giving up is because they think no one cares about them.So, I think having even someone you don't know that's like a stranger, taking an interest and talking to you and wanting to find out what the problem is.It's sort of like more comforting I think' -Young Person, 16 years Theme 3: Usability.Digital therapeutic content also needs to focus on how easy and enjoyable it is for the young person to engage with it.Hospital staff and young people both noted that a digital therapeutic which is personalised is ideal; hospital staff also highlighted the importance of inclusivity in design and language, and young people wanted the content to be easy to navigate, with the option to continue to access the digital therapeutic once discharged from the hospital. Bespoke and customisable Hospital staff and young people commented that one of the limitations of existing digital therapeutics is that they lack the ability to be tailored to the users' specific needs.Furthermore, participants identified an option to personalise the digital therapeutic's look and feel would increase their engagement. 'I like being able to personalise stuff.So, it would be like, "Hi [name], welcome back," and stuff like that.I don't know if that's just me, but it kind of makes it feel like my own space if I can choose colours and all that stuff, which is super simple, but having the basic stuff, like meditation or strategies or online therapy links and stuff.But just having the ability to personalise it and stuff would make it, I don't know, just feel more for me.' -Young Person, 17 years 'But if you had some way of going, "Oh, do you feel like this?"And then it takes you down this avenue … so you are homing in on a specific part of what's wrong.So, is it anxiety related or is it situational related?Is it something that just needs something to distract them or does it need fixing …' -Hospital Staff, Nurse Inclusive design and language Hospital staff identified that a digital therapeutic should be inclusive and accessible to all people regardless of sociodemographic background, access to a device, abilities, neurodivergence, different ethnicities, and languages, among others.Staff also highlighted the importance of inclusive language and such as the user selecting their own gender identification. 'Because you've got neurodiversities to consider.You've got disabilities to consider.They may not be able to read and write.They might have the motor skills to use an app or is it functional for carers or family members to fill in for them?They're disabled, they may have a carer … the other bit I would ask a favour too, just because I'm an advocate in this space is LGBT stuff.Like include flags, include pronouns, preferred names, all of this stuff, non-binary sexes, all that stuff' -Hospital Staff, Peer Support Worker Easy to navigate Young people indicated that any digital therapeutic being utilised for suicide crisis should be straightforward to use and have a simple navigation tutorial built into the beginning.Young people said it was difficult for them to concentrate on external things during a suicide crisis so having a digital therapeutic which is easy to navigate is important. 'I feel like most apps already have a "this is how you use the app" and it kind of comes up as a pop up when you just download it and then it disappears.'-Young person, 18 years 'I think it would have to be well designed, and well laid out, and run quite smoothly.Because a lot of mental health apps are quite clunky, and you can tell that they've been made with not a very big budget.Whereas Headspace, which you pay for, is beautifully designed, and everything is clear, because you have to pay to get it.But I think that's the thing, people will stop using it straight away if it's not a nicely designed app that's really simple to use and nice to look at as well.' -Young Person, 23 years Engaging content Young people reflected that digital therapeutics they have utilised in the past were too generic and did not always provide unique and helpful information.Young people want digital therapeutic content, which is engaging and interesting, so they could get the most benefit from it.Some suggestions included novel mental health education and strategies, providing lots of different activities to engage with, and positive stories of other people overcoming their crisis. 'I know people who they actually really help with and the meditation part of it, that helps them sleep and those little five-minute brain puzzle things, they help them.'-Young Person, 17 years 'Probably the option to hear real life stories of how people have turned their life around or what's helped other people get through a suicide attempt or real-life testimonials like videos and stuff.' -Young Person, 24 years Continued access after hospital discharge Some young people felt that being able to continue to use the app after leaving the hospital would be beneficial.This would help them to continue to benefit from the hospital visit and provide them tools to continue to manage their suicide crisis when in the community.Young people also felt that they would likely benefit more from a digital therapeutic if being used outside of the hospital as well, since they are likely out of an immediate crisis and more receptive to information. Discussion This study qualitatively explored young people and hospital staff's content and design recommendations for digital interventions for young people in suicidal distress, as part of broader implementation considerations in hospital settings.Participants provided recommendations for content which would benefit either the young person or the hospital staff in a hospital setting, therapeutic content which is appropriate during suicidal distress, and design considerations which enhance the usability of a digital therapeutic for young people in suicidal distress. A digital therapeutic is more likely to become embedded into routine practices in the hospital and as part of community-based aftercare if it benefits both young people and hospital staff. 33Both young people and hospital staff identified the benefit of being able to consolidate the information the young person shares in the digital therapeutic with the hospital records.Hospital staff, particularly those in the emergency department, have limited time to speak with a patient, 17,34 which can impact their ability to build rapport or get a sound understanding of an individual's suicide presentation. 17In this study, participants identified that linking the digital therapeutic with hospital data could facilitate rapport building and allow staff to efficiently understand a young person's situation so they can focus on their specific needs.Young people have reported the discomfort in having to recall their suicidal crisis numerous times as they tell multiple staff members. 13The ability to provide this information digitally, which all staff could read before speaking with the young person, could limit the repetitive story telling required of young people in suicidal crisis, and alleviate some of the distress due to limited privacy in the hospital setting. 13Moreover, as prior research has shown that using both data from patient self-report and clinician assessments together can improve clinical risk assessment accuracy, 35 being able to capture the proximal circumstances surrounding a young persons' suicidal crisis could add value to clinicians in respect to risk assessment and improved care planning and management. There was strong agreement between young people and hospital staff that digital therapeutics should support coping skill development both inside and outside the hospital.Long periods of waiting are commonplace in emergency departments 13,14,36,37 and this was identified as 'wasted time' by health professionals in our study, who saw value in using this time to use digital tools to teach young people new coping strategies.This complements prior research showing that young people want, and benefit from, digital therapeutics that teach adaptive coping strategies, 22,23,38 and that these strategies continue to be important in the post-discharge period. 39Mental health professionals working within the emergency department are often unable to provide therapeutic intervention, instead focusing on assessing risk and determining future directions for care. 40Hospital staff recognised that helping young people develop coping skills was a key gap in their care delivery.Young people identified that therapeutic modalities such as mindfulness and cognitive behavioural therapy could be beneficial during time of distress; however, they also wanted information about how to manage the stressful or overwhelming instances they may have to deal with in the future, to hopefully decrease their risk of experiencing in suicidal distress in the future. The findings suggested that personalisation functionality was important to young people, including in the visual design (e.g.colour schemes) and in being able to 'favourite' strategies or information that particularly appealed to them, as it would foster a sense of ownership and empowerment which in turn may increase engagement with these tools.The preference for similar 'personalisation' strategies has emerged as an 'enabling' design consideration in other comparable studies, 22,23 suggesting this is a valued feature that generalises across healthcare settings and intervention types, and may offer a cost-effective opportunity to meet users' needs.Almost a quarter of young people identified as non-binary (24%) and hospital staff highlighted the importance of the digital therapeutic tool having an option to specify gender-identify, where relevant, to promote inclusion.The tool also needs to be available in a variety of languages to cater to the needs of young people in multicultural Australia.A recent scoping review found that only 58% of mental health app evaluation frames included any considerations for divesity, equality, and inclusion. 41ome important implications emerge from this study.While new digital therapeutics are being rapidly integrated into hospital settings to support individuals in suicide crisis over recent years, [42][43][44][45] none have specifically targeted young people.This study provides novel insights into the design preferences of young people that can inform the design of setting-specific interventions that are optimised for engagement.It, however, remains to be established which design considerations are practical to implement and would actually improve care provision in hospitals.While this study explored design considerations specifically for a hospital setting, future studies should consider exploring how digital therapeutics may be designed to facilitate coordinated care across the different tiers of the healthcare system (i.e.primary, secondary, and tertiary), as using digital tools to support 'care continuity' may improve outcomes for individuals experiencing suicidal thoughts and behaviours. 46Not all design considerations were consistently raise by both young people and hospital staff.Additional research is needed to interrogate the design considerations where perspectives diverged between these groups (e.g. the use of chat bots, the need for inclusive language).Understanding what design features are meaningful for whom, and in what circumstances, is important for designing tools that optimise effectiveness and value-based care.Further to this point, it would be useful to unpack what the 'active treatment ingredients' of digital tools are, specifically those that are robustly linked to modifications in suicidal thoughts and behaviours.Such insights could help inform design thinking as to how these 'ingredients' should be delivered in high-pressure settings, such as hospitals, to support young people. There are several limitations to consider.First, the majority of staff had not utilised a digital therapeutic before in the care of individuals or for their personal use.This may make it difficult for them to conceptualise what content and design features would be the most beneficial for themselves and young people.Second, the larger proportion of female participants may have influenced the type of content and design considerations provided.Third, staff were from a variety of hospitals across Australia, limiting the ability to make assumptions about what would work within specific hospitals or local health areas.Similarly, there were no intentional or obvious young people-hospital staff dyads, making it difficult to draw conclusions about shared experiences.Despite these limitations, there was considerable agreement between the two groups, suggesting that the recommendations made by the two groups may be broadly applicable. Conclusion This study advances current understandings of digital therapeutics specific to the hospital setting.The results suggest that there are unique design and functionality considerations specific to hospital settings if digital therapeutics are to effectively improve the provision of care for young people in self-harm and suicide distress.These design considerations could inform the development of new interventions or augment existing ones to ensure they are appropriately retrofitted to the needs of complex, highvolume care settings. 'If the young person in emergency department, it can be very daunting and if the apps can have some … really generic information [about] what might happen at the emergency department, then it gives them more relief so they know what is going on …' -Hospital staff, Mental Health Clinician Table 1 . -Young Person, 17 years 'I definitely think it would open up a lot of doors that potentially are currently closed, if that makes sense.In a way that Themes and codes within each, along with indication of young person and/or hospital staff endorsement.questionsyou ask as a clinician you feel a little bit, not uncomfortable, but unsure how to ask the questions in a respectful way.So potentially it allows the young people to get their feelings out and then as a clinician you can read that and approach questions in a way that are suitable, appropriate.… Or potentially young people might want to speak specifically about something but don't know how to start the conversation is probably another one as well.I guess it gives you a pathway into various different things that you can't get to as quick if you have that conversation, if that makes sense.' -Hospital Staff, Mental Health Clinician 'I think it would be helpful for after you've gotten at the hospital as well, like if you've already worked for it, and then you have it on your phone, like you, more likely to open it after.'-Young Person, 22 years
v3-fos-license
2018-12-07T15:24:42.277Z
2006-08-09T00:00:00.000
55126624
{ "extfieldsofstudy": [ "Physics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://angeo.copernicus.org/articles/24/2025/2006/angeo-24-2025-2006.pdf", "pdf_hash": "673218ffc25976eef6834e63c464101a5ed081d0", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46795", "s2fieldsofstudy": [ "Environmental Science", "Physics" ], "sha1": "d9163e2de4a7652acf71cb7525ea00ab7ad355db", "year": 2006 }
pes2o/s2orc
The atmospheric implications of radiation belt remediation Abstract. High altitude nuclear explosions (HANEs) and geomagnetic storms can produce large scale injections of relativistic particles into the inner radiation belts. It is recognised that these large increases in >1 MeV trapped electron fluxes can shorten the operational lifetime of low Earth orbiting satellites, threatening a large, valuable population. Therefore, studies are being undertaken to bring about practical human control of the radiation belts, termed "Radiation Belt Remediation" (RBR). Here we consider the upper atmospheric consequences of an RBR system operating over either 1 or 10 days. The RBR-forced neutral chemistry changes, leading to NOx enhancements and Ox depletions, are significant during the timescale of the precipitation but are generally not long-lasting. The magnitudes, time-scales, and altitudes of these changes are no more significant than those observed during large solar proton events. In contrast, RBR-operation will lead to unusually intense HF blackouts for about the first half of the operation time, producing large scale disruptions to radio communication and navigation systems. While the neutral atmosphere changes are not particularly important, HF disruptions could be an important area for policy makers to consider, particularly for the remediation of natural injections. Introduction The behaviour of high energy electrons trapped in the Earth's Van Allen radiation belts has been extensively studied, through both experimental and theoretical techniques.During quiet times, energetic radiation belt electrons are Correspondence to: C. J. Rodger (crodger@physics.otago.ac.nz) distributed into two belts divided by the "electron slot" at L∼2.5, near which there is relatively low energetic electron flux.In the more than four decades since the discovery of the belts (Van Allen et al., 1958;Van Allen, 1997), it has proven difficult to confirm the principal source and loss mechanisms that control radiation belt particles (Walt, 1996).It is well known that large scale injections of relativistic particles into the inner radiation belts are associated with geomagnetic storms which can result in a 10 5 -fold increase in the total trapped electron population of the radiation belts (Li and Temerin, 2001).In some cases the relativistic electron fluxes present in the radiation belts may increase by more than two orders of magnitude (Reeves et al., 2003).In most cases, however, these injections do not penetrate into the inner radiation belt.Only in the biggest storms, for example November 2003, does the slot region fill and the inner belt gain a new population of energetic electrons (e.g., Baker et al., 2004). Even before the discovery of the radiation belts, high altitude nuclear explosions (HANEs) were studied as a source for injecting electrons in the geomagnetic field.This was confirmed by the satellite Explorer IV in 1958, when three nuclear explosions conducted under Operation Argus took place in the South Atlantic, producing belts of trapped electrons from the β-decay of the fission fragments.The trapped particles remained stable for several weeks near L=2, and did not drift in L or broaden appreciably (Hess, 1968).Following on from Operation Argus, both the US and USSR conducted a small number of HANEs, all of which produced artificial belts of trapped energetic electrons in the Earth's radiation belts.One of the most studied was the US "Starfish Prime" HANE, a 1.4 Megaton detonation occurring at 400 km above Johnston Island in the central Pacific Ocean on 9 July 1962.Again an artificial belt of trapped energetic electrons was injected, although over a wide range of L-shells from about L=1.25 out to perhaps L=3 (Hess, 1968).The detonation also caused artificial aurora observed as far Published by Copernicus GmbH on behalf of the European Geosciences Union.away as New Zealand, and an electromagnetic pulse which shut down communications and electrical supply in Hawaii, 1300 km away (Dupont, 2004). The effect of the Starfish Prime HANE on the radiation belts was observed by multiple spacecraft.However, the intense artificial belts injected by the HANE damaged 3 of the 5 satellites operating at the time.Within a small number of days, data transmissions from the Ariel, Transit IVB and TRAAC satellites became intermittent or ceased altogether (Massey, 1964), primarily due to degrading solar cells.Other effects were also noted even in this early case; the transistors flown in the first active communications satellite, Telstar, failed due to radiation exposure, even though the satellite was launched after the Starfish Prime HANE. The artificial belts produced by this Starfish Prime HANE allowed some understanding of the loss of energetic electrons from the radiation belts, as demonstrated by the comparison of calculated decay rates with the observed loss of injected electrons (Fig. 7.3 of Walt (1994)).Collisions with atmospheric constituents are the dominant loss process for energetic electrons (>100 keV) only in the inner-most parts of the radiation belts (L<1.3)(Walt, 1996).For higher L-shells, radiation belt particle lifetimes are typically many orders of magnitude shorter than those predicated due to atmospheric collisions, such that other loss processes are clearly dominant.Above L∼1.5 coulomb collision-driven losses are generally less important than those driven by whistler mode waves, including plasmaspheric hiss, lightning-generated whistlers, and manmade transmissions (Abel andThorne, 1998, 1999;Rodger et al., 2003). It is recognised that HANEs would shorten the operational lifetime of Low Earth Orbiting satellites (Parmentola, 2001;U.S. Congress, 2001;Steer, 2002), principally due to the population of HANE-injected >1 MeV trapped electrons.It has been suggested that even a "small" HANE (∼10-20 kilotons) occurring at altitudes of 125-300 km would raise peak radiation fluxes in the inner radiation belt by 3-4 orders of magnitude, and lead to the loss of 90% of all lowearth-orbit satellites within a month (Dupont, 2004).In 2004 there were approximately 250 satellites operating in low-Earth orbit (LEO) (Satellite Industry Association, 2004).These satellites fulfil a large number of roles, including communications, navigation, meteorology, military and science.In the event of a HANE, or an unusually intense natural injection, this large population of valuable satellites would be threatened.Due to the lifetime of the injected electrons, the manned space programme would need to be placed on hold for a year or more.However, recent theoretical calculations have led to the rather surprising conclusion that wave-particle interactions caused by manmade very low frequency (VLF) transmissions may dominate non-storm time losses in the inner radiation belts (Abel and Thorne, 1998;1999).This finding has sparked considerable interest, suggesting practical human control of the radiation belts (Inan et al., 2003) to protect Earth-orbiting systems from natural and manmade injections of high energy electrons.This manmade control of the Van Allen belts has been termed "Radiation Belt Remediation" (RBR).An RBR-system would probably involve a constellation of perhaps 10 satellites (Dupont, 2004), which would transmit VLF waves so as to vastly increase the loss-rate of energetic electrons by precipitation into the upper atmosphere, essentially dumping the HANE-produced artificial radiation belt.In order to be effective, an RBRsystem needs to flush the HANE-produced 1 MeV electrons in a short time scale, which has been suggested to be as low as ∼1-2 days or perhaps as long as 10 days (Papadopoulos, 2001). In this paper we consider the upper atmospheric consequences of an RBR system in operation.The dumping of high-energy relativistic electrons into the atmosphere will create intense energetic particle precipitation, leading to large ionisation changes in the ionosphere.Such precipitation is likely to lead to large changes in atmospheric chemistry and communications disruption, particularly for the case of HANE injections.Particle precipitation results in enhancement of odd nitrogen (NO x ) and odd hydrogen (HO x ).NO x and HO x play a key role in the ozone balance of the middle atmosphere because they destroy odd oxygen through catalytic reactions (e.g., Brasseur and Solomon, pp. 291-299, 1986).Ionisation changes produced by a 1 MeV electron will tend to peak at ∼55 km altitude (Rishbeth and Garriott, 1969).Ionisation increases occurring at similar altitudes, caused by solar proton events are known to lead to local perturbations in ozone levels (Verronen et al., 2005).Changes in NO x and O 3 consistent with solar proton-driven modifications have been observed (Seppälä et al., 2004, Verronen et al., 2005).It is well-known that the precipitation of electrons at high latitudes produce addition ionisation leading to increased HF absorption at high-latitudes (MacNamara, 1985), in extreme cases producing a complete blackout of HF communications in the polar regions. In order to estimate the significance of RBR-driven precipitation to the upper atmosphere, we consider two cases of an RBR-system operating to flush the artificial radiation belt injected by a Starfish Prime-type HANE over either 1 or 10 days.In the first case we consider the effect of a spacebased system, while in the second case we also consider a ground-based RBR system.This work examines the range of realistic potential environmental and technological effects due to this manmade precipitation, including changes to the ozone-balance in the middle atmosphere, and disruption to HF communication. HANE-produced trapped electrons The injection caused by the 1.4 megaton Starfish Prime HANE was extensively studied and reported upon in the open literature.While this HANE was undertaken with the express purpose of injecting energetic electrons into the radiation Ann.Geophys., 24,[2025][2026][2027][2028][2029][2030][2031][2032][2033][2034][2035][2036][2037][2038][2039][2040][2041]2006 www.ann-geophys.net/24/2025/2006/belts, it occurred only 4 years after the discovery of the radiation belts, and in the earliest days of the Space Age.It is expected that a carefully planned modern HANE caused by a relatively small nuclear weapon (∼15 kiloton) delivered to relatively low altitudes (a few 100 km) might produce rather similar effects to Starfish Prime.Nonetheless, as noted above, we consider the Starfish Prime HANE as an extreme example for which reliable information is available. Figure 1 shows a map of the important locations in our study.The Starfish Prime HANE occurred above Johnston Island (16.74 • N, 169.52 • W), shown in Fig. 1 by the square.A HANE produces energetic particles at one point in space.However, within seconds those particles are distributed along the geomagnetic field line which passes through the HANE location, and within a few hours those particles drift around the Earth.The latter motion will cause the HANE-produced injection to spread in longitude, and hence fill the L-shell with an artificial radiation belt of relatively constant flux.In practice, a HANE need not affect a single L-shell.While the Operation Argus HANE led to artificial radiation belts which were only ∼100 km thick, the Starfish Prime HANE injected electrons into a wide range of L-shells. At the lowest L-shells the Starfish Prime HANE injected energetic electrons into the radiation belt with an energy spectrum from 0-10 MeV that was linearly proportional to the equilibrium-fission spectrum (Hess, 1963;Van Allen et al., 1963) exp(−0.575E− 0.055E 2 ) , where E is in MeV.This represents the spectrum of electrons from thermal neutron fission of U 235 (Carter et al., 1959).This spectrum holds at L=1.25, but at higher L-shells the observed spectrum was found to be considerably softer.However, at L=1.25 the mean lifetime of ∼2 MeV electrons is very short (∼30 days) due to collisions with atmospheric constituents.In contrast, electrons of the same energy have a lifetime of ∼1 year at L∼1.5 (Hess, Fig. 5.24, 1968), for the case of "natural" loss processes unassisted by an RBRsystem.It is these long-lived electrons which will strongly reduce the survivability of LEO satellites, and hence would be the focus for a future RBR-system.We therefore focus on the RBR-driven artificial precipitation of HANE-injection electrons around L∼1.5. The injected electron spectrum softens as the bubble produced by the HANE expands (Hess, 1968), given by where E 0 and l 0 are the initial particle energy and bubble radius, and N(E 0 ) describes the energy spectrum at an energy E 0 .For L=1.57 a doubling in the bubble radius (i.e., l/l 0 =2) produced the experimentally observed softening in the spectrum (Hess, 1968).Five days after the explosion, in situ measurements of the Starfish Prime HANE-injected >0.5 MeV omnidirectional integral electron fluxes indicated that the equatorial flux was fairly constant with L, with a value of ∼10 9 electrons cm −2 s −1 from L=1.25-1.7.omnidirectional integral electron flux being equal at the two L-shells, and the softening with increasing L. As the energy spectrum softens with the expansion of the HANE-produced bubble, a larger relative low-energy population is expected at high L for the same omnidirectional integral electron flux. The "undisturbed" omnidirectional differential electron flux is shown by the dotted lines in Fig. 2, produced by the current standard trapped electron model for solar minimum conditions, ESA-SEE1 (Vampola, 1996).This is an update to AE-8MIN (Vette, 1991), and is described in more detail below.The small discontinuities (e.g., at ∼3 MeV at L=1.57) are present in both the ESA-SEE1 and AE-8MIN model, and have been checked against the AE-8MIN model which can be run online at the National Space Science Data Centre website. How does the Starfish Prime electron injection compare with those estimated for a carefully planned modern HANE, available from the open literature?The >1 MeV omnidirectional integral electron fluxes derived from Fig. 2 can be contrasted with the >1 MeV fluxes predicted for "normal" conditions from ESA-SEE1.These Starfish Prime HANE injections are ∼2.9×10 4 larger than AE-8MIN at L=1.25 and ∼5.7×10 2 larger at L=1.57.This is similar to the reported effect of a possible future HANE, i.e., a 3-4 order of magnitude increase in fluxes in the inner radiation belt (Parmentola, 2001;DuPont, 2004), especially when taking into account the suggested error estimates for AE-8 of "about a factor of 2" (Vette, pp. 4-2, 1991).However, it does suggest that the historic reports of the Starfish Prime HANE injections may not represent an extreme case, and instead is similar to the in-jections expected for a carefully planned future HANE using a much lower-yield weapon. In our study we will use the HANE-modified equatorial omnidirectional differential electron fluxes shown in Fig. 2, taken from Starfish Prime, to consider the downstream implications of an RBR-system operating to flush out this energetic population. Sodankylä ion chemistry model Using the Sodankylä Ion Chemistry (SIC) model we consider the atmospheric consequences of an RBR system in operation.Dumping high energy electrons into the atmosphere will change atmospheric chemistry through changes in HO x and NO x .The Sodankylä Ion Chemistry (SIC) model is a 1-D chemical model designed for ionospheric D-region studies, solving the concentrations of 63 ions, including 27 negative ions, and 13 neutral species at altitudes across 20-150 km. Our study made use of SIC version 6.6.0.The model has recently been discussed by Verronen et al. (2005), building on original work by Turunen et al. (1996) and Verronen et al. (2002).A detailed overview of the model was given in Verronen et al. ( 2005), but we summarize in a similar way here to provide background for this study. In the SIC model several hundred reactions are implemented, plus additional external forcing due to solar radiation (1-422.5 nm), electron and proton precipitation, and galactic cosmic radiation.Initial descriptions of the model are provided by Turunen et al. (1996), with neutral species modifications described by Verronen et al. (2002).Solar flux is calculated with the SOLAR2000 model (version 2.21) (Tobiska et al., 2000).The scattered component of solar Lyman-α flux is included using the empirical approximation given by Thomas and Bowman (1986).The SIC code includes vertical transport (Chabrillat et al., 2002) which takes into account molecular diffusion coefficients (Banks and Kockarts, 1973).The background neutral atmosphere is calculated using the MSISE-90 model (Hedin, 1991) and Tables given by Shimazaki (1984).Transport and chemistry are advanced in intervals of 5 or 15 min, while within each interval exponentially increasing time steps are used because of the wide range of chemical time constants of the modelled species. RBR forcing We use the SIC model to produce ionisation rates as outlined by Turunen et al. (1996) (based on the method of Rees (1989)).Hence we examine the altitude and time variation in neutral atmospheric species (e.g., NO x (N + NO + NO 2 ), HO x (OH + HO 2 ), and O x (O + O 3 )), as well as the electron density profile.The SIC model is run for the location of Sapporo, Japan (43 We then assume an operational space-based RBR-system which operates to "flush" the HANE-injected energetic electrons into the upper atmosphere, so that the flux of 1 MeV trapped electrons in a magnetic flux tube is driven down to within twice the ambient levels over a specified time period.In order to determine the precipitation into the upper atmosphere caused by the RBR-system, we need to consider the HANE-modified flux tube electron population.The flux tube electron population at a given L and energy E is found by first determining the differential number of electrons in a magnetic flux tube of 1 square centimetre in area at the equator plane, N(E, L), given by N (E, L)= π/2 0 j eq (α eq , E)τ b (α eq , E) 2π cos α eq sin α eq dα eq , (3) where j eq is the HANE-modified differential directional electron flux in the equatorial plane, α eq is the equatorial electron pitch angle, and τ b is the full bounce period (Voss et al., 1998).The differential number of electrons in a tube having 1 cm 2 area perpendicular to B at the top of the atmosphere, N 100 km (E, L), is obtained by multiplying the equatorial density from Eq. ( 3) by the ratio of the magnetic field magnitude at 100-km altitude to that at the equator.This provides the initial flux tube electron population at a given energy in a magnetic tube having one square centimetre cross section perpendicular to B at 100 km. We assume that the HANE-injected electrons will have a pitch-angle distribution which is much like that of the undisturbed radiation belt population, as reported for the Starfish Prime-injected electrons (Teague and Vette, 1972).The equatorial differential directional electron flux is determined by combining the differential omnidirectional electron fluxes of the ESA-SEE1 electron radiation belt model with the CRRES-satellite observed pitch angle dependences (Vampola, 1996) for 3<L<6.75 and those from the earlier empirical AE-5 radiation belt model (Teague and Vette, 1972).Extrapolations and interpolations have been employed to smoothly join the pitch angle dependences between these two models.The ESA-SEE1 model is an update to AE-8 MIN in which neural networks were trained to predict the CRRES Medium Electrons A (MEA) electron spectrometer flux at five energies (148, 412, 782, 1178, and 1582 keV) at each of six L-values (3, 4, 5, 6, 6.5, and 7) using the dailysum K p .Average fluxes from the trained networks agree with the MEA data to within 15% (mission-average, worst case network).Published spectra from the OV1-19 electron spectrometer (Vampola et al., 1977) were used to extend the The differential number of electrons in a tube at L=1.57 having 1 cm 2 area perpendicular to B at 100 km altitude, N 100 km (E, L), after the injection of energetic electrons from a HANE (sold line), and that predicted for undisturbed conditions from the ESA-SEE1 trapped electron model (dash-dot line). neural network energy spectra down to 40 keV and up to 7 MeV.The ESA-SEE1 model is a major improvement upon AE-8 at high energies.In contrast to ESA-SEE1 for energies >2 MeV the "AE-8 model is not based on reliable data and is an extrapolation of unknown validity" (Vampola, 1996). Figure 3 shows the L=1.57flux tube electron population, N 100 km (E, L) (solid line), after the injection of energetic electrons from a HANE as shown in Fig. 2.This is contrasted with the ESA-SEE1 model prediction for the same flux tube population in undisturbed conditions (dash-dot line).For most energies the ratio of the HANE-injected flux tube population to the undisturbed population is 10 4 -10 5 , although this clearly increases a great deal for energies >3 MeV where the undisturbed population is extremely small. We assume that the RBR-system will precipitate the HANE-injected electrons with an e-folding time such that the flux tube population at 1 MeV, N 100 km (E=1 MeV, L=1.57), is decreased to within twice the ESA-SEE1 value over a specified time period.For example, for the HANE-injected electrons to be returned to the normal population-level over 1 day, an e-folding time of 0.08 day (∼2 h) is required, while to achieve the same effect over 10 days an e-folding of time of 0.8 days (∼19 h) is needed.We make the rather gross assumption that the RBR-driven precipitation rate for 1 MeV electrons will be the same for all other energies.In practice, the loss rate will be considerably more complex, and will be an important feature in the design of the RBR-system.Nonetheless, this approximation allows us to provide an estimate of the impact of the precipitation. Figure 4 shows how the differential omnidirectional trapped flux at L=1.57 caused by the HANE will change with our assumed RBR-system, for the case where the HANE-injected electrons are successfully flushed into the upper atmosphere over 1 day.The left-hand panel of Fig. 4 shows the changing differential omnidirectional trapped flux, while the right-hand panel displays the ratio of the differential omnidirectional trapped flux to that predicted by the ESA-SEE1 model.Note that the energy range in the righthand panel has a maximum value of 5 MeV.While the HANE injects electrons with energies >5 MeV, as shown in Fig. 2, there is no trapped population at these energies.This is also the reason for the extremely high ratio between the HANE-injected and "normal" fluxes in this panel for energies ∼5 MeV.For the other time-scale we consider, where the HANE-injected electrons are successfully flushed into the upper atmosphere over 10 days, Fig. 4 will be identical except that the time scale on both plots is scaled by a factor of 10. The electrons lost from the flux tube are assumed to be precipitated into the upper atmosphere of both conjugate hemispheres, such that half the electrons lost are precipitated above the SIC calculation point above the city of Sapporo.These fluxes are used as an input to the SIC model, from which ionisation rates are calculated and the response of atmospheric chemistry determined. Middle atmosphere response to space-based RBR precipitation Figure 5 shows the SIC-calculated changes due to the flushing of HANE-injected electrons at all local times over 3 days, i.e., the precipitation fluxes shown in Fig. 4 and the subsequent atmospheric recovery.Here the precipitation process is assumed to start at 12:00 LT (03:00 UT), i.e., at local noon. The RBR-forced calculation is termed the "B"-run.In order to interpret the RBR-driven changes, a SIC modelling run has also been undertaken without any RBR-forcing (i.e., zero electron fluxes), termed the "C"-run, or "control".The results of this no-forcing "control" SIC-run, shown in Fig. 6 The top panel of Fig. 5 shows the effect of the RBRforcing on electron number density, shown as the log 10 of the ratio between the forced and control runs.The RBR-forcing leads to a 2-3.5 order of magnitude increase in electron number density beyond normal levels, over a wide altitude range (∼40-80 km).These very large electron number density Ann.Geophys., 24,[2025][2026][2027][2028][2029][2030][2031][2032][2033][2034][2035][2036][2037][2038][2039][2040][2041]2006 www.ann-geophys.net/24/2025/2006/The lifetime of odd nitrogen (NO) is strongly decreased by sunlight, and hence the precipitation might be expected to have a much less significant effect if the largest fluxes occur in sunlit locations.In order to test this we repeated the SIC calculations described above for an RBR-forcing start at 19:00 LT, i.e. around sunset. Figure 7 shows the changes in electron number density, NO x , HO x , and O x , to be contrasted with Fig. 5.While there are some small differences between the timing and evolution of the mesospheric changes shown in these two figures, the altitudes and magnitudes of the changes are rather similar.There is not a strong depen-dence on the RBR-forcing start times in LT, and as such the conclusions we draw from our calculations above Sapporo should apply equally well for all the locations on this L-shell into which RBR-produced precipitation will be driven. Figure 8 considers the case for a 10-day operation time.While the same amount of "total" injected flux is precipitated in this case as in the 1-day case, it is spread out over considerably more in time, and hence with smaller peak fluxes.However, this does not necessarily lead to smaller mesospheric changes in its longitude sector. Figure 8 .In this case the RBR-system is assumed to flush the HANE-injected electrons into the upper atmosphere over 10 days. RBR-system flushes the HANE-injected electrons into the upper atmosphere over the longer time scale of 10 days, starting at 12:00 LT on day 1.Otherwise the format of this plot is identical to Fig. 7.The peak magnitudes in the RBR enhancements to NO x and HO x leading to depletions in O x , are much the same as the previous cases, the primary difference being that some effect lingers in O x depletion for 4-5 days after the RBR-system begins operations.RBR-driven changes in electron density persist for ∼8 days, after which the remaining electron density increases resemble the longlived NO produced change seen in Fig. 5 and 7. The RBR-forced neutral chemistry changes seen in Figs. 5, 7 and 8 are significant during the timescale of the precipitation, but are generally not long-lasting.The magnitude, Ann.Geophys., 24,[2025][2026][2027][2028][2029][2030][2031][2032][2033][2034][2035][2036][2037][2038][2039][2040][2041]2006 www.ann-geophys.net/24/2025/2006/time-scales, and altitudes of these changes are rather similar to the NO x /HO x enhancements and O 3 depletions calculated by the SIC model for large solar proton events (Verronen et al., 2002(Verronen et al., , 2005)), confirmed by experimental observations using the GOMOS satellite-bourn instrument (Seppälä et al., 2005;Verronen et al., 2005) and subionospheric VLF propagation measurements (Clilverd et al., 2005(Clilverd et al., , 2006)).Thus while RBR-forced precipitation should be expected to be a rare occurrence, even if it was used to mitigate the effects of intense natural injections while providing a defence against possible HANE-injections, the effects are no more significant than large solar proton events.Solar protons entering the Earth's magnetosphere are guided by the Earth's magnetic field and precipitate into the polar cap areas (Rodger et al., 2006).Solar proton events can therefore produce NO increases inside the polar vortex during the Antarctic winter, when the low levels of solar illumination lead to long-lived NO enhancements and hence significant depletions of middle atmospheric ozone.In contrast, the RBR-forced precipitation will occur at low-to mid-latitudes and is unlikely to reach polar latitudes, such that the large NO enhancements will generally have short lifetimes.Even in the extreme RBR-system considered here, our calculations indicate that the effects on the neutral constituents of the middle atmosphere will be less than that which occur in the polar regions during large solar proton events. Contrast with ground-based RBR-system Publications discussing a practical space-based RBR-system have in part been triggered by the suggestion that existing manmade very low frequency (VLF) communications transmitters on the Earth's surface may drive the most significant losses from the inner radiation belts (Abel and Thorne, 1998;1999).Such discussions tend to refer to ground-based transmitters acting to test the feasibility of possible spacebased systems (e.g., Dupont, 2004).However, one might also envisage a deployed RBR-system using ground-based VLF transmitters.In order to estimate the possible effect of a ground-based system, we again take the extreme situation, in this case a single RBR-transmitter.We assume that this system can successfully flush the HANE-injected electrons into the upper atmosphere over 10 days.Experimental observations of electron precipitation due to wave-particle interactions from ground-based VLF transmitters have shown that the interactions are likely to be effective for only ∼7 h per day (23:00-06:00 LT, i.e., local nighttime) over 30 • in longitude centred on the transmitter longitude (Datlowe et al., 1995), unlike the case for a system of space based transmitters (which we earlier assumed were regularly spaced).The variation in effective power of a ground-based transmitter has been estimated to have an exponential drop-off with longitude with 15 • folding distance, so that the average wave power is 0.63 of the maximum (Abel and Thorne, 1998).In order to flush all the HANE-injection electrons over 10 days, the precipitation fluxes around the transmitter will be higher than in the space-based case, due to the spatial and LT limits. Figure 9 shows how the differential omnidirectional trapped flux at L=1.57 caused by the HANE will change with our assumed ground-based RBR-system driving a depletion over 10 days, but is otherwise in the same format as Fig. 4. The SIC calculated mesospheric changes due to this precipitation are shown in Fig. 10, again in the same format as Fig. 5. Again a 12:00 LT RBR-start is taken, with the groundbased RBR transmitter assumed to be located at the same longitude as Sapporo, our calculation point.While the electron density changes in Fig. 10 are of similar magnitudes to those shown in Figs. 6 and 7, the long-lasting RBR-driven precipitation produces larger NO x enhancements (by a factor of ∼5) and also deeper O x depletions (depletions down to ∼25% of control).This is as the time-integrated energy deposited by electron precipitation is not the same between the two cases, as in the ground-based transmitter case the www.ann-geophys.net/24/2025/2006/Ann.Geophys., 24, 2025-2041, 2006 electron precipitation is confined in longitude.While this effect is somewhat larger than that shown in the earlier cases, it is still within the expected effects of large solar proton events.The magnitude, duration and altitude range of the initial O x depletion is similar to that calculated for the 29,500 proton flux unit event of 29 October 2003 (Verronen et al., Fig. 3, 2005).The O x depletions become progressively less significant for each day of RBR-operation, lasting perhaps ∼4-5 days of the 10-day operational period assumed. Effect on HF communication It is well known that the additional ionisation produced by solar flares can lead to "Shortwave fadeouts", also known as "HF blackouts", where high frequency (HF) radio waves (3-30 MHz) suffer from increased attenuation caused by absorption in the ionospheric D-region (Davies, 1990).For the case of solar flares, the HF radio blackout will primarily affect the sunlit sector of the Earth, stopping radio contact with mariners and en-route airplanes (Jones et al., 2005).According to NOAA's Space Environment Center, for severe cases, corresponding to peak X-ray fluxes in the 0.1-8 nm range >10 −3 Wm −2 (i.e., >X10), satellite based navigation systems may also suffer increased errors.As these solar flare produced disruptions are caused by additional ionisation in the D-region, the precipitation driven by the RBR-system might also lead to similar disruptions.Unlike the solar flare case, where the effect is limited to the sunlit sector of the Earth, the disruptions cased by the RBR-system could affect both the sunlit and night sectors of the Earth. In order to estimate the HF attenuation levels, we consider the variation with time of the Highest Affected Frequency (HAF) during the RBR-forced precipitation.The HAF is defined as the frequency which suffers a loss of 1 dB during vertical propagation from the ground, through the ionosphere, and back to ground.Radio frequencies lower than the HAF suffer an even greater loss.According to the D Region Absorption Documentation provided by the NOAA Space Environment Center (http://www.sec.noaa.gov/rt plots/dregionDoc.html)and based on the Space Environmental Forecaster Operations Manual (1997), the empirically derived relationship between HAF and solar 0.1-0.8nm X-ray flux is: where HAF is given in MHz and the X-ray flux is in units of W m −2 .As an example, an X20 flare, which has peak 0.1-0.8nm X-ray fluxes of 2.0 mW m −2 , produces a HAF of 38 MHz.Flares of this magnitude lead to "extreme" Radio Blackouts, with essentially no HF radio contact with mariners or en-route aviators, and increased satellite navigation errors.NOAA has defined a Space Weather Scale for Radio Blackouts (Poppe, 2000), ranging from R1 describing a minor disruption due to an M1 flare (10 µW m −2 peak 0.1-0.8nm X-ray flux) to R5 for the extreme blackout case described above.We will employ this scale to provide an indication of the severity of the RBR-induced blackouts. The response of the ionospheric D-region electron density to solar flares has been studied by use of subionospheric VLF propagation (e.g., Thomson et al., 2004;2005).This work has shown that the electron density responds in a consistent way, providing a link between the electron density changes and X-ray fluxes.These authors characterise the D region through a Wait ionosphere defined by just two parameters, the "reflection height" H , in kilometres, and the exponential sharpness factor, β, in km −1 (Wait and Spies, 1964), using the relationship: ] . (5) Figure 9 of Thomson et al. (2005) provides plots of the values of β and H required to reproduce experimentally observed absolute amplitude and phase changes driven by peak solar flare X-ray fluxes.By fitting Wait ionosphere β and H parameters to the SIC calculated electron densities we have used Fig. 9b of Thomson et al. (2005) to estimate the equivalent X-ray fluxes, and hence determine the HAF likely due to the increased ionization produced by the RBR-forced precipitation.Note that the observations of Thomson et al. (2005) suggest that there is little measurable change in ionospheric D-region electron densities for 0.1-0.8nm X-ray flux less than ∼1.5 µW m −2 , at least for observations based on subionospheric VLF propagation.In these cases, which correspond to very minor ionospheric disturbances (or undisturbed conditions), the HAF has been set to zero. The HF blackout estimates for the 1-day space based RBR case (Fig. 5) are shown in Fig. 11.The upper panel indicates the change in the Wait ionosphere H parameter.The heavy line represents the RBR case, while the light dotted line represents the unforced case taken from the control runs.At the start of the RBR forcing (12 LT) there is a very large change in H ' when compared with the undisturbed situation.The large difference between the forced and unforced H ' values lasts until dawn on the second day, when the variation in H ' shifts into a new, but stable, regime because of the increased NO x .This is consistent with the behaviour of the electron density seen in Fig. 5.The middle panel of Fig. 11 shows the equivalent peak X-ray power in the 0.1-0.8nm wavelength range which would cause the same H value during a solar flare, as determined from Fig. 9 of Thomson et al. (2005).There is a very large change in equivalent peak X-ray power over the first 12 h, representing the time period when the RBR-forced changes to the electron density are the most significant.The lower panel of this figure presents the Highest Affected Frequency calculated using Eq. ( 4) from the equivalent peak X-ray power shown in the middle panel.The NOAA Radio Blackout Scale has been added for comparison.At the start of the RBR-forcing the HF blackout level is at "Extreme" levels equivalent to the effects of a X20 solar flare (or larger).Such events are very rare, on average less than once per 11-year solar cycle.Over the course of ∼6 h the HF blackout level drops from "Extreme" to "Severe", which occurs on average about 8 times per solar cycle.The HAF then rapidly decreases to be "Minor" about 9-10 h after the start of the RBR-forcing.The operation of an RBRsystem would clearly lead to unusually intense HF blackouts for ∼9 h, producing large scale disruptions to radio communication and navigation systems.While the system would produce an unusually intense HF blackout, our modelling indicates that the changes in O x levels would be well within those caused by natural forcing, and that both changes would be short-lived.In the extreme, and hopefully unlikely case of a HANE, the disruption caused during RBR operation would probably be viewed as acceptable.However, this might not be the case for an RBRoperating to flush an intense natural injection, although is clearly a societal rather than a scientific matter.We therefore consider whether a longer operational time would lead to smaller levels of HF blackout.Figure 12 examines the HF absorption effects caused by an RBR-system operating over a 10-day timescale, either space based (Fig. 8) or ground based (Fig. 10).In the case of the space-based RBR, the HF blackout levels are smaller than the 1-day case presented in Fig. 11 above, but remain above "Strong" for parts of several days.It appears that an RBR-system operating over a longer time scale would be more disruptive than a system which can operate very rapidly.While a ground-based system produces more short-lived changes to HF blackout levels, due to the limitation to nighttime operation, the peak HF absorption levels are greater than that for a 10-day space-based operation, producing "Moderate" to "Extreme" HF blackout conditions for parts of 6-7 days.Again, it seems likely that any of these system configurations would be acceptable for the extreme case of a high-altitude nuclear explosion, but possibly not for mitigating the effects of an intense natural injection.It appears that HF blackout effects may be the most significant disruptions which would be caused by the operation of an RBR-system.Ann.Geophys., 24,[2025][2026][2027][2028][2029][2030][2031][2032][2033][2034][2035][2036][2037][2038][2039][2040][2041]2006 www.ann-geophys.net/24/2025/2006/ HANE injections at different locations What if the HANE was at a different location than that of Starfish Prime?Most of the publications which discuss the threat posed by HANE "pumping-up" the radiation belts mention different locations with differing geomagnetic latitudes.In addition, nuclear-tipped anti-ballistic missile defence systems are currently operational around Moscow and were also briefly deployed in the United States.These systems are designed to destroy incoming warheads by a nuclear explosion at altitudes >100 km, and could also lead to a HANE-injection into the radiation belts, as an unintended consequence of protecting the target.On the basis of Starfish Prime, we argue that HANE occurring at L<1.25 will produce equilibrium-fission spectra for L<1.25 and softer spectra at higher L-shells, in a similar way to Starfish Prime.Such a HANE will be rather similar to that from Starfish Prime, and any RBR-mitigation efforts will be well represented by the calculations presented in this study.HANE occurring at higher L-shells will move the hard-spectra trapped flux to higher L-shells.On the basis of the conclusions drawn in Fig. 4, we argue that the harder spectra will, if flushed by an RBR-system, produce effects much like a solar proton event with a very hard spectrum.On 20 January 2005 an extremely energetic solar proton event occurred: the flux of extremely high energy solar protons (>100 MeV from GOES) was of the same order as in the well known October 1989 SPE (e.g.Reid et al., 1991;Jackman et al., 1995), whilst the lower energy fluxes remained at moderate levels (>10 MeV proton flux peak 1860 pfu while the >100 MeV protons peaked at 652 pfu).However, a study of this extreme solar proton event using the SIC model indicated that there was little additional ozone loss at stratospheric altitudes, even though a significant population of protons would reach these altitudes at these high-latitudes and cause in-situ changes (Seppälä et al., 2006).Ozone loss in the stratosphere is determined by cycles of NO x , Cl x , Br x , and HO x depending on the altitude region.SPE-induced changes in the stratosphere are due to an increase in NO x .However, the modelling concluded that the SPE-forced NO x production even in this very hard event is not significant in the middle and lower stratosphere when contrasted with the typical NO x population.It appears, therefore, that the calculations presented in our study should generally be representative of a variety of different HANE and RBR locations.One possible mechanism by which a high-latitude HANE followed by RBR operation could lead to stratospheric O 3 changes, involves strong downward transport due to the polar vortex.During the polar winter odd nitrogen produced by energetic particle precipitation can survive, and in the presence of strong polar vortex conditions, descend into the stratosphere (Solomon et al., 1982;Siskind, 2000).During the northern polar winter of 2003-2004 these conditions existed; Randall et al. (2005) reported unprecedented levels of spring-time stratospheric NO x as a result.However, in general both significant intense precipitation fluxes and a strong polar vortex are needed to transport odd nitrogen to stratospheric altitudes (Clilverd et al., 2006 2 ).In the case of a high-latitude HANE, the RBR operation would fill the role of the geomagnetic storm producing intense precipitation fluxes, while a strong polar vortex would still be necessary to transport NO x to lower altitudes. Summary High altitude nuclear explosions (HANEs) can produce large scale injections of relativistic particles into the inner radiation belts.In some cases, geomagnetic storms are also associated with increases in the inner belt relativistic electron population.It is recognised that these large increases in >1 MeV trapped electron fluxes would shorten the operational lifetime of Low Earth Orbiting satellites, threatening a large population of valuable satellites.Due to the lifetime of the injected electrons, any manned spaceflights would also need to be delayed for a year or more.Therefore, studies are being undertaken to bring about practical human control of the radiation belts to protect Earth-orbiting systems from natural and manmade injections of relativistic electrons, termed "Radiation Belt Remediation" (RBR).In this paper we have examined the upper atmospheric consequences of an RBR system in operation. In order to estimate the significance of RBR-driven precipitation to the upper atmosphere, we considered an RBRsystem operating to flush the artificial radiation belt injected by a Starfish Prime-type HANE over either 1 or 10 days, assuming a space-based system operating over all local times.For the longer operation time we also considered a ground-based RBR system.The RBR-forced neutral chemistry changes, leading to NO x enhancements and O x depletions, are significant during the timescale of the precipitation, but are generally not long-lasting.The magnitude, time-scales, and altitudes of these changes are rather similar to the NO x enhancements and O 3 depletions calculated by the SIC model for large solar proton events (Verronen et al., 2002(Verronen et al., , 2005)).Thus while RBR-forced precipitation should be expected to be a rare occurrence, even if it was used to mitigate the effects of intense natural injections while providing a defence against possible HANE-injections, the effects are no more significant than large solar proton events.The primary difference between the RBR-forced changes and those driven by solar proton events is that the RBR-forced precipitation will occur at low-to mid-latitudes and are unlikely to reach polar latitudes.However, in this case the large NO x and HO x enhancements will generally have short lifetimes, such that even for a fairly extreme case of RBR-system operation, and the significance to O 3 levels will be less than that which occurs in the polar regions during large solar proton events. In contrast, RBR-operation will lead to unusually intense HF blackouts for about the first half of its operation time, producing large scale disruptions to radio communication and navigation systems.Both space-based and ground-based RBR systems would create HF disruptions, although the duration and local time of the effect is dependent on the system case.It is not clear that an RBR-system operating over 10 days would produce lower levels of HF disruption than if operated over 1 day.While the neutral atmosphere changes are not particularly important, HF disruptions could be an important area for policy makers to consider, particularly for the remediation of natural injections. Fig. 1 . Fig. 1.Map showing the important locations in our study.The original Starfish Prime HANE occurred above Johnston Island (square), while our SIC modelling points are above the city of Sapporo (circle at L=1.57).The 100 km footprints of the IGRF determined L=1.25 and L=1.57are also shown. Figure 2 shows the equatorial omnidirectional differential electron flux at L=1.25 and L=1.57based on these observations of the injection from the Starfish Prime HANE.Contours showing the International Geomagnetic Reference Field (GRF) determined footprints of L=1.25 and L=1.57 at 100 km altitude are shown in Fig. 1.Note the softening of the trapped equatorial electron spectrum from the equilibrium-fission spectrum at L=1.25 to the considerably different spectrum at L=1.57 due to the doubling in the HANE-produced bubble radius.Note also that the L =1.57 spectrum appears to have ∼4 times more flux at the lowest energies than the L=1.25 spectrum.This is a consequence of the observed >0.5 MeV www.ann-geophys.net/24/2025/2006/Ann.Geophys., 24, 2025-2041, 2006 Fig. 2 . Fig. 2. The post-Starfish Prime HANE radiation belt environment in the inner radiation belt, showing the equatorial omnidirectional differential electron flux.Note the softening of the trapped equatorial electron spectrum from an equilibrium-fission spectrum (at L=1.25) with the expansion of the HANE produced bubble The dotted lines show the ambient trapped population from the ESA-SEE1 model. Fig Fig. 3.The differential number of electrons in a tube at L=1.57 having 1 cm 2 area perpendicular to B at 100 km altitude, N 100 km (E, L), after the injection of energetic electrons from a HANE (sold line), and that predicted for undisturbed conditions from the ESA-SEE1 trapped electron model (dash-dot line). Fig. 4 . Fig. 4. The changing HANE-injected differential flux at L=1.57 during the operation of the assumed RBR-system.Here the injected population drops to ambient levels at 1 MeV in 1 day.a).The changing differential omnidirectional trapped flux.b).The logarithm of the ratio of the differential omnidirectional trapped flux to the ESA-SEE1 predicted ambient level. , allow the calculation of "normal" conditions, and hence an indication of the significance of the changes.The top panel of Fig.6shows the normal diurnal variation in electron number density, the second panel shows NO x number density (N + NO + NO 2 ), the third panel HO x number density, and the lower panel shows O x (O + O 3 ).We use NO x and O x rather than NO and O 3 as there are substantial diurnal variations in both the latter populations, which would lead to distracting features in the relative change plots.In all cases these panels have units of log 10 [cm −3 ].The atmospheric changes modelled in our study mostly occur in the mesosphere, as determined by energy spectra of the precipitating electrons.In the mesosphere changes in O 3 (or O x ) are primarily caused by increases in HO x , although NO x does play some role near 50 km and is important in O x chemistry in the upper stratosphere.Ionisation-produced HO x leads to the O x changes shown.Superimposed upon the panels of Fig.6is a black line indicating the solar zenith angle (plotted in degrees where 1 • =1 km on the altitude scale), and hence the diurnal cycle, where local midnight is shown by the highest points in the curve and the dawn/dusk transition by the horizontal black line. Fig. 5 . Fig. 5.The effect of the RBR-forced precipitation starting at 12 LT as calculated using the SIC model, due to the precipitating fluxes shown in Fig. 4. The top shows the RBR-forced ("B"-run) change in electron densities, relative to normal conditions ("C"-run), while the lower panels show changes in NO x and O x . Fig. 6 . Fig. 6.The results of a SIC modelling run without any RBR-forcing (i.e., zero precipitating electron fluxes), showing the calculated "normal" conditions.Superimposed is a black curve indicating the solar zenith angle, and hence the diurnal cycle, where local midnight is shown by the highest points in the curve and the dawn/dusk transition by the horizontal black line. Fig. 7 . Fig. 7.The effect of the RBR-forced precipitation starting at 19 LT, to be contrasted with Fig. 5. Figure8considers the case for a 10-day operation time.While the same amount of "total" injected flux is precipitated in this case as in the 1-day case, it is spread out over considerably more in time, and hence with smaller peak fluxes.However, this does not necessarily lead to smaller mesospheric changes in its longitude sector.Figure8shows the SIC calculated mesospheric changes for the case where the Fig. 8 . Fig. 8.The effect of the RBR-forced precipitation starting at 12 LT, in the same format as Fig. 7.In this case the RBR-system is assumed to flush the HANE-injected electrons into the upper atmosphere over 10 days. Fig. 9 . Fig. 9.The changing HANE-injected differential flux at L=1.57 during the operation of an assumed ground based RBR-system, in the same format as Fig. 4. Fig. 10 . Fig. 10.The effect of the precipitation driven by a single station ground based RBR system-forced.The format is as shown in Fig. 5 driven by the precipitation fluxes shown in Fig. 9. Fig. 11 . Fig. 11.Estimate of the severity of the RBR-forced HF blackout for the case shown in Fig. 5.The upper panel indicates the change in the Wait ionosphere H parameter for the forced (heavy line) and unforced (light line) cases.The middle panel shows the equivalent peak X-ray power in the 0.1-0.8nm range which would cause the same H during a solar flare.The lower panel is the Highest Affected Frequency calculated from the equivalent peak X-ray power.The NOAA Radio Blackout Scale has been added for comparison. Fig. 12 . Fig. 12.The Highest Affected Frequency for the RBR-cases shown in Figs. 8 and 9, in the same format as the lower panel in Fig. 11. L=1.57 contour for which Starfish Prime measurements exist, as discussed in Sect. 2. The choice of Sapporo as the SIC-modelling point is essentially arbitrary, and was selected in recognition of the city having hosted the IUGG conference in 2003, when we first discussed this collaboration.
v3-fos-license